query: dict
pos: dict
neg: dict
{ "doc_id": "17280413", "title": "Deep Convolutional Neural Network for 6-DOF Image Localization", "abstract": "We present an accurate and robust method for six degree of freedom image localization. There are two key points to our method: 1) automatic immense photo synthesis and labeling from a point cloud model, and 2) pose estimation with deep convolutional neural network (ConvNet) regression. Our model directly regresses 6-DOF camera poses from images, accurately describing where and how each was captured. We achieved an accuracy within 1 meter and 1 degree on our outdoor dataset, which covers about 20,000 m on our school campus. Unlike previous point cloud registration solutions, our model supports low resolution images (i.e. 224×224 in our settings), and is tiny in size when finished training. Moreover, in pose estimation, our model requires O(1) time & space as the trainset grows. We will show the importance to localization of hundreds of thousands of generated and self-labeled \"photos\" derived from a short video. We will show our model's robustness to illumination and seasonal variance, which usually defeats methods that leverage image feature descriptors like SIFT. Furthermore, we will show the ability to transfer our model trained on one scene to another, and the gains in accuracy and efficiency.", "sha": "5b0f9417de6b616199c6bd15b3ca552d46973de8", "corpus_id": 17280413 }
{ "doc_id": "11836057", "title": "Fast image-based localization using direct 2D-to-3D matching", "abstract": "Recently developed Structure from Motion (SfM) reconstruction approaches enable the creation of large scale 3D models of urban scenes. These compact scene representations can then be used for accurate image-based localization, creating the need for localization approaches that are able to efficiently handle such large amounts of data. An important bottleneck is the computation of 2D-to-3D correspondences required for pose estimation. Current state-of-the-art approaches use indirect matching techniques to accelerate this search. In this paper we demonstrate that direct 2D-to-3D matching methods have a considerable potential for improving registration performance. We derive a direct matching framework based on visual vocabulary quantization and a prioritized correspondence search. Through extensive experiments, we show that our framework efficiently handles large datasets and outperforms current state-of-the-art methods.", "sha": "8a53bf8bd6e6c42a2e1b89020ab638ba840df17c", "corpus_id": 11836057 }
{ "doc_id": "32550239", "title": "Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories", "abstract": "Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn in an incremental manner and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been tested on more than a handful of object categories. We present a method for learning object categories from just a few training images. It is quick and it uses prior information in a principled way. We test it on a dataset composed of images of objects belonging to 101 widely varied categories. Our proposed method is based on making use of prior information, assembled from (unrelated) object categories which were previously learnt. A generative probabilistic model is used, which represents the shape and appearance of a constellation of features belonging to the object. The parameters of the model are learnt incrementally in a Bayesian manner.", "sha": "ed9db7b20e019cdb1c7db8b7921221ee2d9f36e2", "corpus_id": 32550239 }
{ "doc_id": "13061197", "title": "Analysis of Functional MRI Data Using Mutual Information", "abstract": "A new information-theoretic approach is presented for analyzing fMRI data to calculate the brain activation map. The method is based on a formulation of the mutual information between two waveforms: the fMRI temporal response of a voxel and the experimental protocol timeline. Scores based on mutual information are generated for all voxels and then used to compute the activation map of an experiment. Mutual information is employed for fMRI analysis because it has been shown to be robust in quantifying the relationship between any two waveforms. More importantly, our technique takes a principled approach toward calculating the brain activation map by making few assumptions about the relationship between the protocol timeline and the temporal response of a voxel. This is important especially in fMRI experiments where little is known about the relationship between these two waveforms. Experiments are presented to demonstrate this approach of computing the brain activation map. Comparisons to other more traditional analysis techniques are made and the results are presented. This work was supported by ONR grant N J and by subcontract GC NGD from Boston University under the AFOSR Multidisciplinary Research Program on Reduced Signature Target Recognition.", "sha": "61def1f842da4888dde59dbb918e37a2a5f01cb9", "corpus_id": 13061197 }
{ "doc_id": "2430413", "title": "Multi-modal volume registration by maximization of mutual information", "abstract": "A new information-theoretic approach is presented for finding the registration of volumetric medical images of differing modalities. Registration is achieved by adjustment of the relative position and orientation until the mutual information between the images is maximized. In our derivation of the registration procedure, few assumptions are made about the nature of the imaging process. As a result the algorithms are quite general and can foreseeably be used with a wide variety of imaging devices. This approach works directly with image data; no pre-processing or segmentation is required. This technique is, however, more flexible and robust than other intensity-based techniques like correlation. Additionally, it has an efficient implementation that is based on stochastic approximation. Experiments are presented that demonstrate the approach registering magnetic resonance (MR) images with computed tomography (CT) images, and with positron-emission tomography (PET) images. Surgical applications of the registration method are described.", "sha": "12315e202fc930628561bb549f17a634711a1c3a", "corpus_id": 2430413 }
{ "doc_id": "5695116", "title": "Time Series Based Link Prediction", "abstract": "Link prediction is a task in Social Network Analysis that consists of predicting connections that are most likely to appear considering previous observed links in a social network. The majority of works in this area only performs the task by exploring the state of the network at a specific moment to make the prediction of new links, without considering the behavior of links as time goes by. In this light, we investigate if temporal information can bring any performance gain to the link prediction task. A traditional approach for link prediction uses a chosen topological similarity metric on non-connected pairs of nodes of the network at present time to obtain a score that is going to be used by an unsupervised or a supervised method for link prediction. Our approach initially consists of building time series for each pair of non-connected nodes by computing their similarity scores at different past times. Then, we deploy a forecasting model on these time series and use their forecasts as the final scores of the pairs. Our preliminary results using two link prediction methods (unsupervised and supervised) on co-authorship networks revealed satisfactory results when temporal information was considered.", "sha": "8941d2690c14868e5dcc150b83a84a3beca6395d", "corpus_id": 5695116 }
{ "doc_id": "604631", "title": "Structure and motion estimation from rolling shutter video", "abstract": "The majority of consumer quality cameras sold today have CMOS sensors with rolling shutters. In a rolling-shutter camera, images are read out row by row, and thus each row is exposed during a different time interval. A rolling-shutter exposure causes geometric image distortions when either the camera or the scene is moving, and this causes state-of-the-art structure and motion algorithms to fail. We demonstrate a novel method for solving the structure and motion problem for rolling-shutter video. The method relies on exploiting the continuity of the camera motion, both between frames, and across a frame. We demonstrate the effectiveness of our method by controlled experiments on real video sequences. We show, both visually and quantitatively, that our method outperforms standard structure and motion, and is more accurate and efficient than a two-step approach, doing image rectification and structure and motion.", "sha": "3f3a2f09680d7f7b812bbf65f389de2041da335b", "corpus_id": 604631 }
{ "doc_id": "16328480", "title": "Removing rolling shutter wobble", "abstract": "We present an algorithm to remove wobble artifacts from a video captured with a rolling shutter camera undergoing large accelerations or jitter. We show how estimating the rapid motion of the camera can be posed as a temporal super-resolution problem. The low-frequency measurements are the motions of pixels from one frame to the next. These measurements are modeled as temporal integrals of the underlying high-frequency jitter of the camera. The estimated high-frequency motion of the camera is then used to re-render the sequence as though all the pixels in each frame were imaged at the same time. We also present an auto-calibration algorithm that can estimate the time between the capture of subsequent rows in the camera.", "sha": "70614845363764ac3a68a2fb25c91c3fa22f139f", "corpus_id": 16328480 }
{ "doc_id": "16960812", "title": "Graph stream classification using labeled and unlabeled graphs", "abstract": "Graph classification is becoming increasingly popular due to the rapidly rising applications involving data with structural dependency. The wide spread of the graph applications and the inherent complex relationships between graph objects have made the labels of the graph data expensive and/or difficult to obtain, especially for applications involving dynamic changing graph records. While labeled graphs are limited, the copious amounts of unlabeled graphs are often easy to obtain with trivial efforts. In this paper, we propose a framework to build a stream based graph classification model by combining both labeled and unlabeled graphs. Our method, called gSLU, employs an ensemble based framework to partition graph streams into a number of graph chunks each containing some labeled and unlabeled graphs. For each individual chunk, we propose a minimum-redundancy subgraph feature selection module to select a set of informative subgraph features to build a classifier. To tackle the concept drifting in graph streams, an instance level weighting mechanism is used to dynamically adjust the instance weight, through which the subgraph feature selection can emphasize on difficult graph samples. The classifiers built from different graph chunks form an ensemble for graph stream classification. Experiments on real-world graph streams demonstrate clear benefits of using minimum-redundancy subgraph features to build accurate classifiers. By employing instance level weighting, our graph ensemble model can effectively adapt to the concept drifting in the graph stream for classification.", "sha": "3d65ebf663789292fe9042a24907bada295c5d2a", "corpus_id": 16960812 }
{ "doc_id": "8062123", "title": "Shoe-last design innovation for better shoe fitting", "abstract": "Shoe-last, a 3D mould used for making footwear, influences the shape, size and fitting of footwear. Current shoe-last design software has focused mainly on reverse engineering of existing shoe-lasts and their modification. Shoe-last designers have generally preferred to design the shoe-last manually due to limitations of design software. In order to solve these problems, new software based on the CATIA platform was developed. The shoe-last model is based on foot shape measurement data and foot biomechanics. Using the existing shoe-last design standards and the sections from existing shoe-lasts, design tables and relationship equations enable the design of shoe-lasts with different toe types and heel heights, as well as custom shoe-lasts. The design includes comfort and fit aspects as well as the design aspect, and therefore enables the design of aesthetically pleasing, comfortable shoes. Since the design can be modified instantaneously, designers can visualize design changes, leading to a reduction in the shoe-last design cycle.", "sha": "6064c0b5b7b6b33b40261b312abad869f9eb998b", "corpus_id": 8062123 }
{ "doc_id": "166971", "title": "Modeling wrinkles on smooth surfaces for footwear design", "abstract": "We describe two new shape operators that superimpose wrinkles on top of a smooth NURBS surface. Previous research studying wrinkles focused mostly on cloth modeling or in animations, which are driven more by visual realism, but allow large elastic deformations. Our operators generate wrinkle-shaped deformations in a region of a smooth surface along a given boundary based on a few basic parametric inputs such as wrinkle magnitude and extent (these terms will be defined in the paper). The essential geometric transformation to map the smooth surface to a wrinkled one will be defined purely in terms of the geometry of the surface and the input parameters. Our model is based on two surface properties: geodesic offsets and surface energy. Practical implementation of the operators is discussed, and examples presented. Finally, the motivation for the operators will be given through their application in the computer-aided design and manufacture of footwear.", "sha": "6e35df7addc7fb788edf78f4d925684528de3f7e", "corpus_id": 166971 }
{ "doc_id": "7429206", "title": "Wires: A Geometric Deformation Technique", "abstract": "Finding effective interactive deformation techniques for complex geometric objects continues to be a challenging problem in modeling and animation. We present an approach that is inspired by armatures used by sculptors, in which wire curves give definition to an object and shape its deformable features. We also introduce domain curves that define the domain of deformation about an object. A wire together with a collection of domain curves provide a new basis for an implicit modeling primitive. Wires directly reflect object geometry, and as such they provide a coarse geometric representation of an object that can be created through sketching. Furthermore, the aggregate deformation from several wires is easy to define. We show that a single wire is an appealing direct manipulation deformation technique; we demonstrate that the combination of wires and domain curves provide a new way to outline the shape of an implicit volume in space; and we describe techniques for the aggregation of deformations resulting from multiple wires, domain curves and their interaction with each other and other deformation techniques. The power of our approach is illustrated using applications of animating figures with flexible articulations, modeling wrinkled surfaces and stitching geometry together.", "sha": "3aa5911107dc7b67d15ac7d6b960f5d472b00bab", "corpus_id": 7429206 }
{ "doc_id": "5689149", "title": "Building bridges between psychological science and education : Cultural stereotypes , STEM , and equity", "abstract": "There is a gender gap in science, technology, engineering, and mathematics (STEM) education. This presents a worldwide problem of inequity. Sociocultural stereotypes associating STEM with males act as barriers that prevent girls from developing interests in STEM. This article aims to show that we can increase equity and enhance outcomes for a broader number of children around the world by integrating psychological and educational science. The article discusses four strands of research in an effort to build a bridge between psychological science and educational practice and policy. First, it describes how stereotypes can act as barriers that prevent girls from developing interests in STEM. Second, it summarizes psychological experiments demonstrating that counteracting stereotypes can increase girls’ interest in enrolling in STEM courses. Third, it examines new results showing that children adopt the pervasive stereotypes of their culture starting from surprisingly young ages, and it shows that children’s stereotypes influence their academic attitudes and performance. Fourth, it describes innovative practical interventions that can increase and equalize motivation and engagement in STEM for both boys and girls. In each of these sections, the authors link scientific findings with educational applications. Cultural stereotypes contribute to educational inequities, but scientists, educators, and policymakers can together make a difference to reduce stereotyping and boost girls’ interest in STEM worldwide.", "sha": "ee17ff2ecccbd5d4aef373ebf8d6695fd46f6b09", "corpus_id": 5689149 }
{ "doc_id": "7935579", "title": "Do Female and Male Role Models Who Embody STEM Stereotypes Hinder Women's Anticipated Success in STEM?", "abstract": "Women who have not yet entered science, technology, engineering, and mathematics (STEM) fields underestimate how well they will perform in those fields (e.g., Correll, 2001; Meece, Parsons, Kaczala, & Goff, 1982). It is commonly assumed that female role models improve women’s beliefs that they can be successful in STEM. The current work tests this assumption. Two experiments varied role model gender and whether role models embody computer science stereotypes. Role model gender had no effect on success beliefs. However, women who interacted with nonstereotypical role models believed they would be more successful in computer science than those who interacted with stereotypical role models. Differences in women’s success beliefs were mediated by their perceived dissimilarity from stereotypical role models. When attempting to convey to women that they can be successful in STEM fields, role model gender may be less important than the extent to which role models embody current STEM stereotypes.", "sha": "68486772c034cef9d08534dc8cea2750fff3d5f9", "corpus_id": 7935579 }
{ "doc_id": "17462929", "title": "How to interpret the helpfulness of online product reviews: bridging the needs between customers and designers", "abstract": "Helpful reviews are the valuable voice of the customer which benefit both consumers and product designers. On e-commerce websites, consumers are usually encouraged to rate whether a review is helpful or not. As consumers are not obligated to vote on reviews, usually only a small proportion of product reviews eventually receive a vote. Also, existing evaluation methods that only use the review voting ratio from customers as the helpfulness measure are often not consistent with designers' ratings of reviews when interpreting customer needs and preferences. Thus, in this paper, the focus is on how to automatically build the connection between online customers' voting and designers' rating and predict a customer review's helpfulness based on the review content. We start the study by building a mapping that expresses product designers' ratings using online helpfulness voting. Further, we propose to utilize a regression algorithm to predict an online review's helpfulness with the help of several categories of features extracted from the review content. Our experimental study, using a large amount of review data crawled from Amazon and real ratings from product designers, confirms the effectiveness of our proposal and shows some very promising results.", "sha": "657f942d416cc69fe6ef3715d58fbaa59bd12c10", "corpus_id": 17462929 }
{ "doc_id": "20738340", "title": "Customized 3D Clothes Modeling for Virtual Try-on System based on Multiple Kinects", "abstract": null, "sha": "bfc2b2528e34c73aaca7285bcef61354aa48df88", "corpus_id": 20738340 }
{ "doc_id": "5961102", "title": "Scanning 3D Full Human Bodies Using Kinects", "abstract": "Depth cameras such as the Microsoft Kinect are much cheaper than conventional 3D scanning devices, and thus can be acquired by everyday users easily. However, the depth data captured by Kinect over a certain distance is of extremely low quality. In this paper, we present a novel scanning system for capturing 3D full human body models by using multiple Kinects. To avoid interference phenomena, we use two Kinects to capture the upper part and lower part of a human body respectively, without an overlapping region. A third Kinect is used to capture the middle part of the human body from the opposite direction. We propose a practical approach for registering the various body parts from different views under non-rigid deformation. First, a rough mesh template is constructed and used to deform successive frames pairwise. Second, global alignment is performed to distribute errors in the deformation space, which solves the loop closure problem efficiently. Misalignment caused by complex occlusion can also be handled reasonably by our global alignment algorithm. The experimental results have shown the efficiency and applicability of our system. Our system obtains impressive results in a few minutes with low-price devices, and thus is practically useful for generating personalized avatars for everyday users. Our system has been used for 3D human animation and virtual try-on, and can further facilitate a range of home-oriented virtual reality (VR) applications.", "sha": "619911778e0c6e8af2e56bab893e4a3613509317", "corpus_id": 5961102 }
{ "doc_id": "1980978", "title": "Laplacian Surface Editing", "abstract": "Surface editing operations commonly require geometric details of the surface to be preserved as much as possible. We argue that geometric detail is an intrinsic property of a surface and that, consequently, surface editing is best performed by operating over an intrinsic surface representation. We provide such a representation of a surface, based on the Laplacian of the mesh, by encoding each vertex relative to its neighborhood. The Laplacian of the mesh is enhanced to be invariant to locally linearized rigid transformations and scaling. Based on this Laplacian representation, we develop useful editing operations: interactive free-form deformation in a region of interest based on the transformation of a handle, transfer and mixing of geometric details between two surfaces, and transplanting of a partial surface mesh onto another surface. The main computation involved in all operations is the solution of a sparse linear system, which can be done at interactive rates. We demonstrate the effectiveness of our approach in several examples, showing that the editing operations change the shape while respecting the structural geometric detail.", "sha": "345c42c866405986cff6e5d3cee9fb7ce95a74af", "corpus_id": 1980978 }
{ "doc_id": "14712064", "title": "Automatic classification of movie trailers using data mining techniques: A review", "abstract": "Nowadays a huge amount of data is generated through various activities, and it must be managed in a proper format. Classification groups data into different classes so that it can be identified and retrieved with ease when required; it makes the user aware of the class of the data on the basis of certain attributes. In this paper we focus on movie trailers, using different data mining approaches to classify the videos. The amount of video that a viewer has to choose from is now so large that it is infeasible for a human to go through it all to find a movie of interest. Various methods and techniques are discussed for the classification of videos and movie trailers.", "sha": "2697abe4fd3e84a7f5d49dd74244f486f045887a", "corpus_id": 14712064 }
{ "doc_id": "1654266", "title": "Scalable Recognition with a Vocabulary Tree", "abstract": "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.", "sha": "b391878646123f5490ef2e2103de09a0947e4dc9", "corpus_id": 1654266 }
{ "doc_id": "28791109", "title": "Automatic partitioning of full-motion video", "abstract": "Partitioning a video source into meaningful segments is an important step for video indexing. We present a comprehensive study of a partitioning system that detects segment boundaries. The system is based on a set of difference metrics and it measures the content changes between video frames. A twin-comparison approach has been developed to solve the problem of detecting transitions implemented by special effects. To eliminate the false interpretation of camera movements as transitions, a motion analysis algorithm is applied to determine whether an actual transition has occurred. A technique for determining the threshold for a difference metric and a multi-pass approach to improve the computation speed and accuracy have also been developed.", "sha": "3731b5dbff5170b1e211960db6a2c2411d602de4", "corpus_id": 28791109 }
{ "doc_id": "3113284", "title": "Sampling Generative Networks: Notes on a Few Effective Techniques", "abstract": "We introduce several techniques for sampling and visualizing the latent spaces of generative models. Replacing linear interpolation with spherical linear interpolation prevents diverging from a model’s prior distribution and produces sharper samples. J-Diagrams and MINE grids are introduced as visualizations of manifolds created by analogies and nearest neighbors. We demonstrate two new techniques for deriving attribute vectors: bias-corrected vectors with data replication and synthetic vectors with data augmentation. Most techniques are intended to be independent of model type and examples are shown on both Variational Autoencoders and Generative Adversarial Networks.", "sha": "3b4d901574162750af343c3d564ed4f514273c67", "corpus_id": 3113284 }
{ "doc_id": "1836951", "title": "Deep Visual Analogy-Making", "abstract": "In addition to identifying the content within a single image, relating images and generating related images are critical tasks for image understanding. Recently, deep convolutional networks have yielded breakthroughs in predicting image labels, annotations and captions, but have only just begun to be used for generating high-quality images. In this paper we develop a novel deep network trained end-to-end to perform visual analogy making, which is the task of transforming a query image according to an example pair of related images. Solving this problem requires both accurately recognizing a visual relationship and generating a transformed query image accordingly. Inspired by recent advances in language modeling, we propose to solve visual analogies by learning to map images to a neural embedding in which analogical reasoning is simple, such as by vector subtraction and addition. In experiments, our model effectively models visual analogies on several datasets: 2D shapes, animated video game sprites, and 3D car models.", "sha": "23b696300a80c0479d571dadc20e50846e80b82c", "corpus_id": 1836951 }
{ "doc_id": "53821397", "title": "Neural Probabilistic Language Models", "abstract": null, "sha": "3621bc359003e36707733650cccadf4333683293", "corpus_id": 53821397 }
{ "doc_id": "44063479", "title": "Local Rule-Based Explanations of Black Box Decision Systems", "abstract": "The recent years have witnessed the rise of accurate but obscure decision systems which hide the logic of their internal decision processes to the users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons of the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first learns a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons of the decision; and a set of counterfactual rules, suggesting the changes in the instance’s features that lead to a different outcome. Wide experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy in mimicking the black box.", "sha": "fd44d398b2945b4c20da8ec3cc32becd5e08100e", "corpus_id": 44063479 }
{ "doc_id": "18903675", "title": "EU regulations on algorithmic decision-making and a \"right to explanation\"", "abstract": "We summarize the potential impact that the European Union’s new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which “significantly affect” users. The law will also create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them. We argue that while this law will pose large challenges for industry, it highlights opportunities for machine learning researchers to take the lead in designing algorithms and evaluation frameworks which avoid discrimination.", "sha": "f05155c4d7d77f32855b80a86bb987818838d50d", "corpus_id": 18903675 }
{ "doc_id": "13805864", "title": "Algorithms for Large, Sparse Network Alignment Problems", "abstract": "We propose a new distributed algorithm for sparse variants of the network alignment problem, which occurs in a variety of data mining areas including systems biology, database matching, and computer vision. Our algorithm uses a belief propagation heuristic and provides near optimal solutions for this NP-hard combinatorial optimization problem. We show that our algorithm is faster and outperforms or ties existing algorithms on synthetic problems, a problem in bioinformatics, and a problem in ontology matching. We also provide a unified framework for studying and comparing all network alignment solvers.", "sha": "c73a1ee22c605341cd1218853d4680f2a879229b", "corpus_id": 13805864 }
{ "doc_id": "9255093", "title": "Fudan-Huawei at MediaEval 2015: Detecting Violent Scenes and Affective Impact in Movies with Deep Learning", "abstract": "Techniques for violent scene detection and affective impact prediction in videos can be deployed in many applications. In MediaEval 2015, we explore deep learning methods to tackle this challenging problem. Our system consists of several deep learning features. First, we train a Convolutional Neural Network (CNN) model with a subset of ImageNet classes selected particularly for violence detection. Second, we adopt a specially designed two-stream CNN framework [1] to extract features on both static frames and motion optical flows. Third, Long Short Term Memory (LSTM) models are applied on top of the two-stream CNN features, which can capture the longer-term temporal dynamics. In addition, several conventional motion and audio features are also extracted as complementary information to the deep learning features. By fusing all the advanced features, we achieve a mean average precision of 0.296 in the violence detection subtask, and an accuracy of 0.418 and 0.488 for arousal and valence respectively in the induced affect detection subtask.", "sha": "6261eb75066f779e75b02209fbd3d0f02d3e1e45", "corpus_id": 9255093 }
{ "doc_id": "195908774", "title": "ImageNet Classification with Deep Convolutional Neural Networks", "abstract": "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.", "sha": "2315fc6c2c0c4abd2443e26a26e7bb86df8e24cc", "corpus_id": 195908774 }
{ "doc_id": "1610874", "title": "Dense prediction for text line segmentation in handwritten document images", "abstract": "We present a novel approach to segment text lines from handwritten document images. In contrast to existing approaches which mainly use hand-designed features or heuristic rules to estimate the location of text lines, we train a fully convolutional network (FCN) to predict text line structure in document images. By using the FCN, a line map which is a rough estimation of text line is obtained. From this line map, text strings that pass through characters in each text line are constructed. To deal with touching text lines, line adjacency graph (LAG) is used to separate the touching characters into different text strings. The testing result on ICDAR2013 Handwritten Segmentation Contest dataset shows high performance together with the robustness of our system with different types of languages and multi-skewed text lines.", "sha": "7edcbe962d1fde94715ef2131d2e73c72f453a78", "corpus_id": 1610874 }
{ "doc_id": "52822000", "title": "Path Planning and Controlled Crash Landing of a Quadcopter in case of a Rotor Failure", "abstract": "This paper presents a framework for controlled emergency landing of a quadcopter, experiencing a rotor failure, away from sensitive areas. A complete mathematical model capturing the dynamics of the system is presented that takes the asymmetrical aerodynamic load on the propellers into account. An equilibrium state of the system is calculated around which a linear time-invariant control strategy is developed to stabilize the system. By utilizing the proposed model, a specific configuration for a quadcopter is introduced that leads to the minimum power consumption during a yaw-rateresolved hovering after a rotor failure. Furthermore, given a 3D representation of the environment, an optimal flight trajectory towards a safe crash landing spot, while avoiding collision with obstacles, is developed using an RRT* approach. The cost function for determining the best landing spot consists of: (i) finding the safest landing spot with the largest clearance from the obstacles; and (ii) finding the most energy-efficient trajectory towards the landing spot. The performance of the proposed framework is tested via simulations.", "sha": "e9001ea4368b808da7cabda7edd2fbd4118a3f6b", "corpus_id": 52822000 }
{ "doc_id": "17397594", "title": "An Algorithm for Planning Collision-Free Paths Among Polyhedral Obstacles", "abstract": "This paper describes a collision avoidance algorithm for planning a safe path for a polyhedral object moving among known polyhedral objects. The algorithm transforms the obstacles so that they represent the locus of forbidden positions for an arbitrary reference point on the moving object. A trajectory of this reference point which avoids all forbidden regions is free of collisions. Trajectories are found by searching a network which indicates, for each vertex in the transformed obstacles, which other vertices can be reached safely.", "sha": "59d745bd21d29dd77c21221bb822dcd3a9431095", "corpus_id": 17397594 }
{ "doc_id": "15771007", "title": "Vision-Controlled Micro Flying Robots: From System Design to Autonomous Navigation and Mapping in GPS-Denied Environments", "abstract": "Autonomous microhelicopters will soon play a major role in tasks like search and rescue, environment monitoring, security surveillance, and inspection. If they are further realized in small scale, they can also be used in narrow outdoor and indoor environments and represent only a limited risk for people. However, for such operations, navigating based only on global positioning system (GPS) information is not sufficient. Fully autonomous operation in cities or other dense environments requires microhelicopters to fly at low altitudes, where GPS signals are often shadowed, or indoors and to actively explore unknown environments while avoiding collisions and creating maps. This involves a number of challenges on all levels of helicopter design, perception, actuation, control, and navigation, which still have to be solved. The Swarm of Micro Flying Robots (SFLY) project was a European Union-funded project with the goal of creating a swarm of vision-controlled microaerial vehicles (MAVs) capable of autonomous navigation, three-dimensional (3-D) mapping, and optimal surveillance coverage in GPS-denied environments. The SFLY MAVs do not rely on remote control, radio beacons, or motion-capture systems but can fly all by themselves using only a single onboard camera and an inertial measurement unit (IMU). This article describes the technical challenges that have been faced and the results achieved from hardware design and embedded programming to vision-based navigation and mapping, with an overview of how all the modules work and how they have been integrated into the final system. Code, data sets, and videos are publicly available to the robotics community. 
Experimental results demonstrating three MAVs navigating autonomously in an unknown GPS-denied environment and performing 3-D mapping and optimal surveillance coverage are presented.", "sha": "625e2f3229d75dc6ec346961efe485617dd3e048", "corpus_id": 15771007 }
{ "doc_id": "988010", "title": "Opinosis: A Graph Based Approach to Abstractive Summarization of Highly Redundant Opinions", "abstract": "We present a novel graph-based summarization framework (Opinosis) that generates concise abstractive summaries of highly redundant opinions. Evaluation results on summarizing user reviews show that Opinosis summaries have better agreement with human summaries compared to the baseline extractive method. The summaries are readable, reasonably well-formed and are informative enough to convey the major opinions.", "sha": "129c96e900c5b0f4d41f449a89305d9cf91a1d1c", "corpus_id": 988010 }
{ "doc_id": "1320", "title": "Centroid-based summarization of multiple documents: sentence extraction utility-based evaluation, and user studies", "abstract": "We present a multi-document summarizer, called MEAD, which generates summaries using cluster centroids produced by a topic detection and tracking system. We also describe two new techniques, based on sentence utility and subsumption, which we have applied to the evaluation of both single and multiple document summaries. Finally, we describe two user studies that test our models of multi-document summarization.", "sha": "1bad3e9f15df77f06ae449bba17f9e85a3bb9187", "corpus_id": 1320 }
{ "doc_id": "254629", "title": "Evolutionary dynamics on any population structure", "abstract": "Evolution occurs in populations of reproducing individuals. The structure of a population can affect which traits evolve. Understanding evolutionary game dynamics in structured populations remains difficult. Mathematical results are known for special structures in which all individuals have the same number of neighbours. The general case, in which the number of neighbours can vary, has remained open. For arbitrary selection intensity, the problem is in a computational complexity class that suggests there is no efficient algorithm. Whether a simple solution for weak selection exists has remained unanswered. Here we provide a solution for weak selection that applies to any graph or network. Our method relies on calculating the coalescence times of random walks. We evaluate large numbers of diverse population structures for their propensity to favour cooperation. We study how small changes in population structure—graph surgery—affect evolutionary outcomes. We find that cooperation flourishes most in societies that are based on strong pairwise ties.", "sha": "4da77106472d876b1da560b840dd5a5dce6739f8", "corpus_id": 254629 }
{ "doc_id": "1453652", "title": "Conspicuous Consumption versus Charitable Behavior in Response to Social Exclusion : A Differential Needs Explanation", "abstract": "Social exclusion has been shown to produce a number of different responses. This research examines the proposition that social exclusion may produce either selffocused or prosocial responses, depending on which needs are threatened. Different types of social exclusion threaten different needs, which in turn produce distinct outcomes (differential needs hypothesis). Social exclusion in the form of being implicitly ignored increased conspicuous consumption, whereas being explicitly rejected increased helping and donation behavior. However, when efficacy needs (power, meaningful existence) were bolstered, the effects of being ignored were eliminated, whereas when relational needs (self-esteem) were bolstered, the effects of being rejected were eliminated. The results indicate that certain types of social exclusion produce prosocial responses, whereas others produce selffocused and attention-getting responses.", "sha": "3896c29882bc420314ea771e968d7de80d04c49b", "corpus_id": 1453652 }
{ "doc_id": "11020243", "title": "Social Exclusion Causes People to Spend and Consume Strategically in the Service of Affiliation", "abstract": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at . http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "sha": "7025ac8a20950ae1ead58f810a40881c38c7021f", "corpus_id": 11020243 }
{ "doc_id": "15668476", "title": "Mediation in experimental and nonexperimental studies: new procedures and recommendations.", "abstract": "Mediation is said to occur when a causal effect of some variable X on an outcome Y is explained by some intervening variable M. The authors recommend that with small to moderate samples, bootstrap methods (B. Efron & R. Tibshirani, 1993) be used to assess mediation. Bootstrap tests are powerful because they detect that the sampling distribution of the mediated effect is skewed away from 0. They argue that R. M. Baron and D. A. Kenny's (1986) recommendation of first testing the X --> Y association for statistical significance should not be a requirement when there is a priori belief that the effect size is small or suppression is a possibility. Empirical examples and computer setups for bootstrap analyses are provided.", "sha": "bc9752afc171d24fddc69f4592847083ac28a459", "corpus_id": 15668476 }
{ "doc_id": "683438", "title": "DeepCancer: Detecting Cancer through Gene Expressions via Deep Generative Learning", "abstract": "Transcriptional profiling on microarrays to obtain gene expressions has been used to facilitate cancer diagnosis. We propose a deep generative machine learning architecture (called DeepCancer) that learn features from unlabeled microarray data. These models have been used in conjunction with conventional classifiers that perform classification of the tissue samples as either being cancerous or non-cancerous. The proposed model has been tested on two different clinical datasets. The evaluation demonstrates that DeepCancer model achieves a very high precision score, while significantly controlling the false positive and false negative scores.", "sha": "4343be735026dd1ce43c51c308a7fd48de5d3d8a", "corpus_id": 683438 }
{ "doc_id": "15315839", "title": "Machine learning applications in cancer prognosis and prediction", "abstract": "Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type have become a necessity in cancer research, as it can facilitate the subsequent clinical management of patients. The importance of classifying cancer patients into high or low risk groups has led many research teams, from the biomedical and the bioinformatics field, to study the application of machine learning (ML) methods. Therefore, these techniques have been utilized as an aim to model the progression and treatment of cancerous conditions. In addition, the ability of ML tools to detect key features from complex datasets reveals their importance. A variety of these techniques, including Artificial Neural Networks (ANNs), Bayesian Networks (BNs), Support Vector Machines (SVMs) and Decision Trees (DTs) have been widely applied in cancer research for the development of predictive models, resulting in effective and accurate decision making. Even though it is evident that the use of ML methods can improve our understanding of cancer progression, an appropriate level of validation is needed in order for these methods to be considered in the everyday clinical practice. In this work, we present a review of recent ML approaches employed in the modeling of cancer progression. The predictive models discussed here are based on various supervised ML techniques as well as on different input features and data samples. Given the growing trend on the application of ML methods in cancer research, we present here the most recent publications that employ these techniques as an aim to model cancer risk or patient outcomes.", "sha": "95274ca3be569765960464d24f898c6fe025bac9", "corpus_id": 15315839 }
{ "doc_id": "6844431", "title": "Dropout: a simple way to prevent neural networks from overfitting", "abstract": "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different “thinned” networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "sha": "6c8b30f63f265c32e26d999aa1fef5286b8308ad", "corpus_id": 6844431 }
{ "doc_id": "20101754", "title": "Principal component analysis: a review and recent developments.", "abstract": "Large datasets are increasingly common and are often difficult to interpret. Principal component analysis (PCA) is a technique for reducing the dimensionality of such datasets, increasing interpretability but at the same time minimizing information loss. It does so by creating new uncorrelated variables that successively maximize variance. Finding such new variables, the principal components, reduces to solving an eigenvalue/eigenvector problem, and the new variables are defined by the dataset at hand, not a priori, hence making PCA an adaptive data analysis technique. It is adaptive in another sense too, since variants of the technique have been developed that are tailored to various different data types and structures. This article will begin by introducing the basic ideas of PCA, discussing what it can and cannot do. It will then describe some variants of PCA and their application.", "sha": "db809a74abe860cc629818a11d3f86cc42e84d06", "corpus_id": 20101754 }
{ "doc_id": "5490061", "title": "A direct formulation for sparse PCA using semidefinite programming", "abstract": "Given a covariance matrix, we consider the problem of maximizing the variance explained by a particular linear combination of the input variables while constraining the number of nonzero coefficients in this combination. This problem arises in the decomposition of a covariance matrix into sparse factors or sparse PCA, and has wide applications ranging from biology to finance. We use a modification of the classical variational representation of the largest eigenvalue of a symmetric matrix, where cardinality is constrained, and derive a semidefinite programming based relaxation for our problem. We also discuss Nesterov’s smooth minimization technique applied to the semidefinite program arising in the semidefinite relaxation of the sparse PCA problem. The method has complexity O(n √ log(n)/ǫ), where n is the size of the underlying covariance matrix, and ǫ is the desired absolute accuracy on the optimal value of the problem.", "sha": "14e10498f4f12e18aaf4706969746b3cfb36132f", "corpus_id": 5490061 }
{ "doc_id": "2489298", "title": "On numerical methods for discrete least-squares approximation by trigonometric polynomials", "abstract": "Fast, efficient and reliable algorithms for discrete least-squares approximation of a real-valued function given at arbitrary distinct nodes in [0, 2π) by trigonometric polynomials are presented. The algorithms are based on schemes for the solution of inverse unitary eigenproblems and require only O(mn) arithmetic operations as compared to O(mn2) operations needed for algorithms that ignore the structure of the problem. An algorithm which solves this problem with real-valued data and real-valued solution using only real arithmetic is given. Numerical examples are presented that show that the proposed algorithms produce consistently accurate results that are often better than those obtained by general QR decomposition methods for the least-squares problem.", "sha": "26ec134b1e0cc8ef3b13de2bac6617cdfd9baf9e", "corpus_id": 2489298 }
{ "doc_id": "207195371", "title": "Sentiment analysis: what is the end user's requirement?", "abstract": "In this paper we address the Sentiment Analysis problem from the end user's perspective. An end user might desire an automated at-a-glance presentation of the main points made in a single review or how opinion changes time to time over multiple documents. To meet the requirement we propose a relatively generic opinion 5Ws structurization, further used for textual and visual summary and tracking. The 5W task seeks to extract the semantic constituents in a natural language sentence by distilling it into the answers to the 5W questions: Who, What, When, Where and Why. The visualization system facilitates users to generate sentiment tracking with textual summary and sentiment polarity wise graph based on any dimension or combination of dimensions as they want i.e. \"Who\" are the actors and \"What\" are their sentiment regarding any topic, changes in sentiment during \"When\" and \"Where\" and the reasons for change in sentiment as \"Why\".", "sha": "25af05944a576f43fc88576ca93c88623db384b1", "corpus_id": 207195371 }
{ "doc_id": "31468174", "title": "Graph Drawing by Force-directed Placement", "abstract": "A graph G = (V,E) is a set V of vertices and a set E of edges, in which an edge joins a pair of vertices. 1 Normally, graphs are depicted with their vertices as points in a plane and their edges as line or curve segments connecting those points. There are different styles of representation, suited to different types of graphs or different purposes of presentation. We concentrate on the most general class of graphs: undirected graphs, drawn with straight edges. In this paper, we introduce an algorithm that attempts to produce aesthetically-pleasing, two-dimensional pictures of graphs by doing simplified simulations of physical systems. We are concerned with drawing undirected graphs according to some generally accepted aesthetic criteria: 2", "sha": "5ce64de5b87da6365b7a718d3bfdae62f3930286", "corpus_id": 31468174 }
{ "doc_id": "19429476", "title": "Object-Part Attention Model for Fine-Grained Image Classification", "abstract": "Fine-grained image classification is to recognize hundreds of subcategories belonging to the same basic-level category, such as 200 subcategories belonging to the bird, which is highly challenging due to large variance in the same subcategory and small variance among different subcategories. Existing methods generally first locate the objects or parts and then discriminate which subcategory the image belongs to. However, they mainly have two limitations: 1) relying on object or part annotations which are heavily labor consuming; and 2) ignoring the spatial relationships between the object and its parts as well as among these parts, both of which are significantly helpful for finding discriminative parts. Therefore, this paper proposes the object-part attention model (OPAM) for weakly supervised fine-grained image classification and the main novelties are: 1) object-part attention model integrates two level attentions: object-level attention localizes objects of images, and part-level attention selects discriminative parts of object. Both are jointly employed to learn multi-view and multi-scale features to enhance their mutual promotion; and 2) Object-part spatial constraint model combines two spatial constraints: object spatial constraint ensures selected parts highly representative and part spatial constraint eliminates redundancy and enhances discrimination of selected parts. Both are jointly employed to exploit the subtle and local differences for distinguishing the subcategories. Importantly, neither object nor part annotations are used in our proposed approach, which avoids the heavy labor consumption of labeling. Compared with more than ten state-of-the-art methods on four widely-used datasets, our OPAM approach achieves the best performance.", "sha": "67b9b6db06fa91145bed02438aab8773cc029f1c", "corpus_id": 19429476 }
{ "doc_id": "17229714", "title": "Speech Recognition Software and Vidispine", "abstract": "To evaluate libraries for continuous speech recognition, a test based on TED-talk videos was created. The different speech recognition libraries PocketSphinx, Dragon NaturallySpeaking and Microsoft Speech API were part of the evaluation. From the words that the libraries recognized, Word Error Rate (WER) was calculated and the results show that Microsoft SAPI performed worst with a WER of 60.8%, PocketSphinx at second place with 59.9% and Dragon NaturallySpeaking as the best with 42.6%. These results were all achieved with a Real Time Factor (RTF) of less than 1.0. PocketSphinx was chosen as the best candidate for the intended system on the basis that it is open-source, free and would be a better match to the system. By modifying the language model and dictionary to closer resemble typical TED-talk contents, it was also possible to improve the WER for PocketSphinx to a value of 39.5%, however with the cost of RTF which passed the 1.0 limit, making it less useful for live video.", "sha": "13a067fa5f5780e4651e8d455b7ad96b4075d9a1", "corpus_id": 17229714 }
{ "doc_id": "5059986", "title": "Cheap, Fast and Good Enough: Automatic Speech Recognition with Non-Expert Transcription", "abstract": "Deploying an automatic speech recognition system with reasonable performance requires expensive and time-consuming in-domain transcription. Previous work demonstrated that non-professional annotation through Amazon’s Mechanical Turk can match professional quality. We use Mechanical Turk to transcribe conversational speech for as little as one thirtieth the cost of professional transcription. The higher disagreement of nonprofessional transcribers does not have a significant effect on system performance. While previous work demonstrated that redundant transcription can improve data quality, we found that resources are better spent collecting more data. Finally, we suggest a concrete method for quality control without needing professional transcription.", "sha": "1e622ea4e3958b010bc2e767443b50986480890a", "corpus_id": 5059986 }
{ "doc_id": "7448250", "title": "Triangle Generative Adversarial Networks", "abstract": "A Triangle Generative Adversarial Network (∆-GAN) is developed for semisupervised cross-domain joint distribution matching, where the training data consists of samples from each domain, and supervision of domain correspondence is provided by only a few paired samples. ∆-GAN consists of four neural networks, two generators and two discriminators. The generators are designed to learn the two-way conditional distributions between the two domains, while the discriminators implicitly define a ternary discriminative function, which is trained to distinguish real data pairs and two kinds of fake data pairs. The generators and discriminators are trained together using adversarial learning. Under mild assumptions, in theory the joint distributions characterized by the two generators concentrate to the data distribution. In experiments, three different kinds of domain pairs are considered, image-label, image-image and image-attribute pairs. Experiments on semi-supervised image classification, image-to-image translation and attribute-based image generation demonstrate the superiority of the proposed approach.", "sha": "27c4369463ff28f4ab16e9d9eba6f48102c8793e", "corpus_id": 7448250 }
{ "doc_id": "14752149", "title": "The Good, the Bad, and the Ugly: An Empirical Study of Implicit Type Conversions in JavaScript", "abstract": "Most popular programming languages support situations where a value of one type is converted into a value of another type without any explicit cast. Such implicit type conversions, or type coercions, are a highly controversial language feature. Proponents argue that type coercions enable writing concise code. Opponents argue that type coercions are error-prone and that they reduce the understandability of programs. This paper studies the use of type coercions in JavaScript, a language notorious for its widespread use of coercions. We dynamically analyze hundreds of programs, including real-world web applications and popular benchmark programs. We find that coercions are widely used (in 80.42% of all function executions) and that most coercions are likely to be harmless (98.85%). Furthermore, we identify a set of rarely occurring and potentially harmful coercions that safer subsets of JavaScript or future language designs may want to disallow. Our results suggest that type coercions are significantly less evil than commonly assumed and that analyses targeted at real-world JavaScript programs must consider coercions. 1998 ACM Subject Classification D.3.3 Language Constructs and Features, F.3.2 Semantics of Programming Languages, D.2.8 Metrics", "sha": "2d0ef5f0ce563bc494a8e4552840e7fa342f43c5", "corpus_id": 14752149 }
{ "doc_id": "4010276", "title": "An Empirical Study of Client-Side JavaScript Bugs", "abstract": "Context: Client-side JavaScript is widely used in web applications to improve user-interactivity and minimize client-server communications. Unfortunately, web applications are prone to JavaScript faults. While prior studies have demonstrated the prevalence of these faults, no attempts have been made to determine their root causes and consequences. Objective: The goal of our study is to understand the root causes and impact of JavaScript faults and how the results can impact JavaScript programmers, testers and tool developers. Method: We perform an empirical study of 317 bug reports from 12 bug repositories. The bug reports are thoroughly examined to classify and extract information about the fault's cause (the error) and consequence (the failure and impact). Result: The majority (65%) of JavaScript faults are DOM-related, meaning they are caused by faulty interactions of the JavaScript code with the Document Object Model (DOM). Further, 80% of the highest impact JavaScript faults are DOM-related. Finally, most JavaScript faults originate from programmer mistakes committed in the JavaScript code itself, as opposed to other web application components such as the server-side or HTML code. Conclusion: Given the prevalence of DOM-related faults, JavaScript programmers need development tools that can help them reason about the DOM. Also, testers should prioritize detection of DOM-related faults as most high impact faults belong to this category. Finally, developers can use the error patterns we found to design more powerful static analysis tools for JavaScript.", "sha": "cf1ff0d171712cc2ed2c40a7bb10e52caf991e78", "corpus_id": 4010276 }
{ "doc_id": "49337880", "title": "IoT based traffic light control system using Raspberry Pi", "abstract": "Congestion in traffic is a serious issue. In existing system signal timings are fixed and they are independent of traffic density. Large red light delays leads to traffic congestion. In this paper, IoT based traffic control system is implemented in which signal timings are updated based on the vehicle counting. This system consists of WI-FI transceiver module it transmits the vehicle count of the current system to the next traffic signal. Based on traffic density of previous signal it controls the signals of the next signal. The system is based on raspberry-pi and Arduino. Image processing of traffic video is done in MATLAB with simulink support. The whole vehicle counting is performed by raspberry pi.", "sha": "34f69fead811c0b014ce349d4fea1dd78a9c8bfc", "corpus_id": 49337880 }
{ "doc_id": "2565229", "title": "Exploiting Eigenposteriors for Semi-Supervised Training of DNN Acoustic Models with Sequence Discrimination", "abstract": "Deep neural network (DNN) acoustic models yield posterior probabilities of senone classes. Recent studies support the existence of low-dimensional subspaces underlying senone posteriors. Principal component analysis (PCA) is applied to identify eigenposteriors and perform low-dimensional projection of the training data posteriors. The resulted enhanced posteriors are applied as soft targets for training better DNN acoustic model under the student-teacher framework. The present work advances this approach by studying incorporation of sequence discriminative training. We demonstrate how to combine the gains from eigenposterior based enhancement with sequence discrimination to improve ASR using semi-supervised training. Evaluation on AMI meeting corpus yields nearly 4% absolute reduction in word error rate (WER) compared to the baseline DNN trained with cross entropy objective. In this context, eigenposterior enhancement of the soft targets is crucial to enable additive improvement using out-of-domain untranscribed data.", "sha": "483e8e8d19a56405d303cd3dc4b1dc16d1e9fa90", "corpus_id": 2565229 }
{ "doc_id": "3334366", "title": "Low-rank matrix factorization for Deep Neural Network training with high-dimensional output targets", "abstract": "While Deep Neural Networks (DNNs) have achieved tremendous success for large vocabulary continuous speech recognition (LVCSR) tasks, training of these networks is slow. One reason is that DNNs are trained with a large number of training parameters (i.e., 10-50 million). Because networks are trained with a large number of output targets to achieve good performance, the majority of these parameters are in the final weight layer. In this paper, we propose a low-rank matrix factorization of the final weight layer. We apply this low-rank technique to DNNs for both acoustic modeling and language modeling. We show on three different LVCSR tasks ranging between 50-400 hrs, that a low-rank factorization reduces the number of parameters of the network by 30-50%. This results in roughly an equivalent reduction in training time, without a significant loss in final recognition accuracy, compared to a full-rank representation.", "sha": "0607acbb450d2afef7f2aa5b53bb05966bd065ed", "corpus_id": 3334366 }
{ "doc_id": "13337313", "title": "Jogging over a distance: the influence of design in parallel exertion games", "abstract": "\"Exertion games\" are gaming interactions with technology in which users invest significant physical effort. They form part of an emerging phenomenon with many physical and social health benefits, and we believe that the social and exertion interactions are intertwined. Recent technological developments, particularly in the sports and game domain, have been proposed to augment these exertion activities. However, we only have a limited understanding of how this relationship between social and exertion aspects can be successfully facilitated by the design, especially in mediated environments. In this paper, we present initial findings of a qualitative study of \"Jogging over a Distance\" that illustrate how technology design can facilitate a social game experience despite participants running in opposite sides of the world. From this study, we derived conceptual themes that offer an analytical and descriptive account of the influence of design on the relationship between exertion and social aspects. Our work aims to support the analysis of existing, and provide guidance for, the design of future games that aim to utilize the many benefits of social exertion.", "sha": "5e9adc8277bf7f59fde4c906a953ba1598145e73", "corpus_id": 13337313 }
{ "doc_id": "9213337", "title": "Design and experimental study of a compliant feet quadruped robot with closed-chain mechanism legs", "abstract": "The locomotion performances of a quadruped robot with compliant feet based on closed-chain mechanism legs are presented. The legs of this quadruped robot were made up of six-bar linkage mechanism with one degree of freedom. And a special foot trajectory could be gained through kinematic analysis and optimum design of the six-bar linkage mechanism. In order to reduce the impact force of quadruped robot's walking on the ground, two semicircle feet with different thickness were designed as compliant feet. The experimental results of this quadruped robot with different stiffness feet showed that the semicircle feet could reduce the driving torque and current of motors. This primary investigation illustrated that the compliant feet could improve the locomotion performance of a quadruped robot based on closed-chain mechanism legs.", "sha": "634d0b87f7368f62dd70e393fda7420a576dccb0", "corpus_id": 9213337 }
{ "doc_id": "206500161", "title": "The LittleDog robot", "abstract": "LittleDog is a small four-legged robot designed for research on legged locomotion. The LittleDog platform was designed by Boston Dynamics with funding from DARPA to enable rapid advances in the state of the art of rough-terrain locomotion algorithms. In addition to providing a fleet of 12 robots with baseline software and development tools, LittleDog served as a cross-team common platform that allowed direct comparison of results across multiple research teams. Here we report the details of this robotic system.", "sha": "e56573750199f221862e6ba03f916f9d27e6843a", "corpus_id": 206500161 }
{ "doc_id": "552519", "title": "Symmetry Descriptors and 3D Shape Matching", "abstract": "In this paper, we present the Symmetry Descriptors of a 3D model. This is a collection of spherical functions that describes the measure of a model's rotational and reflective symmetry with respect to every axis passing through the center of mass. We show that Symmetry Descriptors can be computed efficiently using fast signal processing techniques, and demonstrate the empirical value of Symmetry Descriptors by showing that they improve matching performance in a variety of shape retrieval experiments.", "sha": "1fc5b811ddebe38c585ecac3edb920d1917d7e54", "corpus_id": 552519 }
{ "doc_id": "570653", "title": "On the feasibility of online malware detection with performance counters", "abstract": "The proliferation of computers in any domain is followed by the proliferation of malware in that domain. Systems, including the latest mobile platforms, are laden with viruses, rootkits, spyware, adware and other classes of malware. Despite the existence of anti-virus software, malware threats persist and are growing as there exist a myriad of ways to subvert anti-virus (AV) software. In fact, attackers today exploit bugs in the AV software to break into systems.\n In this paper, we examine the feasibility of building a malware detector in hardware using existing performance counters. We find that data from performance counters can be used to identify malware and that our detection techniques are robust to minor variations in malware programs. As a result, after examining a small set of variations within a family of malware on Android ARM and Intel Linux platforms, we can detect many variations within that family. Further, our proposed hardware modifications allow the malware detector to run securely beneath the system software, thus setting the stage for AV implementations that are simpler and less buggy than software AV. Combined, the robustness and security of hardware AV techniques have the potential to advance state-of-the-art online malware detection.", "sha": "3c1b97dff8f96170f9557319e9d881286aa77c1e", "corpus_id": 570653 }
{ "doc_id": "872430", "title": "Automated Classification and Analysis of Internet Malware", "abstract": "Numerous attacks, such as worms, phishing, and botnets, threaten the availability of the Internet, the integrity of its hosts, and the privacy of its users. A core element of defense against these attacks is anti-virus(AV)–a service that detects, removes, and characterizes these threats. The ability of these products to successfully characterize these threats has far-reaching effects—from facilitating sharing across organizations, to detecting the emergence of new threats, and assessing risk in quarantine and cleanup. In this paper, we examine the ability of existing host-based anti-virus products to provide semantically meaningful information about the malicious software and tools (or malware) used by attackers. Using a large, recent collection of malware that spans a variety of attack vectors (e.g., spyware, worms, spam), we show that different AV products characterize malware in ways that are inconsistent across AV products, incomplete across malware, and that fail to be concise in their semantics. To address these limitations, we propose a new classification technique that describes malware behavior in terms of system state changes (e.g., files written, processes created) rather than in sequences or patterns of system calls. To address the sheer volume of malware and diversity of its behavior, we provide a method for automatically categorizing these profiles of malware into groups that reflect similar classes of behaviors and demonstrate how behavior-based clustering provides a more direct and effective way of classifying and analyzing Internet malware.", "sha": "45f22cc9f8593d782ae9d3c1f764ffe01514c0ac", "corpus_id": 872430 }
{ "doc_id": "25905406", "title": "High-Frequency-Link Soft-Switching PWM DC–DC Converter for EV On-Board Battery Chargers", "abstract": "This paper proposes a high-frequency-link soft-switching pulse-width modulated dc-dc converter for electric vehicle (EV) on-board battery chargers. Zero-voltage switching of power switches is achieved from light load to full load. Reverse-recovery losses can be reduced in the secondary side output diodes without using any additional circuit components. Zero-current switching of output diodes is achieved by using the series-resonant circuit in the secondary side. The circulating current in the primary side full-bridge circuit can be changed by the operation modes of the output diode current. As a result, a high efficiency can be achieved for EV on-board battery chargers. The performance of the proposed converter is evaluated throughout experimental results for a 2.0-kW prototype circuit.", "sha": "1626fe2252a8b3a4954639d3f42c025f96cceb77", "corpus_id": 25905406 }
{ "doc_id": "45150299", "title": "Robot Learning from Demonstration : Kinesthetic Teaching vs . Teleoperation", "abstract": "We are interested in developing learning from demonstration systems that are suitable to be used by everyday people. We compare two interaction methods, kinesthetic teaching and teleoperation, for the users to show successful demonstrations of a skill. In the former, the user physically guides the robot and in the latter the user controls the robot with a haptic device. We evaluate our results using skill dependent quantitative measures, timing information and survey questions. We find that kinesthetic teaching is faster in terms of giving a single demonstration and the demonstrations are more successful. However, the learned skill does not perform better as expected. The survey results show that users think kinesthetic teaching is easier and more accurate and an open-ended question suggests that people would prefer kinesthetic teaching over teleoperation for everyday skills.", "sha": "cc58c920e2988331e8509e6d7c4279cc6ea85c1d", "corpus_id": 45150299 }
{ "doc_id": "1045325", "title": "A survey of robot learning from demonstration", "abstract": "We present a comprehensive survey of robot Learning from Demonstration (LfD), a technique that develops policies from example state to action mappings. We introduce the LfD design choices in terms of demonstrator, problem space, policy derivation and performance, and contribute the foundations for a structure in which to categorize LfD research. Specifically, we analyze and categorize the multiple ways in which examples are gathered, ranging from teleoperation to imitation, as well as the various techniques for policy derivation, including matching functions, dynamics models and plans. To conclude we discuss LfD limitations and related promising areas for future research.", "sha": "248040fa359a9f18527e28687822cf67d6adaf16", "corpus_id": 1045325 }
{ "doc_id": "10312742", "title": "An Overview of Virtual and Augmented Reality in Dental Education", "abstract": "Introduction: Clinical dentistry is a complex area for education. This is because the development of clinical competence requires the assimilation of knowledge combined with the acquisition of clinical skills and problem-solving ability. In recent years, a variety of computer-based modalities—including intelligent tutoring systems, medical simulation, and virtual reality techniques and the development of Web 2.0 collaborative authoring and social networking tools—has become available. Aims: The aims of this paper are to provide an overview of the use of one of these modalities—virtual and augmented reality systems—in dental education and to discuss the strengths and weaknesses of these systems. Methods: A literature review was performed, using the search terms “virtual reality” and “dental education”. Secondary literature searching was then performed. Results: The review suggested that the use of virtual and augmented reality technologies offers the advantages of the reinforcement of theoretical dental knowledge, correct use of dental instruments, ergonomic positioning, students’ self-evaluation, faster acquisition of skills and positive student perception. In general, any disadvantages arise because most of the dental simulators that use virtual and augmented reality are in an early/experimental stage. Conclusions: It can be concluded that virtual and augmented reality systems will play an increasing role in dental education. These technologies are likely to change clinical training and encourage the use of reflective forms of assessment, which involve students in a self-assessment process to identify individual learning needs and self-directed learning. These innovations promise not only lower costs of the educational process, but also an increase in quality by providing a new set of pedagogical tools for dental schools.", "sha": "0ca2e4b2aa03f327ec58f496e56116ed27bfedc9", "corpus_id": 10312742 }
{ "doc_id": "196115634", "title": "Characterization of Respiratory Conditions Using Labview and Digital Spirometer", "abstract": "One of the effective ways to diagnose various respiratory diseases is using spirometry test. Good spirometer comes with excellent graphical user interface. Spirometer is used to measure lung parameters such as Forced Expiratory Volume in the first second and the sixth seconds (FEV1 and FEV6). This paper presents an algorithm with Graphical User Interface (GUI) for characterization of respiratory conditions using LabVIEW Software. The whole spirometry system consists of a breathing circuitry with pressure sensor and a data acquisition board (NI sbRIO FPGA board). Results obtained from three different volunteers with different health performances are also presented in this paper. The FEV1/FEV6 ratio of a healthy volunteer is 81.1%, an asthma volunteer is 72.04%, and suspected bronchitis volunteer is 33.4%. Based on these results, the unhealthy volunteers tend to have smaller value of FEV1 with lower area under the curve when compared to healthy volunteer.", "sha": "99435c44803ba38c6d4f964a32b8757cf79ed55e", "corpus_id": 196115634 }
{ "doc_id": "31622092", "title": "A Differential Pressure Approach to Spirometry", "abstract": "This paper presents the design, implementation and measurements of a spirometer based on differential pressure sensing. A system which fulfills the last medical standard specifications has been designed exploiting the Venturi tube principle. A fully operating prototype has been tested, and data have been subsequently analyzed. Air flows up to 14 L/s can be measured with an accuracy of 0.2 L/s. Data acquisition software and user interface were developed.", "sha": "ae191a3356120cf4fd0fd0eec349de73f9e00fc1", "corpus_id": 31622092 }
{ "doc_id": "15951101", "title": "Digital Spirometer with LabView Interface", "abstract": "This document presents the design and construction of a low-cost digital spirometer as a basic tool for evaluating respiratory capacity. A spirometer is the ideal instrument for measuring ventilatory mechanics in respiratory studies, allowing the analysis, determination and monitoring of diverse human respiratory diseases such as asthma, pulmonary embolism, respiratory bronchitis and other deficiencies. The developed spirometer is a portable, easy-to-handle, computerized instrument that complies with the American norms given by the American Thorax Society (ATS) and the requirements of the Mexican population. By means of digital electronics and a pneumotachometer, the instrument acquires the respiratory flow using an NI USB interface. The volume-time and flow-volume graphs are displayed by means of a virtual instrument developed in LabVIEW. The spirometric test includes ventilatory mechanics parameters such as FEV1 (Forced Expiratory Volume in the first second), FVC (Forced Vital Capacity), and MEF (Maximum Expiratory Flow).", "sha": "aaf1b090de193adf480a2a9e5d724140d59b1069", "corpus_id": 15951101 }
{ "doc_id": "4332249", "title": "Modeling the Correlations of Relations for Knowledge Graph Embedding", "abstract": "Knowledge graph embedding, which maps the entities and relations into low-dimensional vector spaces, has demonstrated its effectiveness in many tasks such as link prediction and relation extraction. Typical methods include TransE, TransH, and TransR. All these methods map different relations into the vector space separately and the intrinsic correlations of these relations are ignored. It is obvious that there exist some correlations among relations because different relations may connect to a common entity. For example, the triples (Steve Jobs, PlaceOfBrith, California) and (Apple Inc., Location, California) share the same entity California as their tail entity. We analyze the embedded relation matrices learned by TransE/TransH/TransR, and find that the correlations of relations do exist and they are showed as low-rank structure over the embedded relation matrix. It is natural to ask whether we can leverage these correlations to learn better embeddings for the entities and relations in a knowledge graph. In this paper, we propose to learn the embedded relation matrix by decomposing it as a product of two low-dimensional matrices, for characterizing the low-rank structure. The proposed method, called TransCoRe (Translation-Based Method via Modeling the Correlations of Relations), learns the embeddings of entities and relations with translation-based framework. Experimental results based on the benchmark datasets of WordNet and Freebase demonstrate that our method outperforms the typical baselines on link prediction and triple classification tasks.", "sha": "d6508e8825a6a1281ad415de47a2f108d98df87d", "corpus_id": 4332249 }
{ "doc_id": "11033817", "title": "Multi-Relational Latent Semantic Analysis", "abstract": "We present Multi-Relational Latent Semantic Analysis (MRLSA) which generalizes Latent Semantic Analysis (LSA). MRLSA provides an elegant approach to combining multiple relations between words by constructing a 3-way tensor. Similar to LSA, a lowrank approximation of the tensor is derived using a tensor decomposition. Each word in the vocabulary is thus represented by a vector in the latent semantic space and each relation is captured by a latent square matrix. The degree of two words having a specific relation can then be measured through simple linear algebraic operations. We demonstrate that by integrating multiple relations from both homogeneous and heterogeneous information sources, MRLSA achieves stateof-the-art performance on existing benchmark datasets for two relations, antonymy and is-a.", "sha": "27b70cd1e5322d0c60b7d68803fcdaf132114f39", "corpus_id": 11033817 }
{ "doc_id": "6968353", "title": "Solving job-shop scheduling problems by genetic algorithm", "abstract": "The Job-shop Scheduling Problem (JSP) is one of the extremely hard problems because it requires a very large combinatorial search space and involves precedence constraints between machines. The traditional algorithm used to solve the problem is the branch-and-bound method, which takes considerable computing time when the size of the problem is large. We propose a new method for solving JSP using a Genetic Algorithm (GA) and demonstrate its efficiency on the standard benchmark of job-shop scheduling problems. Some important points of GA are how to represent the schedules as individuals and how to design the genetic operators for that representation in order to produce better results.", "sha": "d504a72e40ecee5c2e721629e7368a959b18c681", "corpus_id": 6968353 }
{ "doc_id": "23945941", "title": "Impact of Online Customer Reviews and Incentives on the Product Sales at the Online Retail Store: An Empirical Study on Video Game Titles at Amazon.com", "abstract": "This study investigates the impact of the customer reviews and the incentives on the product sales at the online retail store. Two customer review factors (i.e. average review ratings and the number of reviews) and two customer incentives factors (i.e. price discounts and special shipping offers) are used for the regression analysis. With the sales ranking data collected from the video game titles at Amazon.com, this study analyzes both the direct effects of the four factors and the interaction effects between the customer review factors and the incentives factors towards the product sales. The analysis result reveals the relationships mostly exist as hypothesized. The findings support both the direct and interaction effects of the customer reviews and the incentive factors on the product sales. Based upon the findings, discussions are provided with regard to the academic and practical contributions.", "sha": "13208546a0b3d8cdfc5673369193fabc912d85ea", "corpus_id": 23945941 }
{ "doc_id": "42979768", "title": "What Makes a Helpful Online Review? A Study of Customer Reviews on Amazon.com", "abstract": "Customer reviews are increasingly available online for a wide range of products and services. They supplement other information provided by electronic storefronts such as product descriptions, reviews from experts, and personalized advice generated by automated recommendation systems. While researchers have demonstrated the benefits of the presence of customer reviews to an online retailer, a largely uninvestigated issue is what makes customer reviews helpful Carol Saunders was the accepting senior editor for this paper. Both authors contributed equally to this paper. to a consumer in the process of making a purchase decision. Drawing on the paradigm of search and experience goods from information economics, we develop and test a model of customer review helpfulness. An analysis of 1,587 reviews from Amazon.com across six products indicated that review extremity, review depth, and product type affect the perceived helpfulness of the review. Product type moderates the effect of review extremity on the helpfulness of the review. For experience goods, reviews with extreme ratings are less helpful than reviews with moderate ratings. For both product types, review depth has a positive effect on the helpfulness of the review, but the product type moderates the effect of review depth on the helpfulness of the review. Review depth has a greater positive effect on the helpfulness of the review for search goods than for experience goods. We discuss the implications of our findings for both theory and practice.", "sha": "eddb53ec2f7723fe205d56bed1f284f8174ba7e2", "corpus_id": 42979768 }
{ "doc_id": "25427927", "title": "Simultaneously Color-Depth Super-Resolution with Conditional Generative Adversarial Network", "abstract": "Recently, Generative Adversarial Network (GAN) has been found wide applications in style transfer, image-to-image translation and image super-resolution. In this paper, a colordepth conditional GAN is proposed to concurrently resolve the problems of depth super-resolution and color super-resolution in 3D videos. Firstly, given the low-resolution depth image and low-resolution color image, a generative network is proposed to leverage mutual information of color image and depth image to enhance each other in consideration of the geometry structural dependency of color-depth image in the same scene. Secondly, three loss functions, including data loss, total variation loss, and 8-connected gradient difference loss are introduced to train this generative network in order to keep generated images close to the real ones, in addition to the adversarial loss. Experimental results demonstrate that the proposed approach produces highquality color image and depth image from low-quality image pair, and it is superior to several other leading methods. Besides, the applications of the proposed method in other tasks are image smoothing and edge detection at the same time.", "sha": "f7e9b7830f0a955da5ee680702759a2e0da0cd73", "corpus_id": 25427927 }
{ "doc_id": "3351169", "title": "Let there be color!: joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification", "abstract": "We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing approaches based on CNN. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations.", "sha": "5c6a0a8d993edf86846ac7c6be335fba244a59f8", "corpus_id": 3351169 }
{ "doc_id": "1849990", "title": "Learning Deep Features for Scene Recognition using Places Database", "abstract": "[1] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 2007. [2] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. 2007. [3] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Proc. CVPR, 2006. [4] L.-J. Li and L. Fei-Fei. What, where and who? classifying events by scene and object recognition. In Proc. ICCV, 2007. [5] G. Patterson and J. Hays. Sun attribute database: Discovering, annotating, and recognizing scene attributes. In Proc. CVPR, 2012.", "sha": "1d827e24143e5fdfe709d33b7b13a9a24d402efd", "corpus_id": 1849990 }
{ "doc_id": "17150201", "title": "Determining the Camera Response from Images: What Is Knowable?", "abstract": "An image acquired by a camera consists of measured intensity values which are related to scene radiance by a function called the camera response function. Knowledge of this response is necessary for computer vision algorithms which depend on scene radiance. One way the response has been determined is by establishing a mapping of intensity values between images taken with different exposures. We call this mapping the intensity mapping function. In this paper, we address two basic questions. What information from a pair of images taken at different exposures is needed to determine the intensity mapping function? Given this function, can the response of the camera and the exposures of the images be determined? We completely determine the ambiguities associated with the recovery of the response and the ratios of the exposures. We show all methods that have been used to recover the response break these ambiguities by making assumptions on the exposures or on the form of the response. We also show when the ratio of exposures can be recovered directly from the intensity mapping, without recovering the response. We show that the intensity mapping between images is determined solely by the intensity histograms of the images. We describe how this allows determination of the intensity mapping between images without registration. This makes it possible to determine the intensity mapping in sequences with some motion of both the camera and objects in the scene.", "sha": "3cfab079ab6acda8cbda0193fcd53b51a9f73dfe", "corpus_id": 17150201 }
{ "doc_id": "15490115", "title": "Evaluation of workspace of a spherical robotic wrist", "abstract": "In this paper, the problem of workspace analysis of spherical parallel manipulators (SPMs) is addressed with respect to a spherical robotic wrist. The wrist is designed following a modular approach and capable of an unlimited rotation of rolling. An equation dealing with singularity surfaces is derived and branches of the singularity surfaces are identified. By using the Euler parameters, the singularity surfaces are generated in a solid unit sphere, the workspace analysis and dexterity evaluation hence being able to be performed in the confined region of the sphere. Examples of workspace evaluation of the spherical wrist and general SPMs are included to demonstrate the application of the proposed method.", "sha": "91e1c292a3d956745186eb847ce0d111fe2acc08", "corpus_id": 15490115 }
{ "doc_id": "6420886", "title": "Singularity analysis of closed-loop kinematic chains", "abstract": "This paper presents an analysis of the different kinds of singularities encountered in closed-loop kinematic chains. A general classification of these singularities in three main groups, which is based on the properties of the Jacobian matrices of the chain, is described. The identification of the singular configurations is particularly relevant for hard automation modules or robotic devices based on closed kinematic chains, such as linkages and parallel manipulators. Examples are given to illustrate the application of the method to these mechanical systems.", "sha": "ebb6f78a2d3b57b98bd1ac50b551962f169bf808", "corpus_id": 6420886 }
{ "doc_id": "6601541", "title": "Breast self-examination practices and the effect of a planned training program in Western Turkey.", "abstract": "This cross-sectional and descriptive study was aimed to evaluate women breast self-examination (BSE) practice and effects of a planned educational programme for breast cancer and BSE. The sample of the study consisted of 266 women. The study data were collected by a questionnaire in six months periods as two times in a month in which the periods were defined and announced to all women. After that all the women were informed about BSE. The status of performing BSE of women (n=146) was evaluated. They were interviewed on phone after 6 months. The collected data were analyzed by using statistical SPSS program. The average age of women was 35.68±7.54. It is also determined that (61.3%) had no knowledge about BSE, (87.6%) had examined clinical breast examination (CBE) in a year and half of them (50.8%) never practiced BSE, (29.0%) had BSE regularly every month. Concerning the status of BSE practice before the education and after the education significant difference is found statistically (p<0.00). The significance of this study is that it is to give education about breast cancer and BSE for raising awareness among women.", "sha": "41be1660f69c5220e2dcec3fb3b6c14b7360b643", "corpus_id": 6601541 }
{ "doc_id": "13886394", "title": "Mixed Reality: A Survey", "abstract": "This chapter presents an overview of the Mixed Reality (MR) paradigm, which proposes to overlay our real-world environment with digital, computer-generated objects. It presents example applications and outlines limitations and solutions for their technical implementation. In MR systems, users perceive both the physical environment around them and digital elements presented through, for example, the use of semitransparent displays. By its very nature, MR is a highly interdisciplinary field engaging signal processing, computer vision, computer graphics, user interfaces, human factors, wearable computing, mobile computing, information visualization, and the design of displays and sensors. This chapter presents potential MR applications, technical challenges in realizing MR systems, as well as issues related to usability and collaboration in MR. It separately presents a section offering a selection of MR projects which have either been partly or fully undertaken at Swiss universities and rounds off with a section on current challenges and trends.", "sha": "6fa717574ecadf095615012c68b72d584fb45df4", "corpus_id": 13886394 }
{ "doc_id": "266177", "title": "Collaborative gaming in augmented reality", "abstract": "We introduce a local collaborative environment for gaming. In our setup multiple users can interact with the virtual game and the real surroundings at the same time. They are able to communicate with other players during the game. We describe an augmented reality setup for multiple users with see-trough head-mounted displays, allowing dedicated stereoscopic views and individualized interaction for each user. We use face-snapping for fast and precise direct object manipulation. With face snapping and the subdivision of the gaming space into spatial regions, the semantics of actions can be derived out of geometric actions of the user. Further, we introduce a layering concept allowing individual views onto the common data structure. The layer concept allows to make privacy management very easy by simply manipulating the common data structure. Moreover, assigning layers to spatial regions carefully, a special privacy management is often not necessary. Moving objects from one region into another will automatically change their visibility and privacy for each participant. We demonstrate our system with two example board-games: Virtual Roulette and MahJongg, both relying heavily on social communication and the need of a private space.", "sha": "4efe1b5f0e355164b1b1d91fdd25ccb0de5c9589", "corpus_id": 266177 }
{ "doc_id": "2441174", "title": "A Model and Architecture of REBOK (Requirements Engineering Body of Knowledge) and Its Evaluation", "abstract": "Requirements engineering has been extensively developed as a discipline. Many statistics on software development indicate that the requirements process is the most influential to both the success and failure of software development. However, practitioners still find it difficult to learn and apply requirements engineering. As a guideline for practitioners to learn and apply requirements engineering, we developed REBOK (Requirements Engineering Body Of Knowledge). During its development, we found there is no common model of BOKs in software engineering. This article proposes the model and architecture of the REBOK body of knowledge and its proof of concept.", "sha": "73a8b38c959112429cdd1a2a8a9ad21d4126420f", "corpus_id": 2441174 }
{ "doc_id": "1012733", "title": "Evolution and classification of the CRISPR–Cas systems", "abstract": "The CRISPR–Cas (clustered regularly interspaced short palindromic repeats–CRISPR-associated proteins) modules are adaptive immunity systems that are present in many archaea and bacteria. These defence systems are encoded by operons that have an extraordinarily diverse architecture and a high rate of evolution for both the cas genes and the unique spacer content. Here, we provide an updated analysis of the evolutionary relationships between CRISPR–Cas systems and Cas proteins. Three major types of CRISPR–Cas system are delineated, with a further division into several subtypes and a few chimeric variants. Given the complexity of the genomic architectures and the extremely dynamic evolution of the CRISPR–Cas systems, a unified classification of these systems should be based on multiple criteria. Accordingly, we propose a 'polythetic' classification that integrates the phylogenies of the most common cas genes, the sequence and organization of the CRISPR repeats and the architecture of the CRISPR–cas loci.", "sha": "a210740eb8dfc357de4e4a46c00eb701b724f2fb", "corpus_id": 1012733 }
{ "doc_id": "3888761", "title": "CRISPR provides acquired resistance against viruses in prokaryotes.", "abstract": "Clustered regularly interspaced short palindromic repeats (CRISPR) are a distinctive feature of the genomes of most Bacteria and Archaea and are thought to be involved in resistance to bacteriophages. We found that, after viral challenge, bacteria integrated new spacers derived from phage genomic sequences. Removal or addition of particular spacers modified the phage-resistance phenotype of the cell. Thus, CRISPR, together with associated cas genes, provided resistance against phages, and resistance specificity is determined by spacer-phage sequence similarity.", "sha": "cd9a15423cda951cc329c682484eeb5902a4a97a", "corpus_id": 3888761 }
{ "doc_id": "17794223", "title": "Spectral-shape optimized FM noise radar for pulse agility", "abstract": "A non-repeating, spectrum-optimized FM waveform suitable for radar pulse agility is described. The waveform is an alternative manifestation of the recently developed pseudo-random optimized (PRO) FMCW framework for FM noise radar, now implemented in a pulsed form. Each pulsed FM waveform is iteratively optimized to match a spectrum shape with low range sidelobes and good spectral properties, while the random initialization for each waveform ensures sufficient diversity that the resulting optimized form is unique, thereby providing decoherence of range sidelobes. The performance of this pulse-agile structure is examined in simulation and subsequently demonstrated with measured results of static and moving targets.", "sha": "0ec6b458f40baf0676b6bb1c62611b5d5bef2645", "corpus_id": 17794223 }
{ "doc_id": "26205180", "title": "Industry 4.0 as a part of smart cities", "abstract": "In this paper, we propose the conjunction of the Smart City Initiative and the concept of Industry 4.0. The term smart city has been a phenomenon of the last years, which is very inflected especially since 2008 when the world was hit by the financial crisis. The main reasons for the emergence of the Smart City Initiative are to create a sustainable model for cities and preserve quality of life of their citizens. The topic of the smart city cannot be seen only as a technical discipline, but different economic, humanitarian or legal aspects must be involved as well. In the concept of Industry 4.0, the Internet of Things (IoT) shall be used for the development of so-called smart products. Sub-components of the product are equipped with their own intelligence. Added intelligence is used both during the manufacturing of a product as well as during subsequent handling, up to continuous monitoring of the product lifecycle (smart processes). Other important aspects of the Industry 4.0 are Internet of Services (IoS), which includes especially intelligent transport and logistics (smart mobility, smart logistics), as well as Internet of Energy (IoE), which determines how the natural resources are used in proper way (electricity, water, oil, etc.). IoT, IoS, IoP and IoE can be considered as an element that can create a connection of the Smart City Initiative and Industry 4.0 - Industry 4.0 can be seen as a part of smart cities. Interconnection of these systems can be expected to change - transport processes from design logistic processes through to their online optimization with respect to the chosen objective function and the latest information from the transport infrastructure. Linking information from process-based Industry 4.0 with intelligent transport systems of the smart city could create very effective, demand-oriented and higher productivity of manufacturing enterprises as well as sustainable development of society.", "sha": "123e84deec469e6ebbaacc8885ed30bcfe10bdab", "corpus_id": 26205180 }
{ "doc_id": "2964334", "title": "How Virtualization, Decentralization and Network Building Change the Manufacturing Landscape: An Industry 4.0 Perspective", "abstract": "The German manufacturing industry has to withstand an increasing global competition on product quality and production costs. As labor costs are high, several industries have suffered severely under the relocation of production facilities towards aspiring countries, which have managed to close the productivity and quality gap substantially. Established manufacturing companies have recognized that customers are not willing to pay large price premiums for incremental quality improvements. As a consequence, many companies from the German manufacturing industry adjust their production focusing on customized products and fast time to market. Leveraging the advantages of novel production strategies such as Agile Manufacturing and Mass Customization, manufacturing companies transform into integrated networks, in which companies unite their core competencies. Hereby, virtualization of the process- and supply-chain ensures smooth inter-company operations providing real-time access to relevant product and production information for all participating entities. Boundaries of companies deteriorate, as autonomous systems exchange data, gained by embedded systems throughout the entire value chain. By including Cyber-Physical Systems, advanced communication between machines is tantamount to their dialogue with humans. The increasing utilization of information and communication technology allows digital engineering of products and production processes alike. Modular simulation and modeling techniques allow decentralized units to flexibly alter products and thereby enable rapid product innovation. The present article describes the developments of Industry 4.0 within the literature and reviews the associated research streams. Hereby, we analyze eight scientific journals with regards to the following research fields: Individualized production, end-to-end engineering in a virtual process chain and production networks. We employ cluster analysis to assign sub-topics into the respective research field. To assess the practical implications, we conducted face-to-face interviews with managers from the industry as well as from the consulting business using a structured interview guideline. The results reveal reasons for the adaption and refusal of Industry 4.0 practices from a managerial point of view. Our findings contribute to the upcoming research stream of Industry 4.0 and support decision-makers to assess their need for transformation towards Industry 4.0 practices. Keywords—Industry 4.0, Mass Customization, Production networks, Virtual Process-Chain. Malte Brettel, chairholder, is with the Aachen University (RWTH), Kackertstraße 7, 52072 Aachen (e-mail: brettel@win.rwth-aachen.de). Niklas Friederichsen is with the Aachen University (RWTH), Kackertstraße 7, 52072 Aachen, (corresponding author; phone: +49/(0)241 80 99397; e-mail: friederichsen@win.rwth-aachen.de). Michael Keller and Marius Rosenberg are with the Aachen University (RWTH), Kackertstraße 7, 52072 Aachen (e-mail: keller@win.rwthaachen.de, rosenberg@win.rwth-aachen.de).", "sha": "51f7f08c8e464d0942035c8dd4cc441ede9d336a", "corpus_id": 2964334 }
{ "doc_id": "4133482", "title": "Phishing Detection: A Literature Survey", "abstract": "This article surveys the literature on the detection of phishing attacks. Phishing attacks target vulnerabilities that exist in systems due to the human factor. Many cyber attacks are spread via mechanisms that exploit weaknesses found in end-users, which makes users the weakest element in the security chain. The phishing problem is broad and no single silver-bullet solution exists to mitigate all the vulnerabilities effectively, thus multiple techniques are often implemented to mitigate specific attacks. This paper aims at surveying many of the recently proposed phishing mitigation techniques. A high-level overview of various categories of phishing mitigation techniques is also presented, such as: detection, offensive defense, correction, and prevention, which we believe is critical to presenting where the phishing detection techniques fit in the overall mitigation process.", "sha": "6d7f9304cf5ae6b27376ba136b8dfc681a844993", "corpus_id": 4133482 }
{ "doc_id": "2358270", "title": "Simultaneous co-clustering and learning to address the cold start problem in recommender systems", "abstract": "Recommender Systems (RSs) are powerful and popular tools for e-commerce. To build their recommendations, RSs make use of varied data sources, which capture the characteristics of items, users, and their transactions. Despite recent advances in RS, the cold start problem is still a relevant issue that deserves further attention, and arises due to the lack of prior information about new users and new items. To minimize system degradation, a hybrid approach is presented that combines collaborative filtering recommendations with demographic information. The approach is based on an existing algorithm, SCOAL (Simultaneous Co-Clustering and Learning), and provides a hybrid recommendation approach that can address the (pure) cold start problem, where no collaborative information (ratings) is available for new users. This relaxation of assumptions yields better predictions despite the lack of information about the new user. Experiments using real-world datasets show the effectiveness of the", "sha": "34e5e40234c2b65031d499c6dae35039e5141f7d", "corpus_id": 2358270 }
{ "doc_id": "7212610", "title": "A social recommender mechanism for location-based group commerce", "abstract": "With the rapid growth of social media platforms, numerous group commerce websites, which exploit both the advantages of price discounts and experience value, have emerged. Moreover, the popularity of sophisticated mobile devices brings great commercial opportunities for local stores to gain publicity. In this research, considering user preference, geographic convenience, and friends’ influence, a group-coupon recommender system is proposed for promoting location-sensitive products. The results of experiments conducted on Facebook indicate that the proposed mechanism could accurately recommend products and satisfactorily provide a companion list to customers, significantly increasing willingness to purchase by taking advantage of the power of social influence.", "sha": "38e679092efee8e3ca9b2b10e6b147b279449930", "corpus_id": 7212610 }
{ "doc_id": "632612", "title": "Latent Class Models for Collaborative Filtering", "abstract": "This paper presents a statistical approach to collaborative filtering and investigates the use of latent class models for predicting individual choices and preferences based on observed preference behavior. Two models are discussed and compared: the aspect model, a probabilistic latent space model which models individual preferences as a convex combination of preference factors, and the two-sided clustering model, which simultaneously partitions persons and objects into clusters. We present EM algorithms for different variants of the aspect model and derive an approximate EM algorithm based on a variational principle for the two-sided clustering model. The benefits of the different models are experimentally investigated on a large movie data set.", "sha": "a7ef69e55244e3fa0b065746d596441103b293a5", "corpus_id": 632612 }
{ "doc_id": "15767595", "title": "The Analysis of Facial Beauty: An Emerging Area of Research in Pattern Analysis", "abstract": "Much research presented recently supports the idea that the human perception of attractiveness is data-driven and largely irrespective of the perceiver. This suggests using pattern analysis techniques for beauty analysis. Several scientific papers on this subject are appearing in image processing, computer vision and pattern analysis contexts, or use techniques of these areas. In this paper, we will survey the recent studies on automatic analysis of facial beauty, and discuss research lines and practical applications.", "sha": "f789b2b75a6f66bd211be5613802d105eb191f65", "corpus_id": 15767595 }
{ "doc_id": "2300568", "title": "Data-driven enhancement of facial attractiveness", "abstract": "When human raters are presented with a collection of shapes and asked to rank them according to their aesthetic appeal, the results often indicate that there is a statistical consensus among the raters. Yet it might be difficult to define a succinct set of rules that capture the aesthetic preferences of the raters. In this work, we explore a data-driven approach to aesthetic enhancement of such shapes. Specifically, we focus on the challenging problem of enhancing the aesthetic appeal (or the attractiveness) of human faces in frontal photographs (portraits), while maintaining close similarity with the original.\n The key component in our approach is an automatic facial attractiveness engine trained on datasets of faces with accompanying facial attractiveness ratings collected from groups of human raters. Given a new face, we extract a set of distances between a variety of facial feature locations, which define a point in a high-dimensional \"face space\". We then search the face space for a nearby point with a higher predicted attractiveness rating. Once such a point is found, the corresponding facial distances are embedded in the plane and serve as a target to define a 2D warp field which maps the original facial features to their adjusted locations. The effectiveness of our technique was experimentally validated by independent rating experiments, which indicate that it is indeed capable of increasing the facial attractiveness of most portraits that we have experimented with.", "sha": "73c4963d308bd54e61bc46544f4552aad5dd1d91", "corpus_id": 2300568 }
{ "doc_id": "13904313", "title": "Nonrigid 2D/3D Registration of Coronary Artery Models With Live Fluoroscopy for Guidance of Cardiac Interventions", "abstract": "A 2D/3D nonrigid registration method is proposed that brings a 3D centerline model of the coronary arteries into correspondence with bi-plane fluoroscopic angiograms. The registered model is overlaid on top of interventional angiograms to provide surgical assistance during image-guided chronic total occlusion procedures, thereby reducing the uncertainty inherent in 2D interventional images. The proposed methodology is divided into two parts: global structural alignment and local nonrigid registration. In both cases, vessel centerlines are automatically extracted from the 2D fluoroscopic images, and serve as the basis for the alignment and registration algorithms. In the first part, an energy minimization method is used to estimate a global affine transformation that aligns the centerline with the angiograms. The performance of nine general purpose optimizers has been assessed for this problem, and detailed results are presented. In the second part, a fully nonrigid registration method is proposed and used to compensate for any local shape discrepancy. This method is based on a variational framework, and uses a simultaneous matching and reconstruction process to compute a nonrigid registration. With a typical run time of less than 3 s, the algorithms are fast enough for interactive applications. Experiments on five different subjects are presented and show promising results.", "sha": "f4b220217511f5949f6338fd6c70545c497c44ef", "corpus_id": 13904313 }
{ "doc_id": "8776562", "title": "Chronic ulcerative stomatitis and lichen planus: just a coincidence or a direct link between the two diseases?", "abstract": "Chronic ulcerative stomatitis (CUS) is characterized by painful exacerbating and remitting oral erosions and ulcerations. A very characteristic direct immunofluorescence (DIF) pattern differentiates CUS from other immune-mediated oral vesiculo-erosive conditions. The clinical and histopathological features of CUS are very similar to erosive oral lichen planus. A middle-aged woman had CUS confirmed by DIF and chronic plantar ulceration. Histology of the lesion on the sole showed features of lichen planus (LP). DIF of the plantar lesion showed the same pattern as the oral lesion of CUS. The relationship between CUS and erosive (ulcerative) LP of the foot is discussed. Chloroquine improved the oral lesions while oral cyclosporine A ameliorated both oral and plantar lesions.", "sha": "846048276f42bb1675b4edcf523f933b049240bc", "corpus_id": 8776562 }
{ "doc_id": "5824586", "title": "Chronic ulcerative stomatitis: a distinct clinical entity?", "abstract": "Chronic ulcerative stomatitis (CUS) is a mucocutaneous disorder which is characterised by persistent oral mucosal ulceration. The clinical appearance is often reminiscent of oral lichen planus (OLP) leading to erroneous diagnoses. The immune mediated inhibition of the CUS protein (CUSP) is implicated in the pathogenesis of CUS. CUSP acts as an anti-apoptotic protein and when its action is prevented it may result in significant epithelial injury. The objective of this article is to present the first documented case of CUS in South Africa, with relevant reference to current international literature. CUS should be considered in patients previously diagnosed with OLP but who are unresponsive to glucocorticosteroid therapy. The condition can be successfully managed using hydroxychloroquine.", "sha": "21c32bb07aca6ea3f79dce7ef27c8fb937e99b93", "corpus_id": 5824586 }
{ "doc_id": "52920018", "title": "CRED: A Deep Residual Network of Convolutional and Recurrent Units for Earthquake Signal Detection", "abstract": "Earthquake signal detection is at the core of observational seismology. A good detection algorithm should be sensitive to small and weak events with a variety of waveform shapes, robust to background noise and non-earthquake signals, and efficient for processing large data volumes. Here, we introduce the Cnn-Rnn Earthquake Detector (CRED), a detector based on deep neural networks. The network uses a combination of convolutional layers and bi-directional long-short-term memory units in a residual structure. It learns the time-frequency characteristics of the dominant phases in an earthquake signal from three component data recorded on a single station. We train the network using 500,000 seismograms (250k associated with tectonic earthquakes and 250k identified as noise) recorded in Northern California and tested it with an F-score of 99.95. The robustness of the trained model with respect to the noise level and non-earthquake signals is shown by applying it to a set of semi-synthetic signals. The model is applied to one month of continuous data recorded at Central Arkansas to demonstrate its efficiency, generalization, and sensitivity. Our model is able to detect more than 700 microearthquakes as small as -1.3 ML induced by hydraulic fracturing, far away from the training region. The performance of the model is compared with STA/LTA, template matching, and FAST algorithms. Our results indicate an efficient and reliable performance of CRED. This framework holds great promise in lowering the detection threshold while minimizing false positive detection rates.", "sha": "6d67bb7124987b24b728b2bba47f43e87ca50df1", "corpus_id": 52920018 }
{ "doc_id": "13652959", "title": "Classification of Anomaly Based Intrusion Detection Systems in VANET", "abstract": "Article history: Received 13 June 2015 Accepted 28 July 2015 Available online 5 August 2015", "sha": "4c5581534b4daf8c28402999b67407f200beca4c", "corpus_id": 13652959 }
{ "doc_id": "14602748", "title": "ECPP: Efficient Conditional Privacy Preservation Protocol for Secure Vehicular Communications", "abstract": "In this paper, we introduce an efficient conditional privacy preservation (ECPP) protocol in vehicular ad hoc networks (VANETs) to address the issue on anonymous authentication for safety messages with authority traceability. The proposed protocol is characterized by the generation of on-the-fly short-time anonymous keys between on-board units (OBUs) and roadside units (RSUs), which can provide fast anonymous authentication and privacy tracking while minimizing the required storage for short-time anonymous keys. We demonstrate the merits gained by the proposed protocol through extensive analysis.", "sha": "42dd6cc25ae28a47ec1fcf4b5b982bd9a5a4feb5", "corpus_id": 14602748 }
{ "doc_id": "16952099", "title": "Application-driven dynamic vertical scaling of virtual machines in resource pools", "abstract": "Most modern hypervisors offer powerful resource control primitives such as reservations, limits, and shares for individual virtual machines (VMs). These primitives provide a means to dynamic vertical scaling of VMs in order for the virtual applications to meet their respective service level objectives (SLOs). VMware DRS offers an additional resource abstraction of a resource pool (RP) as a logical container representing an aggregate resource allocation for a collection of VMs. In spite of the abundant research on translating application performance goals to resource requirements, the implementation of VM vertical scaling techniques in commercial products remains limited. In addition, no prior research has studied automatic adjustment of resource control settings at the resource pool level. In this paper, we present AppRM, a tool that automatically sets resource controls for both virtual machines and resource pools to meet application SLOs. AppRM contains a hierarchy of virtual application managers and resource pool managers. At the application level, AppRM translates performance objectives into the appropriate resource control settings for the individual VMs running that application. At the resource pool level, AppRM ensures that all important applications within the resource pool can meet their performance targets by adjusting controls at the resource pool level. Experimental results under a variety of dynamically changing workloads composed by multi-tiered applications demonstrate the effectiveness of AppRM. In all cases, AppRM is able to deliver application performance satisfaction without manual intervention.", "sha": "04c0c3d9f08c77de9ba448d64825c4f556c2de99", "corpus_id": 16952099 }
{ "doc_id": "3000798", "title": "Integrated multilayered triboelectric nanogenerator for harvesting biomechanical energy from human motions.", "abstract": "We demonstrate a new flexible multilayered triboelectric nanogenerator (TENG) with extremely low cost, simple structure, small size (3.8 cm×3.8 cm×0.95 cm) and lightweight (7 g) by innovatively integrating five layers of units on a single flexible substrate. Owing to the unique structure and nanopore-based surface modification on the metal surface, the instantaneous short-circuit current (Isc) and the open-circuit voltage (Voc) could reach 0.66 mA and 215 V with an instantaneous maximum power density of 9.8 mW/cm2 and 10.24 mW/cm3. This is the first 3D integrated TENG for enhancing the output power. Triggered by press from normal walking, the TENG attached onto a shoe pad was able to instantaneously drive multiple commercial LED bulbs. With the flexible structure, the TENG can be further integrated into clothes or even attached onto human body without introducing sensible obstruction and discomfort to human motions. The novel design of TENG demonstrated here can be applied to potentially achieve self-powered portable electronics.", "sha": "aa0b03716596832e93f67f99d5c3fed553f89f44", "corpus_id": 3000798 }
{ "doc_id": "18598076", "title": "Energy scavenging for mobile and wireless electronics", "abstract": "Energy harvesting has grown from long-established concepts into devices for powering ubiquitously deployed sensor networks and mobile electronics. Systems can scavenge power from human activity or derive limited energy from ambient heat, light, radio, or vibrations. Ongoing power management developments enable battery-powered electronics to live longer. Such advances include dynamic optimization of voltage and clock rate, hybrid analog-digital designs, and clever wake-up procedures that keep the electronics mostly inactive. Exploiting renewable energy resources in the device's environment, however, offers a power source limited by the device's physical survival rather than an adjunct energy store. Energy harvesting's true legacy dates to the water wheel and windmill, and credible approaches that scavenge energy from waste heat or vibration have been around for many decades. Nonetheless, the field has encountered renewed interest as low-power electronics, wireless standards, and miniaturization conspire to populate the world with sensor networks and mobile devices. This article presents a whirlwind survey through energy harvesting, spanning historic and current developments.", "sha": "a64c79bbf6033e08ef5e72878c606b00d86cb5d9", "corpus_id": 18598076 }
{ "doc_id": "7935715", "title": "Use of Tetra-ammonium Tetrakis(4-Sulphonato)Phenyl Porphyrin for Pseudomonas and Bacillus Cell Imaging", "abstract": "The use of tetraammonium tetrakis(4-sulphonato)phenyl porphyrin (TPPS), a water-soluble anionic compound, as a stain to analyse bacterial cells using fluorescent microscopy was investigated. TPPS was effectively used to analyse two different bacteria: Pseudomonas aeruginosa and Bacillus cereus. The variation in brightness with varying concentrations of TPPS was studied. The patterns of variations for these bacteria were found to be the same, but with consistently higher brightness for Bacillus cereus.", "sha": "6bb7bf731d29732a0ff4c2c7a8e723386f189bc7", "corpus_id": 7935715 }
{ "doc_id": "40983165", "title": "Classifying Diabetic Retinopathy using Deep Learning Architecture", "abstract": "A recent development in state-of-the-art technology, machine learning plays a vital role in image processing applications such as biomedical and satellite image processing, and in Artificial Intelligence tasks such as object identification and recognition. Globally, the number of patients suffering from diabetic retinopathy is growing rapidly, and in its earliest stages the disease cannot be diagnosed by normal visual inspection. Detecting diabetic retinopathy as early as possible is increasingly necessary to stop vision loss in patients with prolonged diabetes, including young sufferers. The severity of diabetic retinopathy is based on the presence of microaneurysms, exudates, neovascularization, and haemorrhages. Experts categorize diabetic retinopathy into five stages: normal, mild, moderate, or severe non-proliferative (NPDR), and proliferative diabetic retinopathy (PDR). The proposed deep learning approach, a Deep Convolutional Neural Network (DCNN), gives high accuracy in classifying these stages through spatial analysis. A DCNN is a complex architecture inspired by human visual perception. Amongst other supervised algorithms, the proposed solution aims to find a better and optimized way of classifying fundus images with few pre-processing techniques. Our proposed architecture, deployed with dropout layer techniques, yields around 94-96 percent accuracy. It was also tested with popular publicly available fundus image databases such as STARE, DRIVE, and the Kaggle datasets.", "sha": "b2b28d4a7c024799b35b97d0a383898f9996b187", "corpus_id": 40983165 }
{ "doc_id": "18834969", "title": "Retinal Vessel Segmentation using Deep Neural Networks", "abstract": "Automatic segmentation of blood vessels in fundus images is of great importance as eye diseases as well as some systemic diseases cause observable pathologic modifications. It is a binary classification problem: for each pixel we consider two possible classes (vessel or non-vessel). We use a GPU implementation of deep max-pooling convolutional neural networks to segment blood vessels. We test our method on publiclyavailable DRIVE dataset and our results demonstrate the high effectiveness of the deep learning approach. Our method achieves an average accuracy and AUC of 0.9466 and 0.9749, respectively.", "sha": "7f0a5ec34054fed46d2bcf3e2f94351fafc404c1", "corpus_id": 18834969 }
{ "doc_id": "1614187", "title": "Distributed Submodular Maximization", "abstract": "Many large-scale machine learning problems–clustering, non-parametric learning, kernel machines, etc.–require selecting a small yet representative subset from a large dataset. Such problems can often be reduced to maximizing a submodular set function subject to various constraints. Classical approaches to submodular optimization require centralized access to the full dataset, which is impractical for truly large-scale problems. In this paper, we consider the problem of submodular function maximization in a distributed fashion. We develop a simple, two-stage protocol GreeDi, that is easily implemented using MapReduce style computations. We theoretically analyze our approach, and show that under certain natural conditions, performance close to the centralized approach can be achieved. We begin with monotone submodular maximization subject to a cardinality constraint, and then extend this approach to obtain approximation guarantees for (not necessarily monotone) submodular maximization subject to more general constraints including matroid or knapsack constraints. In our extensive experiments, we demonstrate the effectiveness of our approach on several applications, including sparse Gaussian process inference and exemplar based clustering on tens of millions of examples using Hadoop.", "sha": "3ec5afaee732157a1039d25b953aec38bc151638", "corpus_id": 1614187 }
{ "doc_id": "1139867", "title": "Human-robot embodied interaction in hallway settings: a pilot user study", "abstract": "This paper explores the problem of embodied interaction between a service robot and a person in a hallway setting. For operation in environments with people that have limited experience with robots, a behaviour that signals awareness of the persons and safety of motion is essential. A control strategy based on human spatial behaviour studies is presented that adopts human-robot interaction patterns similar to those used in person-person encounters. The results of a pilot study with human subjects are presented in which the users have evaluated the acceptability of the robot behaviour patterns during passage with respect to three basic parameters: the robot speed, the signaling distance at which the robot starts the maneuver and the lateral distance from the person for safe passage. The study has shown a good overall user response and has provided some useful indications on how to design a hallway passage behaviour that could be most acceptable to human users.", "sha": "3bebe672b9a5f89fef2d3fe4f57d1df0f70ea5a1", "corpus_id": 1139867 }
{ "doc_id": "7896306", "title": "Social and collaborative aspects of interaction with a service robot", "abstract": "To an increasing extent, robots are being designed to become a part of the lives of ordinary people. This calls for new models of the interaction between humans and robots, taking advantage of human social and communicative skills. Furthermore, human-robot relationships must be understood in the context of use of robots, and based on empirical studies of humans and robots in real settings. This paper discusses human interaction with a service robot, departing from our experiences of designing a fetch-and-carry robot for motion-impaired users in an office environment. We present the motivations behind the design of the robot and its interfaces, experiences from a recent field study, and research issues emerging from this work. A conclusion is that addressing only the primary user in service robotics is unsatisfactory, and that the focus should be on the setting, activities and social interactions of the group of people where the robot must fit in.", "sha": "33985757686a2e643ff6eba0070269dab711054f", "corpus_id": 7896306 }
{ "doc_id": "18511127", "title": "An inexpensive scheme for calibration of a colour monitor in terms of CIE standard coordinates", "abstract": "The Commission Internationale d'Eclairage system of colorimetry is a method of measuring colours that has been standardized, and is widely used by industries involved with colour. Knowing the CIE coordinates of a colour allows it to be reproduced easily and exactly in many different media. For this reason graphics installations which utilize colour extensively ought to have the capability of knowing the CIE coordinates of displayed colours, and of displaying colours of given CIE coordinates. Such a capability requires a function which transforms video monitor gun voltages (RGB colour space) into CIE coordinates (XYZ colour space), and vice versa. The function incorporates certain monitor parameters. The purpose of this paper is to demonstrate the form that such a function takes, and to show how the necessary monitor parameters can be measured using little more than a simple light meter. Because space is limited, and because each user is likely to implement the calibration differently, few technical details are given, but principles and methods are discussed in sufficient depth to allow the full use of the system. In addition, several visual checks which can be used for quick verification of the integrity of the calibration are described.\n The paper begins with an overview of the CIE system of colorimetry. It continues with a general discussion of transformations from RGB colour space to XYZ colour space, after which a detailed step-by-step procedure for monitor calibration is presented.", "sha": "b3584a2ed0ca9c48f52a75522b63ab6142202254", "corpus_id": 18511127 }
{ "doc_id": "15263786", "title": "Topology optimization for galvanic coupled wireless intra-body communication", "abstract": "Implanted sensors and actuators in the human body promise in-situ health monitoring and rapid advancements in personalized medicine. We propose a new paradigm where such implants may communicate wirelessly through a technique called as galvanic coupling, which uses weak electrical signals and the conduction properties of body tissues. While galvanic coupling overcomes the problem of massive absorption of RF waves in the body, the unique intra-body channel raises several questions on the topology of the implants and the external (i.e., on skin) data collection nodes. This paper makes the first contributions towards (i) building an energy-efficient topology through optimal placement of data collection points/relays using measurement-driven tissue channel models, and (ii) balancing the energy consumption over the entire implant network so that the application needs are met. We achieve this via a two-phase iterative clustering algorithm for the implants and formulate an optimization problem that decides the position of external data-gathering points. Our theoretical results are validated via simulations and experimental studies on real tissues, with demonstrated increase in the network lifetime.", "sha": "916451b78a20b123287591de64b68a43c40971be", "corpus_id": 15263786 }
{ "doc_id": "9830896", "title": "A Monte Carlo algorithm for fast projective clustering", "abstract": "We propose a mathematical formulation for the notion of optimal projective cluster, starting from natural requirements on the density of points in subspaces. This allows us to develop a Monte Carlo algorithm for iteratively computing projective clusters. We prove that the computed clusters are good with high probability. We implemented a modified version of the algorithm, using heuristics to speed up computation. Our extensive experiments show that our method is significantly more accurate than previous approaches. In particular, we use our techniques to build a classifier for detecting rotated human faces in cluttered images.", "sha": "2754769f30f4d347396a4b0016b9d48eff91a9f9", "corpus_id": 9830896 }
{ "doc_id": "50358421", "title": "Fundamental Voltage Balancing Analysis for Flying Capacitors", "abstract": "In this chapter, fundamentals of the proposed soft-switching multilevel inverter for active power filter applications are investigated and discussed with the voltage balancing for flying capacitors. Voltage balancing in the flying capacitor multilevel inverter (FCMI) is very important in maintaining the voltage balancing between the flying capacitors for a safe operation. This means that the flying capacitor voltage should be stable at any operational conditions. For that purpose, the inverter theoretically requires the symmetric switching of control signals with a phase shifting. In spite of the symmetric control scheme, a voltage unbalance of the flying capacitor in practical implementations may be observed due to unequal parameters of the inverter caused by 1) different IGBT tolerance, 2) different dv/dt, 3) different value of Cf, etc [D33]. In this study, voltage balancing of the flying capacitor as well as the DC link capacitors under soft-switching operations is characterized through both simulation and experimentation.", "sha": "cf4f9c9707c660d0163f6c61dafae88f05920091", "corpus_id": 50358421 }
{ "doc_id": "7384959", "title": "Data processing and semantics for advanced internet of things (IoT) applications: modeling, annotation, integration, and perception", "abstract": "This tutorial presents tools and techniques for effectively utilizing the Internet of Things (IoT) for building advanced applications, including the Physical-Cyber-Social (PCS) systems. The issues and challenges related to IoT, semantic data modelling, annotation, knowledge representation (e.g. modelling for constrained environments, complexity issues and time/location dependency of data), integration, analysis, and reasoning will be discussed. The tutorial will describe recent developments on creating annotation models and semantic description frameworks for IoT data (e.g. such as W3C Semantic Sensor Network ontology). A review of enabling technologies and common scenarios for IoT applications from the data and knowledge engineering point of view will be discussed. Information processing, reasoning, and knowledge extraction, along with existing solutions related to these topics will be presented. The tutorial summarizes state-of-the-art research and developments on PCS systems, IoT related ontology development, linked data, domain knowledge integration and management, querying large-scale IoT data, and AI applications for automated knowledge extraction from real world data.", "sha": "773321cf179b499d81dd026c95703857bf993cb8", "corpus_id": 7384959 }
{ "doc_id": "2630520", "title": "The Internet of Things: A survey", "abstract": "This paper addresses the Internet of Things. Main enabling factor of this promising paradigm is the integration of several technologies and communications solutions. Identification and tracking technologies, wired and wireless sensor and actuator networks, enhanced communication protocols (shared with the Next Generation Internet), and distributed intelligence for smart objects are just the most relevant. As one can easily imagine, any serious contribution to the advance of the Internet of Things must necessarily be the result of synergetic activities conducted in different fields of knowledge, such as telecommunications, informatics, electronics and social science. In such a complex scenario, this survey is directed to those who want to approach this complex discipline and contribute to its development. Different visions of this Internet of Things paradigm are reported and enabling technologies reviewed. What emerges is that still major issues shall be faced by the research community. The most relevant among them are addressed in details. 2010 Elsevier B.V. All rights reserved.", "sha": "2b00e526490d65f2ec00107fb7bcce0ace5960c7", "corpus_id": 2630520 }
{ "doc_id": "52015330", "title": "All One Needs to Know about Fog Computing and Related Edge Computing Paradigms: A Complete Survey", "abstract": "With the Internet of Things (IoT) becoming part of our daily life and our environment, we expect rapid growth in the number of connected devices. IoT is expected to connect billions of devices and humans to bring promising advantages for us. With this growth, fog computing, along with its related edge computing paradigms, such as multi-access edge computing (MEC) and cloudlet, are seen as promising solutions for handling the large volume of securitycritical and time-sensitive data that is being produced by the IoT. In this paper, we first provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences. Next, we provide a taxonomy of research topics in fog computing, and through a comprehensive survey, we summarize and categorize the efforts on fog computing and its related computing paradigms. Finally, we provide challenges and future directions for research in fog computing.", "sha": "e0dff5f27ae278eccde8b722347c4d5dcabe8769", "corpus_id": 52015330 }
{ "doc_id": "14142552", "title": "Analysis of fisher information and the Cramer-Rao bound for nonlinear parameter estimation after compressed sensing", "abstract": "In this paper, we analyze the impact of compressed sensing with random matrices on Fisher information and the CRB for estimating unknown parameters in the mean value function of a multivariate normal distribution. We consider the class of random compression matrices that satisfy a version of the Johnson-Lindenstrauss lemma, and we derive analytical lower and upper bounds on the CRB for estimating parameters from randomly compressed data. These bounds quantify the potential loss in CRB as a function of Fisher information of the non-compressed data. In our numerical examples, we consider a direction of arrival estimation problem and compare the actual loss in CRB with our bounds.", "sha": "1849314e1f9ae95f9a1f89e29afeac3a6ea8da21", "corpus_id": 14142552 }
{ "doc_id": "2269521", "title": "Compressive sampling", "abstract": "Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the image, i.e. the number of pixels in the image. This paper surveys an emerging theory which goes by the name of “compressive sampling” or “compressed sensing,” and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct images or signals of scientific interest accurately and sometimes even exactly from a number of samples which is far smaller than the desired resolution of the image/signal, e.g. the number of pixels in the image. It is believed that compressive sampling has far reaching implications. For example, it suggests the possibility of new data acquisition protocols that translate analog information into digital form with fewer sensors than what was considered necessary. This new sampling theory may come to underlie procedures for sampling and compressing data simultaneously. In this short survey, we provide some of the key mathematical insights underlying this new theory, and explain some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science. Mathematics Subject Classification (2000). Primary 00A69, 41-02, 68P30; Secondary 62C65.", "sha": "6f14a338e8837fae059cab41064155cd84cb9cd5", "corpus_id": 2269521 }
{ "doc_id": "1386980", "title": "Being accurate is not enough: how accuracy metrics have hurt recommender systems", "abstract": "Recommender systems have shown great potential to help users find interesting and relevant items from within a large information space. Most research up to this point has focused on improving the accuracy of recommender systems. We believe that not only has this narrow focus been misguided, but has even been detrimental to the field. The recommendations that are most accurate according to the standard metrics are sometimes not the recommendations that are most useful to users. In this paper, we propose informal arguments that the recommender community should move beyond the conventional accuracy metrics and their associated experimental methodologies. We propose new user-centric directions for evaluating recommender systems.", "sha": "e84e9f82f49eb5a21cdf2306e42478773ff9e82a", "corpus_id": 1386980 }
{ "doc_id": "16482613", "title": "Opinion mining of customer feedback data on the web", "abstract": "As people leave on the Web their opinions on products and services they have used, it has become important to develop methods of (semi-)automatically classifying and gauging them. The task of analyzing such data, collectively called customer feedback data, is known as opinion mining. Opinion mining consists of several steps, and multiple techniques have been proposed for each step. In this paper, we survey and analyze various techniques that have been developed for the key tasks of opinion mining. On the basis of our survey and analysis of the techniques, we provide an overall picture of what is involved in developing a software system for opinion mining.", "sha": "5258cc875aeddb05c9323fb86a3411b25844f6a1", "corpus_id": 16482613 }
{ "doc_id": "5842708", "title": "A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization", "abstract": "The Rocchio relevance feedback algorithm is one of the most popular and widely applied learning methods from information retrieval. Here, a probabilistic analysis of this algorithm is presented in a text categorization framework. The analysis gives theoretical insight into the heuristics used in the Rocchio algorithm, particularly the word weighting scheme and the similarity metric. It also suggests improvements which lead to a probabilistic variant of the Rocchio classifier. The Rocchio classifier, its probabilistic variant, and a naive Bayes classifier are compared on six text categorization tasks. The results show that the probabilistic algorithms are preferable to the heuristic Rocchio classifier not only because they are more well-founded, but also because they achieve better performance.", "sha": "094fc15bc058b0d62a661a1460885a9490bdb1bd", "corpus_id": 5842708 }
{ "doc_id": "14764558", "title": "Semantic distance in WordNet: An experimental, application-oriented evaluation of five measures", "abstract": "Five different proposed measures of similarity or semantic distance in WordNet were experimentally compared by examining their performance in a real-word spelling correction system. It was found that Jiang and Conrath’s measure gave the best results overall. That of Hirst and St-Onge seriously over-related, that of Resnik seriously under-related, and those of Lin and of Leacock and Chodorow fell in between.", "sha": "402200052fe06aa43e54d40a73168bf7c9619bac", "corpus_id": 14764558 }
{ "doc_id": "30917288", "title": "Substrate integrated waveguide 180-degree narrow-wall directional coupler", "abstract": "Substrate integrated waveguide (SIW) is a new type of guided wave structure which could be implemented in both microwave- and millimeter wave integrated circuits. The study combines narrow-wall directional coupler with slow wave structure in SIW technique and achieved an over 5.4% bandwidth with ±0.3 dB power equality, -20 dB isolation and 180 degrees phase shift between the output- and coupled ports as well for a coupler of length of 0.59. Its low cost and small size make it attractive for many narrowband applications.", "sha": "3443636ad5e5253bf91d119ca0aac0d31995db73", "corpus_id": 30917288 }
{ "doc_id": "8072549", "title": "Dispersion characteristics of substrate integrated rectangular waveguide", "abstract": "Dispersion properties of the substrate integrated rectangular waveguide (SIRW) are rigorously obtained using the BI-RME method combined with the Floquet's theorem. Our analysis shows that the SIRW basically has the same guided-wave characteristics as the conventional rectangular waveguide. Empirical equations are derived from the calculated dispersion curves in order to estimate the cutoff frequency of the first two dominant modes of the SIRW. To validate the analysis results, an SIRW guide was designed and measured. Very good agreements between the experimental and theoretical results were obtained.", "sha": "aaf7be312dd3d22b032cbbf9530ad56bf8e7800b", "corpus_id": 8072549 }
{ "doc_id": "16593034", "title": "A framework for cloud-based context-aware information services for citizens in smart cities", "abstract": "In the context of smart cities, public participation and citizen science are key ingredients for informed and intelligent planning decisions and policy-making. However, citizens face a practical challenge in formulating coherent information sets from the large volumes of data available to them. These large data volumes materialise due to the increased utilisation of information and communication technologies in urban settings and local authorities’ reliance on such technologies to govern urban settlements efficiently. To encourage effective public participation in urban governance of smart cities, the public needs to be facilitated with the right contextual information about the characteristics and processes of their urban surroundings in order to contribute to the aspects of urban governance that affect them such as socio-economic activities, quality of life, citizens well-being etc. The cities on the other hand face challenges in terms of crowd sourcing with quality data collection and standardisation, services inter-operability, provisioning of computational and data storage infrastructure. In this paper, we highlight the issues that give rise to these multi-faceted challenges for citizens and public administrations of smart cities, identify the artefacts and stakeholders involved at both ends of the spectrum (data/service producers and consumers) and propose a conceptual framework to address these challenges. Based upon this conceptual framework, we present a Cloud-based architecture for context-aware citizen services for smart cities and discuss the components of the architecture through a common smart city scenario. A proof of concept implementation of the proposed architecture is also presented and evaluated. The results show the effectiveness of the cloud-based infrastructure for the development of a contextual service for citizens.", "sha": "0f72f7ac150bdf160c44f2f48151d03f05e1c329", "corpus_id": 16593034 }
{ "doc_id": "16084014", "title": "The effects of sleep restriction and extension on school-age children: what a difference an hour makes.", "abstract": "This study assessed the effects of modest sleep restriction and extension on children's neurobehavioral functioning (NBF). The sleep of 77 children (age: M = 10.6 years; range = 9.1-12.2 years) was monitored for 5 nights with activity monitors. These children (39 boys and 38 girls) were all attending regular 4th- and 6th-grade classes. Their NBF was assessed using computerized tests on the 2nd day of their normal sleep schedule. On the 3rd evening, the children were asked to extend or restrict their sleep by an hour on the following 3 nights. Their NBF was reassessed on the 6th day following the experimental sleep manipulation. Sleep restriction led to improved sleep quality and to reduced reported alertness. The sleep manipulation led to significant differential effects on NBF measures. These effects may have significant developmental and clinical implications.", "sha": "9140e15dc3419da86debe50365a52c87125937ad", "corpus_id": 16084014 }
{ "doc_id": "19846816", "title": "Effects of sleep deprivation on performance: a meta-analysis.", "abstract": "To quantitatively describe the effects of sleep loss, we used meta-analysis, a technique relatively new to the sleep research field, to mathematically summarize data from 19 original research studies. Results of our analysis of 143 study coefficients and a total sample size of 1,932 suggest that overall sleep deprivation strongly impairs human functioning. Moreover, we found that mood is more affected by sleep deprivation than either cognitive or motor performance and that partial sleep deprivation has a more profound effect on functioning than either long-term or short-term sleep deprivation. In general, these results indicate that the effects of sleep deprivation may be underestimated in some narrative reviews, particularly those concerning the effects of partial sleep deprivation.", "sha": "ff5c3b48e4a2c46de00839c86b3322735d42a907", "corpus_id": 19846816 }
{ "doc_id": "32564733", "title": "Automatic sleep/wake identification from wrist activity.", "abstract": "The purpose of this study was to develop and validate automatic scoring methods to distinguish sleep from wakefulness based on wrist activity. Forty-one subjects (18 normals and 23 with sleep or psychiatric disorders) wore a wrist actigraph during overnight polysomnography. In a randomly selected subsample of 20 subjects, candidate sleep/wake prediction algorithms were iteratively optimized against standard sleep/wake scores. The optimal algorithms obtained for various data collection epoch lengths were then prospectively tested on the remaining 21 subjects. The final algorithms correctly distinguished sleep from wakefulness approximately 88% of the time. Actigraphic sleep percentage and sleep latency estimates correlated 0.82 and 0.90, respectively, with corresponding parameters scored from the polysomnogram (p < 0.0001). Automatic scoring of wrist activity provides valuable information about sleep and wakefulness that could be useful in both clinical and research applications.", "sha": "7a40eb26769989cc233528dc50e29360de8d7125", "corpus_id": 32564733 }
{ "doc_id": "477428", "title": "A new solution to the gamer’s dilemma", "abstract": "Luck (2009) argues that gamers face a dilemma when it comes to performing certain virtual acts. Most gamers regularly commit acts of virtual murder, and take these acts to be morally permissible. They are permissible because unlike real murder, no one is harmed in performing them; their only victims are computer-controlled characters, and such characters are not moral patients. What Luck points out is that this justification equally applies to virtual pedophilia, but gamers intuitively think that such acts are not morally permissible. The result is a dilemma: either gamers must reject the intuition that virtual pedophilic acts are impermissible and so accept partaking in such acts, or they must reject the intuition that virtual murder acts are permissible, and so abstain from many (if not most) extant games. While the prevailing solution to this dilemma has been to try and find a morally relevant feature to distinguish the two cases, I argue that a different route should be pursued. It is neither the case that all acts of virtual murder are morally permissible, nor are all acts of virtual pedophilia impermissible. Our intuitions falter and produce this dilemma because they are not sensitive to the different contexts in which games present virtual acts.", "sha": "f376b2e949e98c733d0365d17eccb824e950f615", "corpus_id": 477428 }
{ "doc_id": "207743525", "title": "Is it wrong to play violent video games?", "abstract": "Many people have a strong intuition that there is something morallyobjectionable about playing violent video games, particularly withincreases in the number of people who are playing them and the games'alleged contribution to some highly publicized crimes. In this paper,I use the framework of utilitarian, deontological, and virtue ethicaltheories to analyze the possibility that there might be some philosophicalfoundation for these intuitions. I raise the broader question of whetheror not participating in authentic simulations of immoral acts in generalis wrong. I argue that neither the utilitarian, nor the Kantian hassubstantial objections to violent game playing, although they offersome important insights into playing games in general and what it ismorally to be a ``good sport.'' The Aristotelian, however, has a plausibleand intuitive way to protest participation in authentic simulations ofviolent acts in terms of character: engaging in simulated immoral actserodes one's character and makes it more difficult for one to live afulfilled eudaimonic life.", "sha": "de7343874c2614a122f0fec20a78717a9593a6c6", "corpus_id": 207743525 }
{ "doc_id": "207195759", "title": "Bias in Computer Systems", "abstract": "From an analysis of actual cases, three categories of bias in computer systems have been developed: preexisting, technical, and emergent. Preexisting bias has its roots in social institutions, practices, and attitudes. Technical bias arises from technical constraints or considerations. Emergent bias arises in a context of use. Although others have pointed to bias in particular computer systems and have noted the general problem, we know of no comparable work that examines this phenomenon comprehensively and which offers a framework for understanding and remedying it. We conclude by suggesting that freedom from bias should be counted among the select set of criteria—including reliability, accuracy, and efficiency—according to which the quality of systems in use in society should be judged.", "sha": "2e85ebac82315863cd1b0426de96b0d641839e1a", "corpus_id": 207195759 }
{ "doc_id": "49557534", "title": "A Shared Attention Mechanism for Interpretation of Neural Automatic Post-Editing Systems", "abstract": "Automatic post-editing (APE) systems aim to correct the systematic errors made by machine translators. In this paper, we propose a neural APE system that encodes the source (src) and machine translated (mt) sentences with two separate encoders, but leverages a shared attention mechanism to better understand how the two inputs contribute to the generation of the post-edited (pe) sentences. Our empirical observations have shown that when the mt is incorrect, the attention shifts weight toward tokens in the src sentence to properly edit the incorrect translation. The model has been trained and evaluated on the official data from the WMT16 and WMT17 APE IT domain English-German shared tasks. Additionally, we have used the extra 500K artificial data provided by the shared task. Our system has been able to reproduce the accuracies of systems trained with the same data, while at the same time providing better interpretability.", "sha": "738723e30f5b0a98e82b27209a563c1b2ca5bae9", "corpus_id": 49557534 }
{ "doc_id": "12771016", "title": "Attention Strategies for Multi-Source Sequence-to-Sequence Learning", "abstract": "Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches to combine the outputs of attention mechanisms over each source sequence, flat and hierarchical. We compare the proposed methods with existing techniques and present results of systematic evaluation of those methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks.", "sha": "7302ffd9387199ece08b2b3510ed0467d74a41db", "corpus_id": 12771016 }
{ "doc_id": "9422035", "title": "Neural networks: a pattern recognition perspective", "abstract": null, "sha": "7b1ec59bf6707fe97261dbaffbd4643dc36f4a95", "corpus_id": 9422035 }
{ "doc_id": "15769256", "title": "Real-Time High-Resolution Sparse Voxelization with Application to Image-Based Modeling", "abstract": "We present a system for real-time, high-resolution, sparse voxelization of an image-based surface model. Our approach consists of a coarse-to-fine voxel representation and a collection of parallel processing steps. Voxels are stored as a list of unsigned integer triples. An oracle kernel decides, for each voxel in parallel, whether to keep or cull its voxel from the list based on an image consistency criterion of its projection across cameras. After a prefix sum scan, kept voxels are subdivided and the process repeats until projected voxels are pixel size. These voxels are drawn to a render target and shaded as a weighted combination of their projections into a set of calibrated RGB images. We apply this technique to the problem of smooth visual hull reconstruction of human subjects based on a set of live image streams. We demonstrate that human upper body shapes can be reconstructed to giga voxel resolution at greater than 30 fps on modern graphics hardware.", "sha": "4293901383fad784b0471a00515c924727663848", "corpus_id": 15769256 }
{ "doc_id": "3345516", "title": "KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera", "abstract": "KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Only the depth data from Kinect is used to track the 3D pose of the sensor and reconstruct, geometrically precise, 3D models of the physical scene in real-time. The capabilities of KinectFusion, as well as the novel GPU-based pipeline are described in full. Uses of the core system for low-cost handheld scanning, and geometry-aware augmented reality and physics-based interactions are shown. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time multi-touch interactions anywhere, allowing any planar or non-planar reconstructed physical surface to be appropriated for touch.", "sha": "02f872de0dc3f1d54ba68f9d751b7828f64d189c", "corpus_id": 3345516 }
{ "doc_id": "13252593", "title": "Split-Plot Designs : What , Why , and How", "abstract": "T HIS provocative remark has been attributed to the famous industrial statistician, Cuthbert Daniel, by Box et al. (2005) in their well-known text on the design of experiments. Split-Plot experiments were invented by Fisher (1925) and their importance in industrial experimentation has been long recognized (Yates (1936)). It is also well known that many industrial experiments are fielded as split-plot experiments and yet erroneously analyzed as if they were completely randomized designs. This is frequently the case when hard-to-change factors exist and economic constraints preclude the use of complete randomization. Recent work, most notably by Lucas and his coworkers (Anbari and Lucas (1994), Ganju and Lucas (1997, 1999, 2005), Ju and Lucas (2002), Webb et al. (2004)) has demonstrated that many experi-", "sha": "413c4d91d597c74b384c0cc0a7810eb1fc68aec2", "corpus_id": 13252593 }
{ "doc_id": "4890857", "title": "Dual Policy Iteration", "abstract": "A novel class of Approximate Policy Iteration (API) algorithms have recently demonstrated impressive practical performance (e.g., ExIt [1], AlphaGo-Zero [2]). This new family of algorithms maintains, and alternately optimizes, two policies: a fast, reactive policy (e.g., a deep neural network) deployed at test time, and a slow, non-reactive policy (e.g., Tree Search), that can plan multiple steps ahead. The reactive policy is updated under supervision from the non-reactive policy, while the non-reactive policy is improved via guidance from the reactive policy. In this work we study this class of Dual Policy Iteration (DPI) strategy in an alternating optimization framework and provide a convergence analysis that extends existing API theory. We also develop a special instance of this framework which reduces the update of non-reactive policies to model-based optimal control using learned local models, and provides a theoretically sound way of unifying model-free and model-based RL approaches with unknown dynamics. We demonstrate the efficacy of our approach on various continuous control Markov Decision Processes.", "sha": "49b42aa77b764af561d63aee591e114c6dc03d8b", "corpus_id": 4890857 }
{ "doc_id": "3961213", "title": "Simple random search provides a competitive approach to reinforcement learning", "abstract": "A common belief in model-free reinforcement learning is that methods based on random search in the parameter space of policies exhibit significantly worse sample complexity than those that explore the space of actions. We dispel such beliefs by introducing a random search method for training static, linear policies for continuous control problems, matching state-of-the-art sample efficiency on the benchmark MuJoCo locomotion tasks. Our method also finds a nearly optimal controller for a challenging instance of the Linear Quadratic Regulator, a classical problem in control theory, when the dynamics are not known. Computationally, our random search algorithm is at least 15 times more efficient than the fastest competing model-free methods on these benchmarks. We take advantage of this computational efficiency to evaluate the performance of our method over hundreds of random seeds and many different hyperparameter configurations for each benchmark task. Our simulations highlight a high variability in performance in these benchmark tasks, suggesting that commonly used estimations of sample efficiency do not adequately evaluate the performance of RL algorithms.", "sha": "abc8415a73e9056fdd8a7bf529b2c86898f29501", "corpus_id": 3961213 }
{ "doc_id": "1260973", "title": "Use of platelet-rich fibrin in regenerative dentistry: a systematic review", "abstract": "Research across many fields of medicine now points towards the clinical advantages of combining regenerative procedures with platelet-rich fibrin (PRF). This systematic review aimed to gather the extensive number of articles published to date on PRF in the dental field to better understand the clinical procedures where PRF may be utilized to enhance tissue/bone formation. Manuscripts were searched systematically until May 2016 and separated into the following categories: intrabony and furcation defect regeneration, extraction socket management, sinus lifting procedures, gingival recession treatment, and guided bone regeneration (GBR) including horizontal/vertical bone augmentation procedures. Only human randomized clinical trials were included for assessment. In total, 35 articles were selected and divided accordingly (kappa = 0.94). Overall, the use of PRF has been most investigated in periodontology for the treatment of periodontal intrabony defects and gingival recessions where the majority of studies have demonstrated favorable results in soft tissue management and repair. Little to no randomized clinical trials were found for extraction socket management although PRF has been shown to significantly decrease by tenfold dry sockets of third molars. Very little to no data was available directly investigating the effects of PRF on new bone formation in GBR, horizontal/vertical bone augmentation procedures, treatment of peri-implantitis, and sinus lifting procedures. Much investigation now supports the use of PRF for periodontal and soft tissue repair. Despite this, there remains a lack of well-conducted studies demonstrating convincingly the role of PRF during hard tissue bone regeneration. Future human randomized clinical studies evaluating the use of PRF on bone formation thus remain necessary. PRF was shown to improve soft tissue generation and limit dimensional changes post-extraction, with little available data to date supporting its use in GBR.", "sha": "bb1fba133709e636eb76daed5c02cb6ce9191490", "corpus_id": 1260973 }
{ "doc_id": "32226166", "title": "Enriching in-person encounters through social media: A study on family connectedness for the elderly", "abstract": "Social media sites have become immensely popular. In 2010 it was estimated that Americans spent a quarter of their online time using social networking sites (SNSs) and blogs. Prior studies have shown how people spend more time socializing through digital communication services such as SNSs reducing face-to-face interaction. Individuals limited offline interactions cause a sense of self-perception of being less is able to adapt and move the SNS's social capital outside the desktop into a domestic setting to support older adults' offline interactions with their family. The findings of a 21 week deployment study uncovered the offline practices surrounding the use of Tlatoque and its social implications toward the existing family ties (n = 30). Results qualitatively indicate that the content shared in SNSs strengthens older adults' social network by enriching and complementing traditional social engagements such as those conducted over the phone or in-person. © 2013 Elsevier Ltd. All rights reserved.", "sha": "f225904ad023c99fd5f3a1fc4bf0bcb1868beb52", "corpus_id": 32226166 }
{ "doc_id": "3207591", "title": "Connection strategies: Social capital implications of Facebook-enabled communication practices", "abstract": "This study assesses whether Facebook users have different ‘connection strategies,’ a term which describes a suite of Facebook-related relational communication activities, and explores the relationship between these connection strategies and social capital. Survey data (N = 450) from a random sample of undergraduate students reveal that only social information-seeking behaviors contribute to perceptions of social capital; connection strategies that focus on strangers or close friends do not. We also find that reporting more ‘actual’ friends on the site is predictive of social capital, but only to a point. We believe the explanation for these findings may be that the identity information in Facebook serves as a social lubricant, encouraging individuals to convert latent to weak ties and enabling them to broadcast requests for support or information.", "sha": "b6791cb29d07b8f65a34bf7add6e9455c226dba1", "corpus_id": 3207591 }
{ "doc_id": "16784421", "title": "Signals in Social Supernets", "abstract": "Social network sites (SNSs) provide a new way to organize and navigate an egocentric social network. Are they a fad, briefly popular but ultimately useless? Or are they the harbingers of a new and more powerful social world, where the ability to maintain an immense network—a social \"supernet\"—fundamentally changes the scale of human society? This article presents signaling theory as a conceptual framework with which to assess the transformative potential of SNSs and to guide their design to make them into more effective social tools. It shows how the costs associated with adding friends and evaluating profiles affect the reliability of users' self-presentation; examines strategies such as information fashion and risk-taking; and shows how these costs and strategies affect how the publicly-displayed social network aids the establishment of trust, identity, and cooperation—the essential foundations for an expanded social world. Grooming, Gossip, and Online Friending Social ties provide many benefits, including companionship, access to information, and emotional and material support (Granovetter, 1983; Wellman, Garton, & Haythornthwaite, 1997; Wellman & Gulia, 1999). Increasing the number of ties increases access to these benefits, although time and cognitive constraints preclude indefinite expansions of one's personal network. Yet if maintaining ties were to become more temporally efficient and cognitively effective, it should be possible to increase the scale of one's social world—to create a \"supernet\" with many more ties than is feasible without socially assistive tools. The question this article addresses is whether social network sites (SNSs) are a technology that can bring this about. In the wild, apes groom each other, picking through fur to remove parasitic bugs. This behavior helps with hygiene and is relaxing and pleasant for the recipient. Perhaps most importantly, it establishes social bonds: Apes who groom each other are more likely to help each other and not fight. Long grooming sessions are time consuming, however. Since the ape must also spend many hours finding food, sleeping, etc., it is clear that grooming can sustain only a limited number of relationships (Dunbar, 1996). In Grooming, Gossip, and the Evolution of Language, Robin Dunbar (1996) argued eloquently that in human societies, language, especially gossip, has taken over the social function of grooming. Instead of removing lice from each other's hair, people check in with friends and colleagues, ask how they are doing, and exchange a few words about common acquaintances, the news, or the local sports team (Dunbar, 1996, 2004). Language is much more efficient than physical grooming, for one can talk to several people at once. Language also helps people learn about cultural norms, evaluate others' behavior, and keep up with the news and shifting opinions of their surrounding community. It makes reputation possible—individuals benefit from the experience of others in determining who is nice, who does good work, and who should be shunned for their dishonest ways. Using language to maintain ties and manage trust, people can form and manage more complex and extensive social networks. Communication technologies expand human social reach (Horrigan, Boase, Rainie, & Wellman, 2006). Email makes communication more efficient: Sending a message to numerous recipients is as easy as sending it to one, and its asynchrony means that there is little need to coordinate interaction. Contact management tools, from paper Rolodexes to complex software systems, increase one's ability to remember large numbers of people (Whittaker, Jones, & Terveen 2002). While these technologies provide some of the support an expanded social world needs, they alone are not sufficient. People need to be able to keep track of ever-changing relationships (Dunbar, 1996; Nardi, Whittaker, Isaacs, Creech, Johnson, & Hainsworth, 2002), to see people within the context of their social relationships (Raub & Weesie, 1990), and, most fundamentally, to know whom to trust (Bacharach & Gambetti, 2001; Good, 2000). Email and contact tools help maintain an expanded collection of individual relationships. Are social network sites the solution for placing these relationships into the greater social context? A page on MySpace, filled with flashing logos, obscure comments, poorly focused yet revealing photos, and laced with twinkling animated gifs, may not look to the casual observer like the harbinger of the next stage in human social evolution. But perhaps it is. SNSs locate people in the context of their acquaintances, provide a framework for maintaining an extensive array of friends and other contacts, and allow for the public display of interpersonal commentary (boyd & Ellison, this issue). At the same time, SNSs are still primitive; it is too early in their development to observe clear evidence that they have transformed society. The goal of this article is to present a theoretical framework with which to a) assess the transformative potential of SNSs and b) develop design guidelines for making them into more effective social tools. The foundation for this analysis is signaling theory, which models why some communications are reliably honest and others are not. The argument begins with an introduction to signaling theory. The next section uses this theory to examine how the fundamental structure of SNSs can bring greater trust and reliability to online self-presentations, how specific site design decisions enhance or weaken their trust-conferring ability, and how seemingly pointless or irrational behaviors, such as online fashion and risk taking, actually signal social information. The final section examines the transformative possibilities of social supernets—not only whether SNSs may bring them about, but if so, in what ways they might change our society. An emphasis of this article is on ways of achieving reliable information about identity and affiliations. There are situations where ephemeral, hidden, or multiple identities are desirable. However, minimal online identity has been easy to create, while it is harder to establish more grounded identities in a fluid and nuanced way. A primary goal of this article is to understand how reliability is encouraged or enforced. For designers of future systems such knowledge is a tool, not a blueprint. Depending on the situation, they should choose the appropriate space between anonymous and whimsical and verified and trustworthy identities and communication.", "sha": "8392a12398718f73df004d742d8edf60477c723a", "corpus_id": 16784421 }
{ "doc_id": "216145151", "title": "PreFix: Switch Failure Prediction in Datacenter Networks", "abstract": "In modern datacenter networks (DCNs), failures of network devices are the norm rather than the exception, and many research efforts have focused on dealing with failures after they happen. In this paper, we take a different approach by predicting failures, thus the operators can intervene and \"fix\" the potential failures before they happen. Specifically, in our proposed system, named PreFix, we aim to determine during runtime whether a switch failure will happen in the near future. The prediction is based on the measurements of the current switch system status and historical switch hardware failure cases that have been carefully labelled by network operators. Our key observation is that failures of the same switch model share some common syslog patterns before failures occur, and we can apply machine learning methods to extract the common patterns for predicting switch failures. Our novel set of features (message template sequence, frequency, seasonality and surge) for machine learning can efficiently deal with the challenges of noises, sample imbalance, and computation overhead. We evaluated PreFix on a data set collected from 9397 switches (3 different switch models) deployed in more than 20 datacenters owned by a top global search engine in a 2-year period. PreFix achieved an average of 61.81% recall and 1.84 * 10^-5 false positive ratio. It outperforms the other failure prediction methods for computers and ISP devices.", "sha": "d6e2b45820dfee9ac48926884d19a30ebf33820b", "corpus_id": 216145151 }
{ "doc_id": "16280635", "title": "Real-time failure prediction in online services", "abstract": "Current data mining techniques used to create failure predictors for online services require massive amounts of data to build, train, and test the predictors. These operations are tedious, time consuming, and are not done in real-time. Also, the accuracy of the resulting predictor is highly compromised by changes that affect the environment and working conditions of the predictor. We propose a new approach to creating a dynamic failure predictor for online services in real-time and keeping its accuracy high during the services run-time changes. We use synthetic transactions during the run-time lifecycle to generate current data about the service. This data is used in its ephemeral state to build, train, test, and maintain an up-to-date failure predictor. We implemented the proposed approach in a large-scale online ad service that processes billions of requests each month in six data centers distributed in three continents. We show that the proposed predictor is able to maintain failure prediction accuracy as high as 86% during online service changes, whereas the accuracy of the state-of-the-art predictors may drop to less than 10%.", "sha": "5cd3c67905a3aa452a14281c6c2f3976dc86f6aa", "corpus_id": 16280635 }
{ "doc_id": "9989340", "title": "NetPilot: automating datacenter network failure mitigation", "abstract": "Driven by the soaring demands for always-on and fast-response online services, modern datacenter networks have recently undergone tremendous growth. These networks often rely on commodity hardware to reach immense scale while keeping capital expenses under check. The downside is that commodity devices are prone to failures, raising a formidable challenge for network operators to promptly handle these failures with minimal disruptions to the hosted services.\n Recent research efforts have focused on automatic failure localization. Yet, resolving failures still requires significant human interventions, resulting in prolonged failure recovery time. Unlike previous work, NetPilot aims to quickly mitigate rather than resolve failures. NetPilot mitigates failures in much the same way operators do -- by deactivating or restarting suspected offending components. NetPilot circumvents the need for knowing the exact root cause of a failure by taking an intelligent trial-and-error approach. The core of NetPilot is comprised of an Impact Estimator that helps guard against overly disruptive mitigation actions and a failure-specific mitigation planner that minimizes the number of trials. We demonstrate that NetPilot can effectively mitigate several types of critical failures commonly encountered in production datacenter networks.", "sha": "46eba995c5371d7966d59549f61c203cecd1d3c7", "corpus_id": 9989340 }
{ "doc_id": "38285975", "title": "An Overview of Access Control Mechanisms for Internet of Things", "abstract": "The integration of the physical world and the cyber system in IoT brings significant challenges to the design of security solutions. Access control is considered to be a critical system component for the protection of data, cyberinfrastructure, and even the physical systems in IoT; however, because of the new characteristics of IoT systems, such as the resource constraints, the large scale and the device heterogeneity, many traditional security solutions including existing access control mechanisms may not be directly applicable in IoT environment. This paper first presents an overview of the existing access control mechanisms and analyzes their applicability in IoT systems. Then, both the challenges in the IoT access control design and the goals for future IoT access control design are identified and discussed.", "sha": "f25b4da7c97feba605497ffb1f335285906a13c7", "corpus_id": 38285975 }
{ "doc_id": "178219", "title": "Rationale for the RBAC96 family of access control models", "abstract": "A family of role-based access control (RBAC) models, referred to here as the RBAC96 models, was recently published by the author and his colleagues. This paper gives our rationale for the major decisions in developing these models and discusses alternatives that were considered.", "sha": "25a7b5d2db857cd86692c45d0e5376088f51aa12", "corpus_id": 178219 }
{ "doc_id": "16729622", "title": "Smart Lighting Solutions for Smart Cities", "abstract": "Smart cities play an increasingly important role for the sustainable economic development of a determined area. Smart cities are considered a key element for generating wealth, knowledge and diversity, both economically and socially. A Smart City is the engine to reach the sustainability of its infrastructure and facilitate the sustainable development of its industry, buildings and citizens. The first goal to reach that sustainability is reduce the energy consumption and the levels of greenhouse gases (GHG). For that purpose, it is required scalability, extensibility and integration of new resources in order to reach a higher awareness about the energy consumption, distribution and generation, which allows a suitable modeling which can enable new countermeasure and action plans to mitigate the current excessive power consumption effects. Smart Cities should offer efficient support for global communications and access to the services and information. It is required to enable a homogenous and seamless machine to machine (M2M) communication in the different solutions and use cases. This work presents how to reach an interoperable Smart Lighting solution over the emerging M2M protocols such as CoAP built over REST architecture. This follows up the guidelines defined by the IP for Smart Objects Alliance (IPSO Alliance) in order to implement and interoperable semantic level for the street lighting, and describes the integration of the communications and logic over the existing street lighting infrastructure.", "sha": "601b569796631591a90bbd7cea17dfa2ac026651", "corpus_id": 16729622 }
{ "doc_id": "41116722", "title": "A novel waveform design for multi-target detection in automotive FMCW radar", "abstract": "FMCW (Frequency Modulation Continuous Wave) radar has been widely used in active safety systems recently. On account of range-velocity processing, ghost targets and missed targets exist in a multi-target situation. To address those issues, a two-step scheme as well as a novel FMCW waveform have been proposed in this paper. The proposed waveform contains four segments: fast up ramp, slow up ramp, flat frequency and slow down ramp in each period. The approximate range can be detected by using the fast up ramp in the first step, then the unambiguous velocity can be calculated during the flat frequency segment in the second step. The combination of the two independent measurements yields, finally, the real targets and eliminates the ghost targets. In addition, the computational complexity of our proposed scheme is 80% lower than ramps with different slopes method, and the effectiveness of our approach is demonstrated by simulation results.", "sha": "dcd3612179580b6678ec6b1baa0bdd0c5fbca043", "corpus_id": 41116722 }
{ "doc_id": "5325247", "title": "Vehicle Detection Techniques for Collision Avoidance Systems: A Review", "abstract": "Over the past decade, vision-based vehicle detection techniques for road safety improvement have gained an increasing amount of attention. Unfortunately, the techniques suffer from robustness due to huge variability in vehicle shape (particularly for motorcycles), cluttered environment, various illumination conditions, and driving behavior. In this paper, we provide a comprehensive survey in a systematic approach about the state-of-the-art on-road vision-based vehicle detection and tracking systems for collision avoidance systems (CASs). This paper is structured based on a vehicle detection processes starting from sensor selection to vehicle detection and tracking. Techniques in each process/step are reviewed and analyzed individually. Two main contributions in this paper are the following: survey on motorcycle detection techniques and the sensor comparison in terms of cost and range parameters. Finally, the survey provides an optimal choice with a low cost and reliable CAS design in vehicle industries.", "sha": "23a73412b3eb7a2843d87c35efaf20423388c4b3", "corpus_id": 5325247 }
{ "doc_id": "3015325", "title": "Vision-based vehicle detection for nighttime with discriminately trained mixture of weighted deformable part models", "abstract": "Vehicle detection at night time is a challenging problem due to low visibility and light distortion caused by motion and illumination in urban environments. This paper presents a method based on the deformable object model for detecting and classifying vehicles by using monocular infra-red cameras. As some features of vehicles, such as headlight and taillights are more visible at night time, we propose a weighted version of the deformable part model. We define weights for different features in the deformable part model of the vehicle and try to learn the weights through an enormous number of positive and negative samples. Experimental results prove the effectiveness of the algorithm for detecting close and medium range vehicles in urban scenes at night time.", "sha": "ee0954b177abc642d4f08db9f23c72febdc3b686", "corpus_id": 3015325 }
{ "doc_id": "18831288", "title": "The SECI model of knowledge creation : some empirical shortcomings", "abstract": "This paper presents a critique of key empirical aspects of Nonaka’s model of knowledge creation. If, as is now widely accepted, it is important to manage knowledge for a variety of reasons, then it is equally important that we have good models to assist this process. Nonaka and his colleagues’ model, in particular the SECI matrix of knowledge conversion, is increasingly being cited by authors in a widening set of disciplines, and has evidently achieved something like a paradigmatic status. However, as I shall argue below, it is not supported by empirical evidence, and some of the conversion modes are not coherent. The paradigmatic status of this theory may thus be unwarranted.", "sha": "83bec5a7aca2abc2c5ae42c16aba4e6a8f6e5662", "corpus_id": 18831288 }
{ "doc_id": "43382537", "title": "The Knowledge-Creating Company by Ikujiro Nonaka", "abstract": null, "sha": "da2fd9e368f14e0bb713fb6f4d7c32b0f72f6bcf", "corpus_id": 43382537 }
{ "doc_id": "22845005", "title": "Characterization of the operating periods of a power transformer by clustering the dissolved gas data", "abstract": "This paper presents an analysis of the different operating periods of an in-service oil immersed power transformer through dissolved gas concentrations. The unsupervised classification by k-means method allows regrouping the periods of operation into classes using the Euclidean distance as a criterion of similarity. The analyzed data describes the evolution of gas concentrations as a function of time. The classes obtained are characterized by the production activities of the different gases, various operating constraints and the incipient failures. These periods also highlight the maintenance actions carried out on the insulating oil.", "sha": "c661883f01fd0b86179c155e791208b398a79dca", "corpus_id": 22845005 }
{ "doc_id": "4370446", "title": "The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory", "abstract": "Stochastic Gradient Descent (SGD) is a fundamental algorithm in machine learning, representing the optimization backbone for training several classic models, from regression to neural networks. Given the recent practical focus on distributed machine learning, significant work has been dedicated to the convergence properties of this algorithm under the inconsistent and noisy updates arising from execution in a distributed environment. However, surprisingly, the convergence properties of this classic algorithm in the standard shared-memory model are still not well-understood. In this work, we address this gap, and provide new convergence bounds for lock-free concurrent stochastic gradient descent, executing in the classic asynchronous shared memory model, against a strong adaptive adversary. Our results give improved upper and lower bounds on the \"price of asynchrony'' when executing the fundamental SGD algorithm in a concurrent setting. They show that this classic optimization tool can converge faster and with a wider range of parameters than previously known under asynchronous iterations. At the same time, we exhibit a fundamental trade-off between the maximum delay in the system and the rate at which SGD can converge, which governs the set of parameters under which this algorithm can still work efficiently.", "sha": "f806980eb6658ce7a7fba99dabebeb2d43c6dc6b", "corpus_id": 4370446 }
{ "doc_id": "205001834", "title": "Learning representations by back-propagating errors", "abstract": "We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1.", "sha": "052b1d8ce63b07fec3de9dbb583772d860b7c769", "corpus_id": 205001834 }
{ "doc_id": "33648715", "title": "Transversus abdominis muscle function in humans.", "abstract": "We used a high-resolution ultrasound to make electrical recordings from the transversus abdominis muscle in humans. The behavior of this muscle was then compared with that of the external oblique and rectus abdominis in six normal subjects in the seated posture. During voluntary efforts such as expiration from functional residual capacity, speaking, expulsive maneuvers, and isovolume \"belly-in\" maneuvers, the transversus in general contracted together with the external oblique and the rectus abdominis. In contrast, during hyperoxic hypercapnia, all subjects had phasic expiratory activity in the transversus at ventilations between 10 and 18 l/min, well before activity could be recorded from either the external oblique or the rectus abdominis. Similarly, inspiratory elastic loading evoked transversus expiratory activity in all subjects but external oblique activity in only one subject and rectus abdominis activity in only two subjects. We thus conclude that in humans 1) the transversus abdominis is recruited preferentially to the superficial muscle layer of the abdominal wall during breathing and 2) the threshold for abdominal muscle recruitment during expiration is substantially lower than conventionally thought.", "sha": "abece381c902864a63dcd4ba933dce88feaee7b3", "corpus_id": 33648715 }
{ "doc_id": "14804485", "title": "Wearable Camera- and Accelerometer-Based Fall Detection on Portable Devices", "abstract": "Robust and reliable detection of falls is crucial especially for elderly activity monitoring systems. In this letter, we present a fall detection system using wearable devices, e.g., smartphones, and tablets, equipped with cameras and accelerometers. Since the portable device is worn by the subject, monitoring is not limited to confined areas, and extends to wherever the subject may travel, as opposed to static sensors installed in certain rooms. Moreover, a camera provides an abundance of information, and the results presented here show that fusing camera and accelerometer data not only increases the detection rate, but also decreases the number of false alarms compared to only accelerometer-based or only camera-based systems. We employ histograms of edge orientations together with the gradient local binary patterns for the camera-based part of fall detection. We compared the performance of the proposed method with that of using original histograms of oriented gradients (HOG) as well as a modified version of HOG. Experimental results show that the proposed method outperforms using original HOG and modified HOG, and provides lower false positive rates for the camera-based detection. Moreover, we have employed an accelerometer-based fall detection method, and fused these two sensor modalities to have a robust fall detection system. Experimental results and trials with actual Samsung Galaxy phones show that the proposed method, combining two different sensor modalities, provides much higher sensitivity, and a significant decrease in the number of false positives during daily activities, compared to accelerometer-only and camera-only methods.", "sha": "e8574b6615da73871de05b5db9a41dec1b7ba891", "corpus_id": 14804485 }
{ "doc_id": "1991617", "title": "On Combining Classifiers", "abstract": "We develop a common theoretical framework for combining classifiers which use distinct pattern representations and show that many existing schemes can be considered as special cases of compound classification where all the pattern representations are used jointly to make a decision. An experimental comparison of various classifier combination schemes demonstrates that the combination rule developed under the most restrictive assumptions—the sum rule—outperforms other classifier combinations schemes. A sensitivity analysis of the various schemes to estimation errors is carried out to show that this finding can be justified theoretically.", "sha": "54801c260df5221a9de533d371d3edcc358b4050", "corpus_id": 1991617 }
{ "doc_id": "11377451", "title": "A Thermal Energy Harvesting Power Supply With an Internal Startup Circuit for Pacemakers", "abstract": "A complete thermal energy harvesting power supply for implantable pacemakers is presented in this paper. The designed power supply includes an internal startup and does not need any external reference voltage. The startup circuit includes a prestartup charge pump (CP) and a startup boost converter. The prestartup CP consists of an ultralow-voltage oscillator followed by a high-efficiency modified Dickson. Forward body biasing is used to effectively reduce the MOS threshold voltages as well as the supply voltage in oscillator and CP. The steady-state circuit includes a high-efficiency boost converter that utilizes a modified maximum powerpoint tracking scheme. The system is designed so that no failure occurs under overload conditions. Using this approach, a thermal energy harvesting power supply has been designed using 180-nm CMOS technology. According to HSPICE simulation results, the circuit operates from input voltages as low as 40 mV provided from a thermoelectric generator and generates output voltages up to 3 V. A maximum power of 130 μW can be obtained from the output of the boost converter, which means that its efficiency is 60%. A minimum voltage of 60 mV and a maximum time of 400 ms are needed for the circuit to start up.", "sha": "9760b0365c419accae5b3f2d17ec7319c073d108", "corpus_id": 11377451 }
{ "doc_id": "14150239", "title": "Sparse Models of Natural Language Text", "abstract": "In statistical text analysis, many learning problems can be formulated as a minimization of a sum of a loss function and a regularization function for a vector of parameters (feature coefficients). The loss function drives the model to learn generalizable patterns from the training data, whereas the regularizer plays two important roles: to prevent the models from capturing idiosyncrasies of the training data (overfitting) and to encode prior knowledge about the model parameters. When learning from high-dimensional data such as text, it has been empirically observed that relatively few dimensions are relevant to the predictive task (Forman, 2003). How can we capitalize on this insight and choose which dimensions are relevant in an informed and principled manner? Sparse regularizers provide a way to select relevant dimensions by means of regularization. However, past work rarely encodes non-trivial prior knowledge that yields sparse solutions through a regularization function. This thesis investigates the applications of sparse models—especially structured sparse models—as a medium to encode linguistically-motivated prior knowledge in textual models to advance NLP systems. We explore applications of sparse NLP models in text categorization, word embeddings, and temporal models of text. Sparse models come with their own challenges, since new instantiations of sparse models often require a specialized optimization method. This thesis also presents optimization methods for the proposed instantiations of sparse models. Therefore, the goals of this thesis are twofold: (i) to show how sparsity can be used to encode linguistic information in statistical text models, and (ii) to develop efficient learning algorithms to solve the resulting optimization problems.", "sha": "125075db7b38b72e7cf226a01dd68e3908a527b9", "corpus_id": 14150239 }
{ "doc_id": "15497435", "title": "Moreau-Yosida Regularization for Grouped Tree Structure Learning", "abstract": "We consider the tree structured group Lasso where the structure over the features can be represented as a tree with leaf nodes as features and internal nodes as clusters of the features. The structured regularization with a pre-defined tree structure is based on a group-Lasso penalty, where one group is defined for each node in the tree. Such a regularization can help uncover the structured sparsity, which is desirable for applications with some meaningful tree structures on the features. However, the tree structured group Lasso is challenging to solve due to the complex regularization. In this paper, we develop an efficient algorithm for the tree structured group Lasso. One of the key steps in the proposed algorithm is to solve the Moreau-Yosida regularization associated with the grouped tree structure. The main technical contributions of this paper include (1) we show that the associated Moreau-Yosida regularization admits an analytical solution, and (2) we develop an efficient algorithm for determining the effective interval for the regularization parameter. Our experimental results on the AR and JAFFE face data sets demonstrate the efficiency and effectiveness of the proposed algorithm.", "sha": "87dfc07b994cfc4b7280602801bdb8a95e2d908f", "corpus_id": 15497435 }
{ "doc_id": "13392386", "title": "A Survey of Context-Aware Mobile Recommendations", "abstract": "Mobile recommender systems target on recommending the right product or information to the right mobile users at anytime and anywhere. It is well known that the contextual information is often the key for the performances of mobile recommendations. Therefore, in this paper, we provide a focused survey of the recent development of context-aware mobile recommendations. After brie°y reviewing the state-of-the-art of recommender systems, we ̄rst discuss the general notion of mobile context and how the contextual information is collected. Then, we introduce the existing approaches to exploit contextual information for modeling mobile recommendations. Furthermore, we summarize several existing recommendation tasks in the mobile scenarios, such as the recommendations in the tourism domain. Finally, we discuss some key issues that are still critical in the ̄eld of context-aware mobile recommendations, including the privacy problem, the energy e±ciency issues, and the design of user interfaces.", "sha": "afd883ea35ba4df3ffbc9e564f453ee956fce05f", "corpus_id": 13392386 }
{ "doc_id": "8598536", "title": "Interpretation and identification of causal mediation.", "abstract": "This article reviews the foundations of causal mediation analysis and offers a general and transparent account of the conditions necessary for the identification of natural direct and indirect effects, thus facilitating a more informed judgment of the plausibility of these conditions in specific applications. I show that the conditions usually cited in the literature are overly restrictive and can be relaxed substantially without compromising identification. In particular, I show that natural effects can be identified by methods that go beyond standard adjustment for confounders, applicable to observational studies in which treatment assignment remains confounded with the mediator or with the outcome. These identification conditions can be validated algorithmically from the diagrammatic description of one's model and are guaranteed to produce unbiased results whenever the description is correct. The identification conditions can be further relaxed in parametric models, possibly including interactions, and permit one to compare the relative importance of several pathways, mediated by interdependent variables.", "sha": "20fb050452107548185580adfe9f1ed521c30532", "corpus_id": 8598536 }
{ "doc_id": "10757981", "title": "Identifiability and exchangeability for direct and indirect effects.", "abstract": "We consider the problem of separating the direct effects of an exposure from effects relayed through an intermediate variable (indirect effects). We show that adjustment for the intermediate variable, which is the most common method of estimating direct effects, can be biased. We also show that even in a randomized crossover trial of exposure, direct and indirect effects cannot be separated without special assumptions; in other words, direct and indirect effects are not separately identifiable when only exposure is randomized. If the exposure and intermediate never interact to cause disease and if intermediate effects can be controlled, that is, blocked by a suitable intervention, then a trial randomizing both exposure and the intervention can separate direct from indirect effects. Nonetheless, the estimation must be carried out using the G-computation algorithm. Conventional adjustment methods remain biased. When exposure and the intermediate interact to cause disease, direct and indirect effects will not be separable even in a trial in which both the exposure and the intervention blocking intermediate effects are randomly assigned. Nonetheless, in such a trial, one can still estimate the fraction of exposure-induced disease that could be prevented by control of the intermediate. Even in the absence of an intervention blocking the intermediate effect, the fraction of exposure-induced disease that could be prevented by control of the intermediate can be estimated with the G-computation algorithm if data are obtained on additional confounding variables.", "sha": "d158680063539d86a18b1d47c8b59df375bd6b9c", "corpus_id": 10757981 }
{ "doc_id": "10899515", "title": "PARA-SNORT : A MULTI-THREAD SNORT ON MULTI-CORE IA PLATFORM", "abstract": "As security threats and network bandwidth increase in a very fast pace, there is a growing interest in designing highperformance network intrusion detection system (NIDS). This paper presents a parallelization strategy for the popular open-source Snort to build a high performance NIDS on multi-core IA platform. A modular design of parallel NIDS based on Snort is proposed in this paper. Named Para-Snort, it enables flexible and easy module design. This paper also analyzed the performance impact of load balancing and multi-pattern matching. Modified-JSQ and AC-WM algorithms are implemented in order to resolve the bottlenecks and improve the performance of the system. Experimental results show that Para-Snort achieves significant speedup of 4 to 6 times for various traces with a 7-thread parallelizing test setup.", "sha": "ab70bdba9fc2085d0683bad28f096be7a1dc9950", "corpus_id": 10899515 }
{ "doc_id": "2198026", "title": "Multiresolution Support for Adaptive Image Restoration Using Neural Networks", "abstract": "Contents Preface vii 1 The wavelet transform 1 i ii CONTENTS 1.6.3 Conclusions on non-wavelet multiresolution approaches 50 2 Multiresolution support and filtering 51 2.", "sha": "af41250ead44668ad1c88eb9975e678d8a930a0f", "corpus_id": 2198026 }
{ "doc_id": "327477", "title": "Wavelets for Computer Graphics : A Primer Part 1 y", "abstract": "Wavelets are a mathematical tool for hierarchically decomposing functions. They allow a function to be described in terms of a coarse overall shape, plus details that range from broad to narrow. Regardless of whether the function of interest is an image, a curve, or a surface, wavelets offer an elegant technique for representing the levels of detail present. This primer is intended to provide people working in computer graphics with some intuition for what wavelets are, as well as to present the mathematical foundations necessary for studying and using them. In Part 1, we discuss the simple case of Haar wavelets in one and two dimensions, and show how they can be used for image compression. In Part 2, we will present the mathematical theory of multiresolution analysis, then develop spline wavelets and describe their use in multiresolution curve and surface editing.", "sha": "489c6e864505eeeea4ab9f97f7f7c02febc02b5d", "corpus_id": 327477 }
{ "doc_id": "46332962", "title": "Sentiment Analysis Using Deep Learning Techniques: A Review", "abstract": "The World Wide Web such as social networks, forums, review sites and blogs generate enormous heaps of data in the form of users views, emotions, opinions and arguments about different social events, products, brands, and politics. Sentiments of users that are expressed on the web has great influence on the readers, product vendors and politicians. The unstructured form of data from the social media is needed to be analyzed and well-structured and for this purpose, sentiment analysis has recognized significant attention. Sentiment analysis is referred as text organization that is used to classify the expressed mind-set or feelings in different manners such as negative, positive, favorable, unfavorable, thumbs up, thumbs down, etc. The challenge for sentiment analysis is lack of sufficient labeled data in the field of Natural Language Processing (NLP). And to solve this issue, the sentiment analysis and deep learning techniques have been merged because deep learning models are effective due to their automatic learning capability. This Review Paper highlights latest studies regarding the implementation of deep learning models such as deep neural networks, convolutional neural networks and many more for solving different problems of sentiment analysis such as sentiment classification, cross lingual problems, textual and visual analysis and product review analysis, etc. Keywords—Sentiment analysis; recurrent neural network; deep neural network; convolutional neural network; recursive neural network; deep belief network", "sha": "889224a64a5bc5f9e965f418a63b6768f7164993", "corpus_id": 46332962 }
{ "doc_id": "36556565", "title": "How Do You Want Your Chatbot? An Exploratory Wizard-of-Oz Study with Young, Urban Indians", "abstract": "As text-messaging chatbots become increasingly “human”, it will be important to understand the personal interactions that users are seeking with a chatbot. What chatbot personalities are most compelling to young, urban users in India? To explore this question, we first conducted exploratory Wizard-of-Oz (WoZ) studies with 14 users that simulated interactions with a hypothetical chatbot. We evaluated three personalities for the chatbot—Maya, a productivity oriented bot with nerd wit; Ada, a fun, flirtatious bot; and Evi, an emotional buddy bot. We followed up with one-on-one interviews with the users discussing their experiences with each of the chatbots, what they liked, and what they did not. Overall our results show that users wanted a chatbot like Maya, who could add value to their life while being a friend, by making useful recommendations. But they also wanted preferred traits of Ada and Evi infused into Maya.", "sha": "e24b275073915d008038294f5ee98a625259ce7c", "corpus_id": 36556565 }
{ "doc_id": "39404420", "title": "A Model of a Social Chatbot", "abstract": null, "sha": "bcf89da9cde7a647db2b63727cac6846fbd9c001", "corpus_id": 39404420 }
{ "doc_id": "3738690", "title": "Microcrystalline cellulose, a direct compression binder in a quality by design environment--a review.", "abstract": "The ICH quality vision introduced the concept of quality by design (QbD), which requires a greater understanding of the raw material attributes, of process parameters, of their variability and their interactions. Microcrystalline cellulose (MCC) is one of the most important tableting excipients thanks to its outstanding dry binding properties, enabling the manufacture of tablets by direct compression (DC). DC remains the most economical technique to produce large batches of tablets, however its efficacy is directly impacted by the raw material attributes. Therefore excipients' variability and their impact on drug product performance need to be thoroughly understood. To help with this process, this review article gathers prior knowledge on MCC, focuses on its use in DC and lists some of its potential critical material attributes (CMAs).", "sha": "af93657064273c2e0f1a5be091b9eb93405ea105", "corpus_id": 3738690 }
{ "doc_id": "216898416", "title": "On the Interplay Between Cyber and Physical Spaces for Adaptive Security", "abstract": "Ubiquitous computing is resulting in a proliferation of cyber-physical systems that host or manage valuable physical and digital assets. These assets can be harmed by malicious agents through both cyber-enabled or physically-enabled attacks, particularly ones that exploit the often ignored interplay between the cyber and physical world. The explicit representation of spatial topology is key to supporting adaptive security policies. In this paper we explore the use of Bigraphical Reactive Systems to model the topology of cyber and physical spaces and their dynamics. We utilise such models to perform speculative threat analysis through model checking to reason about the consequences of the evolution of topological configurations on the satisfaction of security requirements. We further propose an automatic planning technique to identify an adaptation strategy enacting security policies at runtime to prevent, circumvent, or mitigate possible security requirements violations. We evaluate our approach using a case study concerned with countering insider threats in a building automation system.", "sha": "5aa0b0321fac70f9ad8c35068e41f4a241cdf9fb", "corpus_id": 216898416 }
{ "doc_id": "14659753", "title": "A Systematic Survey of Self-Protecting Software Systems", "abstract": "Self-protecting software systems are a class of autonomic systems capable of detecting and mitigating security threats at runtime. They are growing in importance, as the stovepipe static methods of securing software systems have been shown to be inadequate for the challenges posed by modern software systems. Self-protection, like other self-* properties, allows the system to adapt to the changing environment through autonomic means without much human intervention, and can thereby be responsive, agile, and cost effective. While existing research has made significant progress towards autonomic and adaptive security, gaps and challenges remain. This article presents a significant extension of our preliminary study in this area. In particular, unlike our preliminary study, here we have followed a systematic literature review process, which has broadened the scope of our study and strengthened the validity of our conclusions. By proposing and applying a comprehensive taxonomy to classify and characterize the state-of-the-art research in this area, we have identified key patterns, trends and challenges in the existing approaches, which reveals a number of opportunities that will shape the focus of future research efforts.", "sha": "052254b6b576e65f07e2b4f5d765f597fbe0da3f", "corpus_id": 14659753 }
{ "doc_id": "18263349", "title": "Performance measurement for green supply chain management", "abstract": "Purpose – To introduce and provide an overview of the various issues related to environmental (green) supply chain management performance measurement. Design/methodology/approach – The work relies on experiences, case studies and other literature related to performance measurement in environmental supply chains. It seeks to integrate works in supply chain management, environmental management, and performance management into one framework. A systems framework forms the discussion outline with a focus on controls/pressures, inputs, tools, and outputs as major categories for evaluation and review. Findings – Provides an integrative framework for study, design and evaluation of green supply chain management performance tools. The findings also identify a number of issues that need to still be addressed. Research limitations/implications – We have only one design of the issues in which numerous categorizations could be provided. There is limited research in this area and new and current models/developments can provide additional insight. Implications of the work is that these gaps exist and that significantly more work needs to be completed in this field. Practical implications – A very useful source of information for practitioners that seek to implement these systems within and between organizations. Also, the paper provides numerous areas which researchers could complete additional research and develop research agendas. Originality/value – This paper provides some of the very first insights into development of a green supply chain management performance measurement system. Typically performance measurement systems are internally and business focused, we expand on these issues by considering inter-organizational and environmental issues within a business context.", "sha": "a2003dcaf9c3d75c3179460c9e2d8fe915ae26d2", "corpus_id": 18263349 }
{ "doc_id": "51752910", "title": "Robust Representation Learning for Low Resource Languages", "abstract": "Understanding the meaning of words is essential for most natural language processing tasks. Word representations are means to mathematically represent the meaning of a word in a way that computers can understand. These representations are often in the form of vectors in which words are represented in a continuous vector-space of fixed dimensionality also referred to as word embeddings. In this thesis, we focus on generating better and reliable word representations for low resource languages. Many languages, though widely spoken, are largely under-represented in this area of research. One of the main reasons for this is the lack of reliable evaluation metrics to compare between different approaches of building these embeddings. Word similarity task is a widely used, computationally efficient method to directly evaluate the quality of word vectors. It relies on finding correlation between human assigned similarities between words, and those between corresponding word vectors. We release word similarity datasets for six low resource languages – Urdu, Telugu, Marathi, Punjabi, Tamil and Gujarati. For the construction of these datasets, our approach relies on translation and re-annotation of word similarity datasets of English. We also present baseline scores for word representation models using state-of-the-art techniques for Urdu, Telugu and Marathi by evaluating them on newly created word similarity datasets. For linguistically similar languages, we show that it is possible to use the better trained word representations of the more resourceful language for the other language in the pair, using a projection learning approach which relies on a mapping between words having similar meaning from the two languages. 
This cross-lingual vector space transformation results in state of the art results on word similarity test sets of French and German an increase of 13% in case of French and 19% for German, using English as the source language. We also go on to demonstrate that this approach is better suited for linguistically similar language pairs like Hindi-Urdu (where 60% words are simply transliterations of each other) than English-German or English-French. We go on to see how we modelled prefix-suffix based morphoology using a similar technique.", "sha": "95845fa4046f75df4fc83290ea4431d4a5730f28", "corpus_id": 51752910 }
{ "doc_id": "9858133", "title": "Exploiting Morphological Regularities in Distributional Word Representations", "abstract": "We present an unsupervised, language agnostic approach for exploiting morphological regularities present in high dimensional vector spaces. We propose a novel method for generating embeddings of words from their morphological variants using morphological transformation operators. We evaluate this approach on MSR word analogy test set (Mikolov et al., 2013d) with an accuracy of 85% which is 12% higher than the previous best known system.", "sha": "0db01127c3a919df8891d85c270109894c67e3f8", "corpus_id": 9858133 }
{ "doc_id": "53796860", "title": "Learning distributed representations of concepts", "abstract": null, "sha": "4ade4934db522fe6d634ff6f48887da46eedb4d1", "corpus_id": 53796860 }
{ "doc_id": "10606141", "title": "Learning semantic relationships for better action retrieval in images", "abstract": "Human actions capture a wide variety of interactions between people and objects. As a result, the set of possible actions is extremely large and it is difficult to obtain sufficient training examples for all actions. However, we could compensate for this sparsity in supervision by leveraging the rich semantic relationship between different actions. A single action is often composed of other smaller actions and is exclusive of certain others. We need a method which can reason about such relationships and extrapolate unobserved actions from known actions. Hence, we propose a novel neural network framework which jointly extracts the relationship between actions and uses them for training better action retrieval models. Our model incorporates linguistic, visual and logical consistency based cues to effectively identify these relationships. We train and test our model on a largescale image dataset of human actions. We show a significant improvement in mean AP compared to different baseline methods including the HEX-graph approach from Deng et al. [8].", "sha": "343381065bbad97662e393293320d0556badba5d", "corpus_id": 10606141 }
{ "doc_id": "372467", "title": "Large Scale Distributed Deep Networks", "abstract": "Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestlysized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm.", "sha": "0122e063ca5f0f9fb9d144d44d41421503252010", "corpus_id": 372467 }
{ "doc_id": "9053969", "title": "Performance evaluation and model checking join forces", "abstract": "A call for the perfect marriage between classical performance evaluation and state-of-the-art verification techniques.", "sha": "5df4c94a9b191a5041dd0efd5bfb07ff263d530a", "corpus_id": 9053969 }
{ "doc_id": "17045780", "title": "A Robust and Efficient Approach to License Plate Detection", "abstract": "This paper presents a robust and efficient method for license plate detection with the purpose of accurately localizing vehicle license plates from complex scenes in real time. A simple yet effective image downscaling method is first proposed to substantially accelerate license plate localization without sacrificing detection performance compared with that achieved using the original image. Furthermore, a novel line density filter approach is proposed to extract candidate regions, thereby significantly reducing the area to be analyzed for license plate localization. Moreover, a cascaded license plate classifier based on linear support vector machines using color saliency features is introduced to identify the true license plate from among the candidate regions. For performance evaluation, a data set consisting of 3977 images captured from diverse scenes under different conditions is also presented. Extensive experiments on the widely used Caltech license plate data set and our newly introduced data set demonstrate that the proposed approach substantially outperforms state-of-the-art methods in terms of both detection accuracy and run-time efficiency, increasing the detection ratio from 91.09% to 96.62% while decreasing the run time from 672 to 42 ms for processing an image with a resolution of $1082\\times 728$ . The executable code and our collected data set are publicly available.", "sha": "3988360c7c61688c9f1753caae1d32db2666ae82", "corpus_id": 17045780 }
{ "doc_id": "3116168", "title": "LIBLINEAR: A Library for Large Linear Classification", "abstract": "LIBLINEAR is an open source library for large-scale linear classification. It supports logistic regression and linear support vector machines. We provide easy-to-use command-line tools and library calls for users and developers. Comprehensive documents are available for both beginners and advanced users. Experiments demonstrate that LIBLINEAR is very efficient on large sparse data sets.", "sha": "50e983fd06143cad9d4ac75bffc2ef67024584f2", "corpus_id": 3116168 }
{ "doc_id": "15947010", "title": "Exploring Crowd Consistency in a Mechanical Turk Survey", "abstract": "Crowdsourcing can provide a platform for evaluating software engineering research. In this paper, we aim to explore characteristics of the worker population on Amazon's Mechanical Turk, a popular microtask crowdsourcing environment, and measure the percentage of workers who are potentially qualified to perform software- or computer science-related tasks. Through a baseline survey and two replications, we measure workers' answer consistency as well as the consistency of sample characteristics. In the end, we deployed 1,200 total surveys that were completed by 1,064 unique workers. Our results show that 24% of the study participants have a computer science or IT background and most people are payment driven when choosing tasks. The sample characteristics can vary significantly, even on large samples with 300 participants. Additionally, we often observed inconsistency in workers' answers for those who completed two surveys; approximately 30% answered at least one question inconsistently between the two survey submissions. This implies a need for replication and quality controls in crowdsourced experiments.", "sha": "7aa74e04e1d5f57275f5201b59bf40f5268e6470", "corpus_id": 15947010 }
{ "doc_id": "27103450", "title": "Mapping Agile Practices to CMMI-DEV Level 3 in Web Development Environments", "abstract": "Agile approaches formally appeared ten years ago and nowadays are a valid alternative for organizations developing software. Agile methodologies are especially interesting to those developing Web environments applications, as they can fit properly the special characteristics of this type of developments. In addition, maturity models like CMMI-DEV (Capability Maturity Model Integration for Development) focus on assessing the maturity level of organizations developing or acquiring software. These models are well established and can increment quality of development processes to enhance costumers’ satisfaction. CMMI-DEV level 3 provides a good compromise on maturity gained and effort needed. The feasibility of getting it through a combination of Agile methods can be very useful to organizations developing systems in Web environments, as they can keep the adaptability of Agile together with a more mature development process. This paper proposes a set of Agile methods so as to reach all CMMI-DEV maturity level 3 generic and specific goals. Based on this analysis, the paper proposes further research lines.", "sha": "be3bd869edd8d53c2730e9877e84dd919012a77d", "corpus_id": 27103450 }
{ "doc_id": "7467659", "title": "The impact of agile practices on communication in software development", "abstract": "Agile software development practices such as eXtreme Programming (XP) and SCRUM have increasingly been adopted to respond to the challenges of volatile business environments, where the markets and technologies evolve rapidly and present the unexpected. In spite of the encouraging results so far, little is known about how agile practices affect communication. This article presents the results from a study which examined the impact of XP and SCRUM practices on communication within software development teams and within the focal organization. The research was carried out as a case study in F-Secure where two agile software development projects were compared from the communication perspective. The goal of the study is to increase the understanding of communication in the context of agile software development: internally among the developers and project leaders and in the interface between the development team and stakeholders (i.e. customers, testers, other development teams). The study shows that agile practices improve both informal and formal communication. However, it further indicates that, in larger development situations involving multiple external stakeholders, a mismatch of adequate communication mechanisms can sometimes even hinder the communication. The study highlights the fact that hurdles and improvements in the communication process can both affect the feature requirements and task-subtask dependencies as described in coordination theory. While the use of SCRUM and some XP practices facilitates team and organizational communication of the dependencies between product features and working tasks, the use of agile practices requires that the team and organization also use additional plan-driven practices to ensure the efficiency of external communication between all the actors of software development.", "sha": "d7b9dde9a7d304b378079049a0c2af40454a13bb", "corpus_id": 7467659 }
{ "doc_id": "34765190", "title": "Factors that relate to good and poor handwriting.", "abstract": "OBJECTIVE\nThis study investigated the relationships between specific performance components, eye-hand coordination, visuomotor integration, in-hand manipulation, and handwriting skill.\n\n\nMETHOD\nA sample of 48 typical first grade students were identified as good and poor handwriters by their teachers. Each child completed the Motor Accuracy Test; the Developmental Test of Visual-Motor Integration (VMI); two tests of in-hand manipulation, including a rotation and a translation task; and the Minnesota Handwriting Test (MHT).\n\n\nRESULTS\nAll test scores for the subjects with good handwriting were significantly higher than those of the subjects with poor handwriting. Each performance component test was significantly correlated to MHT scores. Translation, VMI, and rotation scores were significant predictors of MHT scores, accounting for almost 73% of variance. A discriminant analysis using the performance components correctly classified 98% of the students as good or poor handwriters.\n\n\nCONCLUSION\nIn-hand manipulation has significant association to handwriting skill.", "sha": "a37aeb94b56be9943198cb57021251cc0668ef3d", "corpus_id": 34765190 }
{ "doc_id": "14777847", "title": "A zero density change phase change memory material: GeTe-O structural characteristics upon crystallisation", "abstract": "Oxygen-doped germanium telluride phase change materials are proposed for high temperature applications. Up to 8 at.% oxygen is readily incorporated into GeTe, causing an increased crystallisation temperature and activation energy. The rhombohedral structure of the GeTe crystal is preserved in the oxygen-doped films. For higher oxygen concentrations the material is found to phase separate into GeO2 and TeO2, which inhibits the technologically useful abrupt change in properties. Increasing the oxygen content in GeTe-O reduces the difference in film thickness and mass density between the amorphous and crystalline states. For oxygen concentrations between 5 and 6 at.%, the amorphous material and the crystalline material have the same density. Above 6 at.% O doping, crystallisation exhibits an anomalous density change, where the volume of the crystalline state is larger than that of the amorphous. Thanks to its high thermal stability and zero-density-change characteristic, oxygen-incorporated GeTe is recommended for efficient and low-stress phase change memory devices that may operate at elevated temperatures.", "sha": "3840690455481b3ecd0c6fba278d02768e54e223", "corpus_id": 14777847 }
{ "doc_id": "3105143", "title": "Phase-change random access memory: A scalable technology", "abstract": "Nonvolatile RAM using resistance contrast in phase-change materials [or phase-change RAM (PCRAM)] is a promising technology for future storage-class memory. However, such a technology can succeed only if it can scale smaller in size, given the increasingly tiny memory cells that are projected for future technology nodes (i.e., generations). We first discuss the critical aspects that may affect the scaling of PCRAM, including materials properties, power consumption during programming and read operations, thermal cross-talk between memory cells, and failure mechanisms. We then discuss experiments that directly address the scaling properties of the phase-change materials themselves, including studies of phase transitions in both nanoparticles and ultrathin films as a function of particle size and film thickness. This work in materials directly motivated the successful creation of a series of prototype PCRAM devices, which have been fabricated and tested at phase-change material cross-sections with extremely small dimensions as low as 3 nm · 20 nm. These device measurements provide a clear demonstration of the excellent scaling potential offered by this technology, and they are also consistent with the scaling behavior predicted by extensive device simulations. Finally, we discuss issues of device integration and cell design, manufacturability, and reliability.", "sha": "fae8a785260ac5c34be82fca92a4abef4c30d655", "corpus_id": 3105143 }
{ "doc_id": "13962235", "title": "Image Popularity Prediction in Social Media Using Sentiment and Context Features", "abstract": "Images in social networks share different destinies: some are going to become popular while others are going to be completely unnoticed. In this paper we propose to use visual sentiment features together with three novel context features to predict a concise popularity score of social images. Experiments on large scale datasets show the benefits of proposed features on the performance of image popularity prediction. Exploiting state-of-the-art sentiment features, we report a qualitative analysis of which sentiments seem to be related to good or poor popularity. To the best of our knowledge, this is the first work understanding specific visual sentiments that positively or negatively influence the eventual popularity of images.", "sha": "5ebc4007f2c83ff9ff70f986b2df58f9dcbafaa5", "corpus_id": 13962235 }
{ "doc_id": "13344138", "title": "Systematic Derivation of Behaviour Characterisations in Evolutionary Robotics", "abstract": "Evolutionary techniques driven by behavioural diversity, such as novelty search, have shown significant potential in evolutionary robotics. These techniques rely on priorly specified behaviour characterisations to estimate the similarity between individuals. Characterisations are typically defined in an ad hoc manner based on the experimenter’s intuition and knowledge about the task. Alternatively, generic characterisations based on the sensor-effector values of the agents are used. In this paper, we propose a novel approach that allows for systematic derivation of behaviour characterisations for evolutionary robotics, based on a formal description of the agents and their environment. Systematically derived behaviour characterisations (SDBCs) go beyond generic characterisations in that they can contain task-specific features related to the internal state of the agents, environmental features, and relations between them. We evaluate SDBCs with novelty search in three simulated collective robotics tasks. Our results show that SDBCs yield a performance comparable to the task-specific characterisations, in terms of both solution quality and behaviour space exploration.", "sha": "df207ae851616a3d2cf9126cf3f7327353136872", "corpus_id": 13344138 }
{ "doc_id": "15416390", "title": "Fitness functions in evolutionary robotics: A survey and analysis", "abstract": "This paper surveys fitness functions used in the field of evolutionary robotics (ER). Evolutionary robotics is a field of research that applies artificial evolution to generate control systems for autonomous robots. During evolution, robots attempt to perform a given task in a given environment. The controllers in the better performing robots are selected, altered and propagated to perform the task again in an iterative process that mimics some aspects of natural evolution. A key component of this process – one might argue, the key component – is the measurement of fitness in the evolving controllers. ER is one of a host of machine learning methods that rely on interaction with, and feedback from, a complex dynamic environment to drive synthesis of controllers for autonomous agents. These methods have the potential to lead to the development of robots that can adapt to uncharacterized environments and which may be able to perform tasks that human designers do not completely understand. In order to achieve this, issues regarding fitness evaluation must be addressed. In this paper we survey current ER research and focus on work that involved real robots. The surveyed research is organized according to the degree of a priori knowledge used to formulate the various fitness functions employed during evolution. The underlying motivation for this is to identify methods that allow the development of the greatest degree of novel control, while requiring the minimum amount of a priori task knowledge from the designer.", "sha": "454b8a86551375c4002fe2d2d9bc89a40da33a9c", "corpus_id": 15416390 }
{ "doc_id": "2451234", "title": "Development of a lightweight, underactuated exoskeleton for load-carrying augmentation", "abstract": "Metabolic studies have shown that there is a metabolic cost associated with carrying load. Several leg exoskeletons have been developed by various groups in an attempt to augment the load carrying capability of the human. Previous research efforts have not fully exploited the passive dynamics of walking and have largely focused on fully actuated exoskeletons that are heavy with large energy requirements. In this paper, a lightweight, underactuated exoskeleton design is presented that runs in parallel to the human and supports the weight of a payload. Two exoskeleton architectures are pursued based on examining human walking data. A first architecture consists of springs at the hip, a variable impedance device at the knee, and springs at the ankle. A second architecture replaces the springs at the hip with a non-conservative actuator to examine the effect of adding power at desired instances throughout the gait cycle. Preliminary studies show that an efficient, underactuated leg exoskeleton can effectively transmit payload forces to the ground during the walking cycle", "sha": "791772e2a4c5ac4000561c0b9cada38f7ff738e5", "corpus_id": 2451234 }
{ "doc_id": "49559663", "title": "Amanuensis: The Programmer's Apprentice", "abstract": "This document provides an overview of the material covered in a course taught at Stanford in the spring quarter of 2018. The course draws upon insight from cognitive and systems neuroscience to implement hybrid connectionist and symbolic reasoning systems that leverage and extend the state of the art in machine learning by integrating human and machine intelligence. As a concrete example we focus on digital assistants that learn from continuous dialog with an expert software engineer while providing initial value as powerful analytical, computational and mathematical savants. Over time these savants learn cognitive strategies (domain-relevant problem solving skills) and develop intuitions (heuristics and the experience necessary for applying them) by learning from their expert associates. By doing so these savants elevate their innate analytical skills allowing them to partner on an equal footing as versatile collaborators — effectively serving as cognitive extensions and digital prostheses, thereby amplifying and emulating their human partner’s conceptually-flexible thinking patterns and enabling improved access to and control over powerful computing resources. Affiliations: 1 Google Research, 2 Stanford University. arXiv:1807.00082v2 [q-bio.NC] 8 Nov 2018", "sha": "7750036de706d790ca70de1a2ae40327b6394264", "corpus_id": 49559663 }
{ "doc_id": "8912485", "title": "Making Working Memory Work: A Computational Model of Learning in the Prefrontal Cortex and Basal Ganglia", "abstract": "The prefrontal cortex has long been thought to subserve both working memory (the holding of information online for processing) and executive functions (deciding how to manipulate working memory and perform processing). Although many computational models of working memory have been developed, the mechanistic basis of executive function remains elusive, often amounting to a homunculus. This article presents an attempt to deconstruct this homunculus through powerful learning mechanisms that allow a computational model of the prefrontal cortex to control both itself and other brain areas in a strategic, task-appropriate manner. These learning mechanisms are based on subcortical structures in the midbrain, basal ganglia, and amygdala, which together form an actor-critic architecture. The critic system learns which prefrontal representations are task relevant and trains the actor, which in turn provides a dynamic gating mechanism for controlling working memory updating. Computationally, the learning mechanism is designed to simultaneously solve the temporal and structural credit assignment problems. The model's performance compares favorably with standard backpropagation-based temporal learning mechanisms on the challenging 1-2-AX working memory task and other benchmark working memory tasks.", "sha": "01cbcef8f0608701d5975be282038a2ff61de401", "corpus_id": 8912485 }
{ "doc_id": "14017242", "title": "An Adaptive Differential Evolution Algorithm With Novel Mutation and Crossover Strategies for Global Numerical Optimization", "abstract": "Differential evolution (DE) is one of the most powerful stochastic real parameter optimizers of current interest. In this paper, we propose a new mutation strategy, a fitness-induced parent selection scheme for the binomial crossover of DE, and a simple but effective scheme of adapting two of its most important control parameters with an objective of achieving improved performance. The new mutation operator, which we call DE/current-to-gr_best/1, is a variant of the classical DE/current-to-best/1 scheme. It uses the best of a group (whose size is q% of the population size) of randomly selected solutions from the current generation to perturb the parent (target) vector, unlike DE/current-to-best/1 that always picks the best vector of the entire population to perturb the target vector. In our modified framework of recombination, a biased parent selection scheme has been incorporated by letting each mutant undergo the usual binomial crossover with one of the p top-ranked individuals from the current population and not with the target vector with the same index as used in all variants of DE. A DE variant obtained by integrating the proposed mutation, crossover, and parameter adaptation strategies with the classical DE framework (developed in 1995) is compared with two classical and four state-of-the-art adaptive DE variants over 25 standard numerical benchmarks taken from the IEEE Congress on Evolutionary Computation 2005 competition and special session on real parameter optimization. Our comparative study indicates that the proposed schemes improve the performance of DE by a large magnitude such that it becomes capable of enjoying statistical superiority over the state-of-the-art DE variants for a wide variety of test problems. Finally, we experimentally demonstrate that, if one or more of our proposed strategies are integrated with existing powerful DE variants such as jDE and JADE, their performances can also be enhanced.", "sha": "067624898c070c2ed7304751610b03afbe88139b", "corpus_id": 14017242 }
{ "doc_id": "11002479", "title": "Design of a Q-learning-based client quality selection algorithm for HTTP adaptive video streaming", "abstract": "Over the past decades, the importance of multimedia services such as video streaming has increased considerably. HTTP Adaptive Streaming (HAS) is becoming the de-facto standard for adaptive video streaming services. In HAS, a video is split into multiple segments and encoded at multiple quality levels. State-of-the-art HAS clients employ deterministic heuristics to dynamically adapt the requested quality level based on the perceived network and device conditions. Current HAS client heuristics are however hardwired to fit specific network configurations, making them less flexible to fit a vast range of settings. In this article, an adaptive Q-Learning-based HAS client is proposed. In contrast to existing heuristics, the proposed HAS client dynamically learns the optimal behavior corresponding to the current network environment. Considering multiple aspects of video quality, a tunable reward function has been constructed, giving the opportunity to focus on different aspects of the Quality of Experience, the quality as perceived by the end-user. The proposed HAS client has been thoroughly evaluated using a network-based simulator, investigating multiple reward configurations and Reinforcement Learning specific settings. The evaluations show that the proposed client can outperform standard HAS in the evaluated networking environments.", "sha": "f5be41107054cc6bf185939f63f624959a08112e", "corpus_id": 11002479 }
{ "doc_id": "17012697", "title": "EvalVid - A Framework for Video Transmission and Quality Evaluation", "abstract": "With EvalVid we present a complete framework and tool-set for evaluation of the quality of video transmitted over a real or simulated communication network. Besides measuring QoS parameters of the underlying network, like loss rates, delays, and jitter, we also support a subjective video quality evaluation of the received video based on the frame-by-frame PSNR calculation. The tool-set has a modular construction, making it possible to exchange both the network and the codec. We present here its application for MPEG-4 as an example. EvalVid is targeted at researchers who want to evaluate their network designs or setups in terms of user-perceived video quality. The tool-set is publicly available [11].", "sha": "94d03985252d5a497feb407f1366fc29b34cb0fc", "corpus_id": 17012697 }
{ "doc_id": "16174710", "title": "Test-cost-sensitive attribute reduction", "abstract": "In many data mining and machine learning applications, there are two objectives in the task of classification; one is decreasing the test cost, the other is improving the classification accuracy. Most existing research work focuses on the latter, with attribute reduction serving as an optional pre-processing stage to remove redundant attributes. In this paper, we point out that when tests must be undertaken in parallel, attribute reduction is mandatory in dealing with the former objective. With this in mind, we posit the minimal test cost reduct problem which constitutes a new, but more general, difficulty than the classical reduct problem. We also define three metrics to evaluate the performance of reduction algorithms from a statistical viewpoint. A framework for a heuristic algorithm is proposed to deal with the new problem; specifically, an information gain-based λ-weighted reduction algorithm is designed, where weights are decided by test costs and a non-positive exponent λ, which is the only parameter set by the user. The algorithm is tested with three representative test cost distributions on four UCI (University of California Irvine) datasets. Experimental results show that there is a trade-off while setting λ, and a competition approach can improve the quality of the result significantly. This study suggests potential application areas and new research trends concerning attribute reduction.", "sha": "2ecc2ba6d85512d43e6308167c910327edb29d52", "corpus_id": 16174710 }
{ "doc_id": "12936739", "title": "Online Video Object Detection Using Association LSTM", "abstract": "Video object detection is a fundamental tool for many applications. Since direct application of image-based object detection cannot leverage the rich temporal information inherent in video data, we advocate detecting long-range video object patterns. While the Long Short-Term Memory (LSTM) has been the de facto choice for such detection, currently the LSTM cannot fundamentally model object association between consecutive frames. In this paper, we propose the association LSTM to address this fundamental association problem. Association LSTM not only regresses and classifies directly on object locations and categories but also associates features to represent each output object. By minimizing the matching error between these features, we learn how to associate objects in two consecutive frames. Additionally, our method works in an online manner, which is important for most video tasks. Compared to the traditional video object detection methods, our approach outperforms them on standard video datasets.", "sha": "a8123a4e68642b602b5094f2f670ed7aefdd2f58", "corpus_id": 12936739 }
{ "doc_id": "1188600", "title": "Siamese Instance Search for Tracking", "abstract": "In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot.", "sha": "413bd84c4dbc3d5d1481d58a03e5890e1474935e", "corpus_id": 1188600 }
{ "doc_id": "18241720", "title": "Automatic Detection of Accretion of Glaucoma in Eye", "abstract": "Glaucoma arises due to the inadequate fluid flow from the drainage canals of the eye, leading to the crystallization of the fluid in the cornea and iris regions. We propose an algorithm to automatically compute this accretion from the ultrasound images of the eye. The existing techniques of glaucoma detection involve manual/computer aided strategies which confronts major challenges in the form of speed of processing and precision angle detection. Apart from improving the contrast of the low resolution ultrasound image, the algorithm aims to determine the exact location of the apex point of the anterior chamber region for efficient angle calculation. We realize that it is highly imperative to detect glaucoma in its early stages for diagnosis and hence the algorithm also addresses the importance of precise results with effective immunity towards speckle noise. The algorithm was tested with many samples of ultrasound images from diseased and normal eyes of a number of patients, which provided promising results in most cases.", "sha": "11feeb27d801e495802252052c0aab834396ecae", "corpus_id": 18241720 }
{ "doc_id": "51611504", "title": "Beyond Polarity: Interpretable Financial Sentiment Analysis with Hierarchical Query-driven Attention", "abstract": "Sentiment analysis has played a significant role in financial applications in recent years. The informational and emotive aspects of news texts may affect the prices, volatilities, volume of trades, and even potential risks of financial subjects. Previous studies in this field mainly focused on identifying polarity (e.g. positive or negative). However, as financial decisions broadly require justifications, only plausible polarity cannot provide enough evidence during the decision making processes of humanity. Hence an explainable solution is in urgent demand. In this paper, we present an interpretable neural net framework for financial sentiment analysis. First, we design a hierarchical model to learn the representation of a document from multiple granularities. In addition, we propose a query-driven attention mechanism to satisfy the unique characteristics of financial documents. With the domain specified questions provided by the financial analysts, we can discover different spotlights for queries from different aspects. We conduct extensive experiments on a real-world dataset. The results demonstrate that our framework can learn better representation of the document and unearth meaningful clues on replying different users’ preferences. It also outperforms the state-of-the-art methods on sentiment prediction of financial documents.", "sha": "62bc8f889c05e492b4672f2bc8f90727af3d0ac3", "corpus_id": 51611504 }
{ "doc_id": "7017535", "title": "Deep Learning for Event-Driven Stock Prediction", "abstract": "Neural Tensor Network for Learning Event Embeddings Event Representation E = (O1, P, O2, T), where P is the action, O1 is the actor, O2 is the object and T is the timestamp (T is mainly used for aligning stock data with news data). For example, the event “Sep 3, 2013 Microsoft agrees to buy Nokia’s mobile phone business for $7.2 billion.” is modeled as: (Actor = Microsoft, Action = buy, Object = Nokia’s mobile phone business, Time = Sep 3, 2013) Event Embedding", "sha": "4938e8c8c9ea3d351d283181819af5e5801efbed", "corpus_id": 7017535 }
{ "doc_id": "585279", "title": "Extracting Places and Activities from GPS Traces Using Hierarchical Conditional Random Fields", "abstract": "Learning patterns of human behavior from sensor data is extremely important for high-level activity inference. This paper describes how to extract a person’s activities and significant places from traces of GPS data. The system uses hierarchically structured conditional random fields to generate a consistent model of a person’s activities and places. In contrast to existing techniques, this approach takes the high-level context into account in order to detect the significant places of a person. Experiments show significant improvements over existing techniques. Furthermore, they indicate that the proposed system is able to robustly estimate a person’s activities using a model that is trained from data collected by other persons. KEY WORDS—activity recognition, conditional random fields, belief propagation, maximum pseudo-likelihood", "sha": "0b3c22e0aba108d47f11e8a5bc35f228f2849d95", "corpus_id": 585279 }
{ "doc_id": "3562704", "title": "Learning by Playing - Solving Sparse Reward Tasks from Scratch", "abstract": "We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in the context of Reinforcement Learning (RL). SAC-X enables learning of complex behaviors – from scratch – in the presence of multiple sparse reward signals. To this end, the agent is equipped with a set of general auxiliary tasks, that it attempts to learn simultaneously via off-policy RL. The key idea behind our method is that active (learned) scheduling and execution of auxiliary policies allows the agent to efficiently explore its environment – enabling it to excel at sparse reward RL. Our experiments in several challenging robotic manipulation settings demonstrate the power of our approach. A video of the rich set of learned behaviours can be found at https://youtu.be/mPKyvocNeM.", "sha": "bc110c5f4d5e745956da802f1af77f4ac1b86a68", "corpus_id": 3562704 }
{ "doc_id": "997870", "title": "Divide-and-Conquer Reinforcement Learning", "abstract": "Standard model-free deep reinforcement learning (RL) algorithms sample a new initial state for each trial, allowing them to optimize policies that can perform well even in highly stochastic environments. However, problems that exhibit considerable initial state variation typically produce high-variance gradient estimates for model-free RL, making direct policy or value function optimization challenging. In this paper, we develop a novel algorithm that instead partitions the initial state space into “slices”, and optimizes an ensemble of policies, each on a different slice. The ensemble is gradually unified into a single policy that can succeed on the whole state space. This approach, which we term divide-and-conquer RL, is able to solve complex tasks where conventional deep RL methods are ineffective. Our results show that divide-and-conquer RL greatly outperforms conventional policy gradient methods on challenging grasping, manipulation, and locomotion tasks, and exceeds the performance of a variety of prior methods. Videos of policies learned by our algorithm can be viewed at https://sites.google.com/view/dnc-rl/.", "sha": "8bd9e4d58fe9e94912d3409d8f12974b68c65bc2", "corpus_id": 997870 }
{ "doc_id": "12455531", "title": "Hough Networks for Head Pose Estimation and Facial Feature Localization", "abstract": "Head pose estimation and facial feature localization are keys to advanced human computer interaction systems and human behavior analysis. Due to their relevance, both tasks have gained a lot of attention in the computer vision community. Recent state-of-the-art methods like [1, 2, 3, 6] report impressive results and are real-time capable. However, those approaches rely on hand-crafted features. In contrast, we try to learn a feature representation from a set of training images. This is done by utilizing Convolutional Neural Networks (CNNs), which have shown to achieve outstanding results on various tasks such as image classification [5]. Instead of segmenting the head in a first step and then regressing the task-dependent parameters, we show in our paper a patch-based approach. Patches are densely extracted from the image along a regular grid and for each patch we perform a joint classification and regression. The classification segments the image patches into foreground and background, whereas the regression casts votes in a Hough space, but only for foreground patches. This is similar to the idea of Hough Forests (HFs) [4]. However, we replace the Random Forest (RF) with a CNN and call it therefore Hough Network (HN). Assuming that we have a training dataset {(xs, ts)}s=1 with S samples, where xs denotes an image patch, and ts encodes the foregroundbackground information as well as the regression targets, we want to train a CNN that minimizes the following error function", "sha": "18dfc2434a95f149a6cbb583cca69a98c9de9887", "corpus_id": 12455531 }
{ "doc_id": "13262103", "title": "Efficient PSD Constrained Asymmetric Metric Learning for Person Re-Identification", "abstract": "Person re-identification is becoming a hot research topic due to its value in both machine learning research and video surveillance applications. For this challenging problem, distance metric learning is shown to be effective in matching person images. However, existing approaches either require a heavy computation due to the positive semidefinite (PSD) constraint, or ignore the PSD constraint and learn a free distance function that makes the learned metric potentially noisy. We argue that the PSD constraint provides a useful regularization to smooth the solution of the metric, and hence the learned metric is more robust than without the PSD constraint. Another problem with metric learning algorithms is that the number of positive sample pairs is very limited, and the learning process is largely dominated by the large amount of negative sample pairs. To address the above issues, we derive a logistic metric learning approach with the PSD constraint and an asymmetric sample weighting strategy. Besides, we successfully apply the accelerated proximal gradient approach to find a global minimum solution of the proposed formulation, with a convergence rate of O(1/t^2) where t is the number of iterations. The proposed algorithm termed MLAPG is shown to be computationally efficient and able to perform low rank selection. We applied the proposed method for person re-identification, achieving state-of-the-art performance on four challenging databases (VIPeR, QMUL GRID, CUHK Campus, and CUHK03), compared to existing metric learning methods as well as published results.", "sha": "3147bb14bf4228735ecf4bc2a421590b3de86c0f", "corpus_id": 13262103 }
{ "doc_id": "15225312", "title": "Evaluating Appearance Models for Recognition, Reacquisition, and Tracking", "abstract": "Traditionally, appearance models for recognition, reacquisition and tracking problems have been evaluated independently using metrics applied to a complete system. It is shown that appearance models for these three problems can be evaluated using a cumulative matching curve on a standardized dataset, and that this one curve can be converted to a synthetic reacquisition or disambiguation rate for tracking. A challenging new dataset for viewpoint invariant pedestrian recognition (VIPeR) is provided as an example. This dataset contains 632 pedestrian image pairs from arbitrary viewpoints. Several baseline methods are tested on this dataset and the results are presented as a benchmark for future appearance models and matching methods.", "sha": "6273b3491e94ea4dd1ce42b791d77bdc96ee73a8", "corpus_id": 15225312 }
{ "doc_id": "48157786", "title": "Multivariable Functional Interpolation and Adaptive Networks", "abstract": "The relationship between \"learning\" in adaptive layered networks and the fitting of data with high dimensional surfaces is discussed. This leads naturally to a picture of \"generalization\" in terms of interpolation between known data points and suggests a rational approach to the theory of such networks. A class of adaptive networks is identified which makes the interpolation scheme explicit. This class has the property that learning is equivalent to the solution of a set of linear equations. These networks thus represent nonlinear relationships while having a guaranteed learning rule.", "sha": "bd6547eda7da56eaeecda6d3adbb6a7046e92452", "corpus_id": 48157786 }
{ "doc_id": "31337844", "title": "The Effect of On-Line Consumer Reviews on Consumer Purchasing Intention: The Moderating Role of Involvement", "abstract": "On-line consumer reviews, functioning both as informants and as recommenders, are important in making purchase decisions and for product sales. Their persuasive impact depends on both their quality and their quantity. This paper uses the elaboration likelihood model to explain how level of involvement with a product moderates these relationships. The study produces three major findings: (1) the quality of on-line reviews has a positive effect on consumers’ purchasing intention, (2) purchasing intention increases as the number of reviews increases, and (3) low-involvement consumers are affected by the quantity rather than the quality of reviews, but high-involvement consumers are affected by review quantity mainly when the review quality is high. These findings have implications for on-line sellers in terms of how to manage on-line consumer reviews.", "sha": "2b4b8ef0ce32436fb7ee6599b65ead1512a927e2", "corpus_id": 31337844 }
{ "doc_id": "13290146", "title": "Online Consumer Review: Word-of-Mouth as a New Element of Marketing Communication Mix", "abstract": "As a new type of word-of-mouth information, online consumer product review is an emerging market phenomenon that is playing an increasingly important role in consumers’ purchase decisions. This paper argues that online consumer review, a type of product information created by users based on personal usage experience, can serve as a new element in the marketing communications mix and work as free “sales assistants” to help consumers identify the products that best match their idiosyncratic usage conditions. This paper develops a normative model to address several important strategic issues related to consumer reviews. First, we show when and how the seller should adjust its own marketing communication strategy in response to consumer reviews. Our results reveal that if the review information is sufficiently informative, the two types of product information, i.e., the seller-created product attribute information and buyer-created review information, will interact with each other. For example, when the product cost is low and/or there are sufficient expert (more sophisticated) product users, the two types of information are complements, and the seller’s best response is to increase the amount of product attribute information conveyed via its marketing communications after the reviews become available. However, when the product cost is high and there are sufficient novice (less sophisticated) product users, the two types of information are substitutes, and the seller’s best response is to reduce the amount of product attribute information it offers, even if it is cost-free to provide such information. We also derive precise conditions under which the seller can increase its profit by adopting a proactive strategy, i.e., adjusting its marketing strategies even before consumer reviews become available. Second, we identify product/market conditions under which the seller benefits from facilitating such buyer-created information (e.g., by allowing consumers to post user-based product reviews on the seller’s website). Finally, we illustrate the importance of the timing of the introduction of consumer reviews as a strategic variable and show that delaying the availability of consumer reviews for a given product can be beneficial if the number of expert (more sophisticated) product users is relatively large and the cost of the product is low.", "sha": "c23c263bedb8662ebc404f4e4a6b8c86a1acdef5", "corpus_id": 13290146 }
{ "doc_id": "19018429", "title": "Information-seeking strategies of novices using a full-text electronic encyclopedia", "abstract": "An exploratory study was conducted of elementary school children searching a full-text electronic encyclopedia on CD-ROM. Twenty-eight third and fourth graders and 24 sixth graders conducted two assigned searches, one open-ended, the other one closed, after two demonstration sessions. Keystrokes captured by the computer and observer notes were used to examine user information-seeking strategies from a mental model perspective. Older searchers were more successful in finding required information, and took less time than younger searchers. No differences in total number of moves were found. Analysis of search patterns showed that novices used a heuristic, highly interactive search strategy. Searchers used sentence and phrase queries, indicating unique mental models for this search system. Most searchers accepted system defaults and used the AND connective in formulating queries. Transition matrix analyses showed that younger searchers generally favored query refining moves and older searchers favored examining title and text moves. Suggestions for system designers were made and future research questions were identified.", "sha": "c619d3e2b198bb0889f914af3c3b616570a54221", "corpus_id": 19018429 }
{ "doc_id": "32286473", "title": "A Novel Voltage-Feedback Pixel Circuit for AMOLED Displays", "abstract": "This study presents a novel voltage-modulated pixel circuit for active-matrix organic light-emitting diode (AMOLED) consisting of five n-type thin-film transistors (TFTs), one additional control signal, and one storage capacitor. The proposed circuit, which can be implemented in all-n-type and all-p-type low temperature poly-silicon (LTPS) TFT technologies, successfully compensates for threshold voltage deviation of TFTs and facilitates correction of OLED degradation using a voltage feedback method. Simulation and experimental results for all-n-type TFTs indicate that the proposed pixel circuit reduced the nonuniformity brightness problem effectively by compensating for threshold voltage variation in TFTs and reduced the degradation of emission efficiency in OLEDs.", "sha": "10cfb105e300e64208ca53aa3e221ef6d82e6c0d", "corpus_id": 32286473 }
{ "doc_id": "8902139", "title": "A new pixel circuit for active matrix organic light emitting diodes", "abstract": "We propose a new thin-film-transistor (TFT) pixel circuit for active-matrix organic light-emitting diode (AMOLED) composed of four TFTs and two capacitors. The simulation results, based on the device performances measured for an OLED and a poly-Si TFT, indicate that the proposed circuit has high immunity to the variation of poly-Si TFT characteristics.", "sha": "e717b4aa145d5c953720b33ce1d62b9986dc60f2", "corpus_id": 8902139 }
{ "doc_id": "152073", "title": "Investigating the Role of Leadership and Organizational Culture in Fostering Innovation Ambidexterity", "abstract": "It has been argued that strategic leadership plays a crucial role in mediating between forces for exploration such as innovation and change, and inertial forces for exploitation of the status quo. In mediating these contradictory forces, strategic leaders need to make decisions and take actions that enable and encourage the firm to balance exploration as well as exploitation. The purpose of this study is to investigate how strategic leaders create an organization culture within which the contradictory forces for exploration and exploitation vie. Although it has been argued that strategic leadership plays a crucial role in fostering exploration and exploitation, this is the first study to empirically investigate the multiple roles that leaders need to play in creating a culture that in turn, facilitates exploration and exploitation activities in the form of incremental and radical product and process innovation. By doing so, our study contributes to our understanding of ambidexterity by identifying the multiple roles that leaders play, as well as the role organization culture plays in mediating the leadership and ambidexterity relationship. Using empirical data derived from a research study of 125 firms in Taiwan, we applied multiple regression analyses-the Sobel and Bootstrapping approach to test our hypotheses that: 1) strategic leadership directly impacts on a knowledge-sharing culture, 2) a knowledge-sharing culture directly impacts on innovation ambidexterity, and 3) organizational culture mediates the strategic leadership and innovation ambidexterity relationship. All three of our hypotheses were supported.", "sha": "85aa7f90f7008dd1898b204a1abf0397e7e56f60", "corpus_id": 152073 }
{ "doc_id": "3029264", "title": "No bad local minima: Data independent training error guarantees for multilayer neural networks", "abstract": "We use smoothed analysis techniques to provide guarantees on the training loss of Multilayer Neural Networks (MNNs) at differentiable local minima. Specifically, we examine MNNs with piecewise linear activation functions, quadratic loss and a single output, under mild over-parametrization. We prove that for a MNN with one hidden layer, the training error is zero at every differentiable local minimum, for almost every dataset and dropout-like noise realization. We then extend these results to the case of more than one hidden layer. Our theoretical guarantees assume essentially nothing on the training data, and are verified numerically. These results suggest why the highly non-convex loss of such MNNs can be easily optimized using local updates (e.g., stochastic gradient descent), as observed empirically.", "sha": "6c0b53ab8199b773e63821a0e31b31588f3b4c77", "corpus_id": 3029264 }
{ "doc_id": "17905941", "title": "Gradient Descent Converges to Minimizers", "abstract": "We show that gradient descent converges to a local minimizer, almost surely with random initialization. This is proved by applying the Stable Manifold Theorem from dynamical systems theory.", "sha": "9b8be6c3ebd7a79975067214e5eaea05d4ac2384", "corpus_id": 17905941 }
{ "doc_id": "24813971", "title": "MEMS Electrometer With Femtoampere Resolution for Aerosol Particulate Measurements", "abstract": "Electrostatic charge measurements are at the base of chemical, physical and biological experiments. In this paper, we present an electrometer based on the vibrating capacitance of a microelectromechanical systems (MEMS) resonator for the detection of small currents from ionized particles in an aerosol particle detection system. We use a porous sensing-electrode coupled to a MEMS resonating electrometer. Operating at resonance, charge is collected on the MEMS electrometer and modulated at the resonant frequency and its harmonics. Induced voltage is read with a low-leakage very high-input impedance feedback amplifier. Because of the specific readout technique, a switched-reset is used to prevent charge saturation. Sensitivity improvements are achieved by modifying the low noise-readout amplifier by reducing input-referred noise and parasitic capacitance. The electrometer achieves a noise floor <1 fA, sufficient to measure the current produced by 10 nm diameter particles within an airflow of 1.0 L/min. At this flow rate, the minimum detectable current (1 fA) corresponds to a minimum measurable particle density of 400 cm-3. The MEMS electrometer is compared with and calibrated against a commercial electrometer and a particle counter, respectively.", "sha": "d03a85e714de074fcd95ad10efd2038c4c6b1672", "corpus_id": 24813971 }
{ "doc_id": "5275472", "title": "Forecasting stock price directional movements using technical indicators: Investigating window size effects on one-step-ahead forecasting", "abstract": "Accurate forecasting of directional changes in stock prices is important for algorithmic trading and investment management. Technical analysis has been successfully used in financial forecasting and recently researchers have explored the optimization of parameters for technical indicators. This study investigates the relationship between the window size used for calculating technical indicators and the accuracy of one-step-ahead (variable steps) forecasting. The directions of the future price movements are predicted using technical analysis and machine learning algorithms. Results show a correlation between window size and forecasting step size for the Support Vector Machines approach but not for the other approaches.", "sha": "ef0c711fe5970b09bae538fa5d841231be6f10d1", "corpus_id": 5275472 }
{ "doc_id": "6294728", "title": "An overview of statistical learning theory", "abstract": "Statistical learning theory was introduced in the late 1960's. Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems. A more detailed overview of the theory (without proofs) can be found in Vapnik (1995). In Vapnik (1998) one can find detailed description of the theory (including proofs).", "sha": "4609f6bdc3beab00c9beceaa12dd8101fefe6f1c", "corpus_id": 6294728 }
{ "doc_id": "15817246", "title": "Considerations in Dynamic Time Warping Algorithms for Discrete Word Recognition", "abstract": "The technique of dynamic time warping for time registration of a reference and test utterance has found widespread use in the areas of speaker verification and discrete word recognition. As originally proposed, the algorithm placed strong constraints on the possible set of dynamic paths—namely it was assumed that the initial and final frames of both the test and reference utterances were in exact time synchrony. Because of inherent practical difficulties with satisfying the assumptions under which the above constraints are valid, we have considered some modifications to the dynamic time warping algorithm. In particular, an algorithm in which an uncertainty exists in the registration both for initial and final frames was studied. Another modification constrains the dynamic path to follow (within a given range) the path which is locally optimum at each frame. This modification tends to work well when the location of the final frame of the test utterance is significantly in error due to breath noise, etc. To test the different time warping algorithms a set of ten isolated words spoken by 100 speakers was used. Probability density functions of the distances from each of the 100 versions of a word to a reference version of the word were estimated for each of three dynamic warping algorithms. From these data, it is shown that, based on a set of assumptions about the distributions of the distances, the warping algorithm that minimizes the overall probability of making a word error is the modified time warping algorithm with unconstrained endpoints. A discussion of this key result along with some ideas on where the other modifications would be most useful is included.", "sha": "de212ed4c124091c31800c041acabb5dd2e92a9a", "corpus_id": 15817246 }
{ "doc_id": "33481321", "title": "Flow Motifs in Soccer: What can passing behavior tell us?", "abstract": "In soccer, both individual and team performance is crucial to win matches. Passing is the backbone of the game and forms the basis of important decisions made by managers and owners; such as buying players, picking offensive or defensive strategies or even defining a style of play. These decisions can be supported by analyzing how a player performs and how his style affects team performance. The flow of a player or a team can be studied by finding unique passing motifs from the patterns in the subgraphs of a possession-passing network of soccer games. These flow motifs can be used to analyze individual players and teams based on the diversity and frequency of their involvement in different motifs. Building on the flow motif analyses, we introduce an expected goals model to measure the effectiveness of each style of play. We also make use of a novel way to represent motif data that is easy to understand and can be used to compare players, teams and seasons. Further, we exploit the relationship between play style and the pass probability matrix to support our analysis. Our data set has the last 4 seasons of 6 big European leagues with 8219 matches, 3532 unique players and 155 unique teams. We will use flow motifs to analyze different events, such as for example the transfer of Claudio Bravo to Pep Guardiola’s Manchester City, who Jean Seri is and why he must be an elite midfielder and the difference in attacking style between Lionel Messi and Cristiano Ronaldo. Ultimately, an analysis of Post-Fàbregas Arsenal is conducted wherein different techniques are combined to analyze the impact the acquisition of Mesut Özil and Alexis Sánchez had on the strategies implemented at Arsenal.", "sha": "4a011ce69bdc4d9f223bb6f8ca1f479afdd0c256", "corpus_id": 33481321 }
{ "doc_id": "691081", "title": "Mean Shift: A Robust Approach Toward Feature Space Analysis", "abstract": "A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity preserving smoothing and image segmentation are described as applications. In these algorithms the only user set parameter is the resolution of the analysis, and either gray level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.", "sha": "1314e6ea34a8d749ca6190a0d2dd00b3a1879cc6", "corpus_id": 691081 }
{ "doc_id": "12589178", "title": "Design and Analysis of 6-DOF Triple Scissor Extender Robots With Applications in Aircraft Assembly", "abstract": "A new type of parallel robot mechanism with an extendable structure is presented, and its kinematic properties and design parameters are analyzed. The triple scissor extender (TSE) is a six-degree-of-freedom robotic mechanism for reaching high ceilings and positioning an end effector. Three scissor mechanisms are arranged in parallel, with the bottom ends coupled to linear slides, and the top vertex attached to an end effector plate. Arbitrary positions and orientations of the end effector can be achieved through the coordinated motion of the six linear actuators located at the base. By changing key geometric parameters, the TSE's design can yield a specific desired workspace volume and differential motion behavior. A general kinematic model for diverse TSEs is derived, and the kinematic properties, including workspace, singularity, and the Jacobian singular values, are evaluated. From these expressions, four key design parameters are identified, and their sensitivity upon the workspace volume and the Jacobian singular values is analyzed. A case study in autonomous aircraft assembly is presented using the insights gained from the design parameter studies.", "sha": "07d2e809b3a5490ad14d13d86d0e538d7e31f517", "corpus_id": 12589178 }
{ "doc_id": "42196083", "title": "A 50-MHz Fully Integrated Low-Swing Buck Converter Using Packaging Inductors", "abstract": "Implementation of on-chip passive elements and efficient regulation schemes are key aspects of fully integrated dc-dc converter design. This paper presents a 50-MHz fully integrated buck converter equipped with packaging inductors. These inductors include parasitic inductances of the bonding wires and lead frames in the package. They have significantly better Q factors than the best on-chip inductors implemented on silicon. This paper also presents full-swing and low-swing gate drivers for efficient regulation of high-frequency switching converters. The low-swing driver uses the drop voltage of a diode-connected transistor and is applied in a fabricated converter to reduce the gate driving loss caused by the high switching operation. The proposed converter is designed and fabricated using a 0.13-μm 1-poly 6-metal CMOS process. The fully integrated buck converter achieves 68.7% and 76.8% efficiency for 3.3 V/2.0 V and 2.5 V/1.8 V conversions, respectively, while providing a load current of 250 mA.", "sha": "390fb713dd4b0fbc410d6b83a81729bd3a548b17", "corpus_id": 42196083 }
{ "doc_id": "15780277", "title": "Design issues for monolithic DC-DC converters", "abstract": "This paper presents various ideas for integrating different components of dc-dc converter on to a silicon chip. These converters are intended to process power levels up to 0.5W. Techniques for integrating capacitors and design issues for MOS transistors are discussed. The most complicated design issue involves inductors. Expressions for trace resistance and inductance estimation of on-chip planar spiral inductor on top metal layer of CMOS process are compared. These inductors have high series resistance due to low metal trace thickness, capacitive coupling with substrate and other metal traces, and eddy current loss. As an alternative, a CMOS compatible three-dimensional (3-D) surface micromachining technology known as plastic deformation magnetic assembly (PDMA) is used to fabricate high quality inductors with small footprints. Experimental results from a monolithic buck converter using this PDMA inductor are presented. A major conclusion of this work is that the 3-D \"post-process\" technology is more viable than traditional integrated circuit assembly methods for realizing of micro-power converters.", "sha": "668e6e8d0905ef3b5634b5096be44225b6ee9c02", "corpus_id": 15780277 }
{ "doc_id": "15648302", "title": "Mobility and eccentricity analysis of a helical belt drive in-pipe robot", "abstract": "Pipeline grids of various size and material are pervasive in today's modern society. The frequent inspection and maintenance of such pipeline grids have presented a tremendous challenge. It is advocated that only advanced robot design embedded with intelligent electronics and control algorithms could perform the job. Given the ever increasing demands for intelligent in-pipe robots, various in-pipe drive mechanisms have been reported. One of the simplest is helical drives that have only one degree of freedom. All previously reported in-pipe helical drives are based on independent passive wheels that are tilted at an angle. In this paper, the wheels are replaced by pulleys that are synchronized by a timing belt. This small change results in significant improvement, as highlighted in this paper. The mobility of the robot design is analyzed. Uneven traction force occurring at the bend is also illustrated. An experiment has been conducted to show the steady traction force.", "sha": "c094a9f08d9f73aba426c9bb832cd79f918f080c", "corpus_id": 15648302 }
{ "doc_id": "49568851", "title": "Differential quasi self-complementary (QSC) ultra-wideband (UWB) MIMO antenna", "abstract": "In this paper, a differentially excited ultra-wideband (UWB) MIMO antenna is designed for pattern diversity applications, utilizing quasi self-complementary (QSC) antenna elements. Four QSC elements are realized using half-octagon shaped monopoles, and their complementary cuts from the ground plane. Due to the QSC property, wide impedance bandwidth covering the UWB frequency range of 3–11 GHz is achieved within a very small antenna footprint. The QSC elements are placed symmetrically about the four edges of a square substrate. Two oppositely positioned elements are fed with 180° phase difference, in order to form a single differential pair. Two such differential pairs are placed perpendicular to each other for realizing pattern diversity performance with high value of differential isolation.", "sha": "17c20fc0d0808d1c685159a52101422b8c7d6868", "corpus_id": 49568851 }
{ "doc_id": "25488950", "title": "A Compact Microstrip-Fed Triple Band-Notched UWB Monopole Antenna", "abstract": "In this letter, a novel low-profile microstrip-fed compact triple band-notched ultrawideband (UWB) antenna is proposed. Notch bands around the 3.3-3.8-GHz WiMAX and 5.15-5.85-GHz WLAN frequencies are obtained by etching out two elliptic single complementary split-ring resonators (ESCSRRs) of different dimensions from the radiating patch of the antenna. Furthermore, by placing two rectangular split-ring resonators near the feedline-patch junction of the antenna, rejection for the 7.9-8.4-GHz X-band frequencies is achieved. Design guidelines for implementing the notch-bands at the desired frequency regions are provided. The match between the simulated and experimental results suggests that the proposed antenna can be a good candidate for application in UWB communication systems.", "sha": "cc7cb6acd1a45d2db29f5d0070ddddffb89a5456", "corpus_id": 25488950 }
{ "doc_id": "44111606", "title": "LSTMs Exploit Linguistic Attributes of Data", "abstract": "While recurrent neural networks have found success in a variety of natural language processing applications, they are general models of sequential data. We investigate how the properties of natural language data affect an LSTM’s ability to learn a nonlinguistic task: recalling elements from its input. We find that models trained on natural language data are able to recall tokens from much longer sequences than models trained on non-language sequential data. Furthermore, we show that the LSTM learns to solve the memorization task by explicitly using a subset of its neurons to count timesteps in the input. We hypothesize that the patterns and structure in natural language data enable LSTMs to learn by providing approximate ways of reducing loss, but understanding the effect of different training data on the learnability of LSTMs remains an open question.", "sha": "b220f5bcb4690c54113d653feacf047880983f2c", "corpus_id": 44111606 }
{ "doc_id": "31080775", "title": "STUDYING THE IMPACT OF ORGANIZATIONAL CULTURE ON KNOWLEDGE MANAGEMENT EFFECTIVENESS WITH THE INTERMEDIATE ROLE OF KNOWLEDGE SHARING PROCESS AMONG MELI BANK STAFF IN ILAM PROVINCE", "abstract": "The general aim of this research is studying the amount of impact of organizational culture on knowledge management effectiveness with the intermediate role of knowledge sharing process among staff of Meli bank in Ilam province. The statistical population is limited and includes 420 subjects of Meli bank staff in Ilam province. The sample volume was determined as 200 participants using Cochran's formula, and the sampling method was classification. This sample volume is proportional with each category (the number of Meli bank staff in any town), and based on it some questionnaires were distributed among them. The questionnaire used for the three variables of organizational culture, knowledge management effectiveness and knowledge sharing process was Moon and Lee's standard questionnaire (2014), which has high validity; its reliability, obtained based on Cronbach's alpha, was 0.948, showing that the reliability of the questionnaire is acceptable. In order to consider normality of the data distribution of the variables, the Kolmogorov-Smirnov test was used. The statistical method used for data analysis is modeling structural equations with LISREL software. The result of the research shows that the variable of organizational culture has a direct and positive impact on knowledge sharing process and knowledge management effectiveness. Also, knowledge sharing process has a significant and positive impact on knowledge management effectiveness, and finally, organizational culture has a significant and positive impact on knowledge management effectiveness with the intermediate role of knowledge sharing; all of the hypotheses were accepted with a confidence level of 95%. Therefore it can be concluded that the existence of organizational culture is a motivation for the effect of knowledge sharing on knowledge management effectiveness among staff of Meli bank in Ilam province.", "sha": "1ddadf16a32ccf54252fb8719ddbb229e29ac7ae", "corpus_id": 31080775 }
{ "doc_id": "32192902", "title": "Knowledge Management: An Organizational Capabilities Perspective", "abstract": null, "sha": "3a219bde642e180756daaafc44abf6ab10fbb669", "corpus_id": 32192902 }
{ "doc_id": "37112944", "title": "Mode reconfigurable bistable spiral antenna based on kresling origami", "abstract": "This paper presents a mode reconfigurable origami spiral antenna using the bistable property of Kresling origami. An analytical method is proposed to find a suitable Kresling conical pattern for antenna applications that can hold its shape automatically by its internal tension after being deployed or collapsed. From the deployed (unfolded) state to the collapsed (folded) state, the bifilar spiral antenna is reconfigured from conical to planar with its main beam being reconfigured from two to one. This antenna is circularly polarized from 2.5 GHz to 3.2 GHz with realized gain from 6.2 dB to 8.9 dB and can be used for satellite communication.", "sha": "d37aa85aa5ade4ce8f8a3245570584932df94235", "corpus_id": 37112944 }
{ "doc_id": "20418472", "title": "Utilizing dependency relationships between math expressions in math IR", "abstract": "Current mathematical search systems allow math expressions within a document to be queried using math expressions and keywords. To accept such queries, math search systems must index both math expressions and textual information in documents. Each indexed math expression is usually associated with all the words in its surrounding context within a given window size. However, we found that this context is often ineffective for explaining math expressions in scientific papers. The meaning of an expression is usually defined in the early part of a document, and the meaning of each symbol contained in the expression can be useful for explaining the entire expression. This explanation may not be captured within the context of a math expression, unless we set the context to have a very wide window size. However, widening the window size also increases the proportion of words that are unrelated to the expression. This paper proposes the use of dependency relationships between math expressions to enrich the textual information of each expression. We examine the influence of this enrichment in a math search system. The experimental results show that significantly better precision can be obtained using the enriched textual information rather than the math expressions’ own textual information. This indicates that the enrichment of textual information for each math expression using dependency relationships enhances the math search system.", "sha": "8800817b895c6213d3a53aa40f08cbe9fd9da1b7", "corpus_id": 20418472 }
{ "doc_id": "15275328", "title": "MCAT Math Retrieval System for NTCIR-12 MathIR Task", "abstract": "This paper describes the participation of our MCAT search system in the NTCIR-12 MathIR Task. We introduce three granularity levels of textual information, new approach for generating dependency graph of math expressions, score normalization, cold-start weights, and unification. We find that these modules, except the cold-start weights, have a very good impact on the search performance of our system. The use of dependency graph significantly improves precision of our system, i.e., up to 24.52% and 104.20% relative improvements in the Main and Simto subtasks of the arXiv task, respectively. In addition, the implementation of unification delivers up to 2.90% and 57.14% precision improvements in the Main and Simto subtasks, respectively. Overall, our best submission achieves P@5 of 0.5448 in the Main subtask and 0.5500 in the Simto subtask. In the Wikipedia task, our system also performs well at the MathWikiFormula subtask. At the MathWiki subtask, however, due to a problem with handling queries formed as questions that contain many stop words, our system finishes second.", "sha": "6995bf023d37a5fc10fe60d3783772801994751d", "corpus_id": 15275328 }
{ "doc_id": "15803566", "title": "A Comparison of Land Surface Water Mapping Using the Normalized Difference Water Index from TM, ETM+ and ALI", "abstract": "Remote sensing has more advantages than the traditional methods of land surface water (LSW) mapping because it is a low-cost, reliable information source that is capable of making high-frequency and repeatable observations. The normalized difference water indexes (NDWIs), calculated from various band combinations (green, near-infrared (NIR), or shortwave-infrared (SWIR)), have been successfully applied to LSW mapping. In fact, new NDWIs will become available when Advanced Land Imager (ALI) data are used as the ALI sensor provides one green band (Band 4), two NIR bands (Bands 6 and 7), and three SWIR bands (Bands 8, 9, and 10). Thus, selecting the optimal band or combination of bands is critical when ALI data are employed to map LSW using NDWI. The purpose of this paper is to find the best performing NDWI model of the ALI data in LSW map. In this study, eleven NDWI models based on ALI, Thematic Mapper (TM), and Enhanced Thematic Mapper Plus (ETM+) data were compared to assess the performance of ALI data in LSW mapping, at three different study sites in the Yangtze River Basin, China. The contrast method, Otsu method, and confusion matrix were calculated to evaluate the accuracies of the LSW maps. The accuracies of LSW maps derived from eleven NDWI models showed that five NDWI models of the ALI sensor have more than an overall accuracy of 91% with a Kappa coefficient of 0.78 of LSW maps at three test sites. In addition, the NDWI model, calculated from the green (Band 4: 0.525–0.605 μm) and SWIR (Band 9: 1.550–1.750 μm) bands of the ALI sensor, namely NDWIA4,9, was shown to have the highest LSW mapping accuracy, more than the other NDWI models. Therefore, the NDWIA4,9 is the best indicator for LSW mapping of the ALI sensor. It can be used for mapping LSW with high accuracy.", "sha": "409ca567c228d0bd5abda695669a3ff52d59420c", "corpus_id": 15803566 }
{ "doc_id": "16956260", "title": "A 28-nm CMOS 40-GHz high-resolution digitally controlled oscillator for automotive radar applications", "abstract": "We present the first millimeter-wave digitally controlled oscillator (DCO) in a 28-nm CMOS technology. This DCO is developed as an integral part of all-digital phase-locked loop for automotive radar applications. The DCO achieves a tuning range of 3.6 GHz (39.3–42.9 GHz) with a fine frequency resolution less than 1 MHz and phase noise of −99.5 dBc/Hz at 1-MHz offset at 39.3 GHz. The DCO occupies 0.08 mm2 and consumes 10.5 mW. Impacts on DCO design from closely filled dummies to meet extremely stringent local density rules in the 28-nm CMOS process, which has not been investigated before, are also discussed.", "sha": "9b2fe7a8a380fd26e376d03394ea9e04f7eeac97", "corpus_id": 16956260 }
{ "doc_id": "207030563", "title": "A 56.4-to-63.4 GHz Multi-Rate All-Digital Fractional-N PLL for FMCW Radar Applications in 65 nm CMOS", "abstract": "A mm-wave digital transmitter based on a 60 GHz all-digital phase-locked loop (ADPLL) with wideband frequency modulation (FM) for FMCW radar applications is proposed. The fractional-N ADPLL employs a high-resolution 60 GHz digitally-controlled oscillator (DCO) and is capable of multi-rate two-point FM. It achieves a measured rms jitter of 590.2 fs, while the loop settles within 3 μs. The measured reference spur is only -74 dBc, the fractional spurs are below -62 dBc, with no other significant spurs. A closed-loop DCO gain linearization scheme realizes a GHz-level triangular chirp across multiple DCO tuning banks with a measured frequency error (i.e., nonlinearity) in the FMCW ramp of only 117 kHz rms for a 62 GHz carrier with 1.22 GHz bandwidth. The synthesizer is transformer-coupled to a 3-stage neutralized power amplifier (PA) that delivers +5 dBm to a 50 Ω load. Implemented in 65 nm CMOS, the transmitter prototype (including PA) consumes 89 mW from a 1.2 V supply.", "sha": "aec39f9c8469e0c5432d37996e4e3f0b87c6f694", "corpus_id": 207030563 }
{ "doc_id": "28655547", "title": "A systematic review on the sambuci fructus effect and efficacy profiles.", "abstract": "The berries of European elder are used in traditional German medicine for various complaints. Due to insufficient research data, elderberry fruit was not monographed by the German Commission E at the end of the last century. A comprehensive review of the literature was conducted to summarize the pharmacological and clinical effects of elderberry fruit. Several databases and other sources were searched to identify in vitro and animal studies, and clinical trials investigating elderberry fruit preparations. For the latter, the level of evidence was evaluated as described previously. Elderberry fruit preparations may provide antioxidant, antiviral and antiproliferative effects in vitro. One animal experiment and one clinical trial were able to back the antioxidative impact in terms of a weak antilipidemic effect. Antibacterial and antiinflammatory effects seem possible, but need further support. In rats, an aqueous elderberry fruit extract produced central depression and analgesia and an ethanol fruit extract improved acetic acid-induced colitis. Several in vitro studies together with two exploratory studies in humans and one open study in chimpanzees indicate that the aqueous elderberry extract Sambucol may be useful for the treatment of viral influenza infections. These promising effects of elderberry fruit preparations from experimental and clinical studies should be backed by more rigorous studies before these preparations are recommended in the prevention of diseases and in treatment schedules.", "sha": "760d9cc1a191aeb864925895315341767712a137", "corpus_id": 28655547 }
{ "doc_id": "3188105", "title": "Big data and clustering algorithms", "abstract": "Data mining is the method which is useful for extracting useful information and data is extorted, but the classical data mining approaches cannot be directly used for big data due to their absolute complexity. The data that is been formed by numerous scientific applications and incorporated environment has grown rapidly not only in size but also in variety in recent era. The data collected is of very large amount and there is difficulty in collecting and assessing big data. Clustering algorithms have developed as a powerful meta learning tool which can precisely analyze the volume of data produced by modern applications. The main goal of clustering is to categorize data into clusters such that objects are grouped in the same cluster when they are “similar” according to similarities, traits and behavior. The most commonly used algorithm in clustering are partitioning, hierarchical, grid based, density based, and model based algorithms. A review of clustering and its different techniques in data mining is done considering the criteria's for big data. Where most commonly used and effective algorithms like K-Means, FCM, BIRCH, CLIQUE algorithms are studied and compared on big data perspective.", "sha": "56f2151328d5f71a408d5fa04e9c3e044170210f", "corpus_id": 3188105 }
{ "doc_id": "10187537", "title": "Distributed QoS Evaluation for Real-World Web Services", "abstract": "Quality-of-Service (QoS) is widely employed for describing non-functional characteristics of Web services. Although QoS of Web services has been investigated in a lot of previous works, there is a lack of real-world Web service QoS datasets for validating new QoS based techniques and models of Web services. To study the performance of real-world Web services as well as provide reusable research datasets for promoting the research of QoS-driven Web services, we conduct several large-scale evaluations on real-world Web services. Firstly, addresses of 21,358 Web services are obtained from the Internet. Then, invocation failure probability performance of 150 Web services is assessed by 100 distributed service users. After that, response time and throughput performance of 5,825 Web services are evaluated by 339 distributed service users. Detailed experimental results are presented in this paper and comprehensive Web service QoS datasets are publicly released for future research.", "sha": "75db1aa3d2bbfef6c0739df2edfefbd6320a8837", "corpus_id": 10187537 }
{ "doc_id": "9703480", "title": "Privacy-Preserving Distributed Profile Matching in Proximity-based Mobile Social Networks", "abstract": "The Profile matching means that 2 users scrutiny their personal profiles and is usually the primary step towards effective PMSN. It, however, conflicts with users’ growing privacy issues regarding revealing their personal profiles to finish strangers before deciding to move with them Our protocols modify 2 users to perform profile matching while not revealing any data regarding their profiles on the far side the comparison resul creating new connections in keeping with personal preferences could be a crucial service in mobile social networking, wherever associate initiating user will notice matching users inside physical proximity of him/her. In existing systems for such services, sometimes all the users directly publish their complete profiles for others to go looking. However, in several applications, the users’ personal profiles might contain sensitive data that they are doing not wish to form public. during this paper, we have a tendency to propose FindU, a group of privacy-preserving profile matching schemes for proximitybased mobile social networks. In FindU, associate initiating user will notice from a bunch of users the one whose profile best matches with his/her; to limit the chance of privacy exposure, solely necessary and stripped-down data regarding the personal attributes of the taking part users is changed. 2 increasing levels of user privacy ar outlined, with decreasing amounts of discovered profile data. investing secure multi-party computation (SMC) techniques, we have a tendency to propose novel protocols that understand every of the user privacy levels, which may even be personalised by the users. we offer formal security proofs and performance analysis on our schemes, and show their benefits in each security and potency over progressive schemes. The social proximity between 2 users because the matching metric, that measures the space between their social coordinates with every being a vector precomputed by a trusty central server to represent the situation of a user in a web social network. By comparison, our work doesn't deem the affiliation of PMSN users with one on-line social network and addresses a additional general personal matching drawback for PMSN by supports fine-grained personal profiles and a large spectrum of matching metrics", "sha": "5f908c77356c72b5a92aee4d718be4e4abb529c9", "corpus_id": 9703480 }
{ "doc_id": "1027949", "title": "Concreteness and Subjectivity as Dimensions of Lexical Meaning", "abstract": "We quantify the lexical subjectivity of adjectives using a corpus-based method, and show for the first time that it correlates with noun concreteness in large corpora. These cognitive dimensions together influence how word meanings combine, and we exploit this fact to achieve performance improvements on the semantic classification of adjective-noun pairs.", "sha": "8eeaa2fc57043df64b87590c0c38e0ac8e5f22cb", "corpus_id": 1027949 }
{ "doc_id": "18322326", "title": "Literal and Metaphorical Sense Identification through Concrete and Abstract Context", "abstract": "Metaphor is ubiquitous in text, even in highly technical text. Correct inference about textual entailment requires computers to distinguish the literal and metaphorical senses of a word. Past work has treated this problem as a classical word sense disambiguation task. In this paper, we take a new approach, based on research in cognitive linguistics that views metaphor as a method for transferring knowledge from a familiar, well-understood, or concrete domain to an unfamiliar, less understood, or more abstract domain. This view leads to the hypothesis that metaphorical word usage is correlated with the degree of abstractness of the word’s context. We introduce an algorithm that uses this hypothesis to classify a word sense in a given context as either literal (denotative) or metaphorical (connotative). We evaluate this algorithm with a set of adjectivenoun phrases (e.g., in dark comedy , the adjective dark is used metaphorically; in dark hair, it is used literally) and with the TroFi (Trope Finder) Example Base of literal and nonliteral usage for fifty verbs. We achieve state-of-theart performance on both datasets.", "sha": "cb0d80684acc65c6a6b7d291cb5a0e70b9317899", "corpus_id": 18322326 }
{ "doc_id": "13975573", "title": "Real-time background subtraction based on GPGPU for high-resolution video surveillance", "abstract": "Demand for intelligent surveillance has been increasing, to automatically detect and prevent dangerous situations with surveillance cameras. Image analysis, the most essential element in intelligent surveillance system, has continuously developed and contributed to the improvement. To analyze surveillance videos, foreground segmentation is vital which require background modeling. This paper proposes background modeling method which is robust to illumination variation and shadow area. Also, the proposed method is applicable to high-resolution videos in real time with modification for GPU implementation. We validate our method on different types of dataset including our new benchmark dataset to analyze the result quantitatively and qualitatively. The execution time of proposed method is 228.2 FPS for High Definition videos with NVIDIA GTX660.", "sha": "d146ee87b9465a052d4545edddc5e3abe03cfbf0", "corpus_id": 13975573 }
{ "doc_id": "46921993", "title": "The Singularity Isn't Simple! (However We Look at It) A Random Walk between Science Fiction and Science Fact", "abstract": "It seems to be accepted that intelligence—artificial or otherwise—and ‘the singularity’ are inseparable concepts: ‘The singularity’ will apparently arise from AI reaching a, supposedly particular, but actually poorly-defined, level of sophistication; and an empowered combination of hardware and software will take it from there (and take over from us). However, such wisdom and debate are simplistic in a number of ways: firstly, this is a poor definition of the singularity; secondly, it muddles various notions of intelligence; thirdly, competing arguments are rarely based on shared axioms, so are frequently pointless; fourthly, our models for trying to discuss these concepts at all are often inconsistent; and finally, our attempts at describing any ‘post-singularity’ world are almost always limited by anthropomorphism. In all of these respects, professional ‘futurists’ often appear as confused as storytellers who, through freer licence, may conceivably have the clearer view: perhaps then, that becomes a reasonable place to start. There is no attempt in this paper to propose, or evaluate, any research hypothesis; rather simply to challenge conventions. Using examples from science fiction to illustrate various assumptions behind the AI/singularity debate, this essay seeks to encourage discussion on a number of possible futures based on different underlying metaphysical philosophies. Although properly grounded in science, it eventually looks beyond the technology for answers and, ultimately, beyond the Earth itself.", "sha": "6d6caf1682412e0488002cf04811747b6d4b86b8", "corpus_id": 46921993 }
{ "doc_id": "14636783", "title": "COMPUTING MACHINERY AND INTELLIGENCE", "abstract": "I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms “machine” and “think.” The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous, If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the “imitation game.” It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either “X is A and Y is B” or “X is B and Y is A.” The interrogator is allowed to put questions to A and B thus:", "sha": "02a698470f476de722e78b20d0ddd92d3781bbfc", "corpus_id": 14636783 }
{ "doc_id": "13696025", "title": "Automatic Article Commenting: the Task and Dataset", "abstract": "Comments of online articles provide extended views and improve user engagement. Automatically making comments thus become a valuable functionality for online forums, intelligent chatbots, etc. This paper proposes the new task of automatic article commenting, and introduces a large-scale Chinese dataset with millions of real comments and a human-annotated subset characterizing the comments’ varying quality. Incorporating the human bias of comment quality, we further develop automatic metrics that generalize a broad set of popular reference-based metrics and exhibit greatly improved correlations with human evaluations.", "sha": "6c76bea8342ebba9957e5ddb5ffab9a953398a90", "corpus_id": 13696025 }
{ "doc_id": "15435089", "title": "Overview of the FIRE 2016 Microblog track: Information Extraction from Microblogs Posted during Disasters", "abstract": "The FIRE 2016 Microblog track focused on retrieval of microblogs (tweets posted on Twitter) during disaster events. A collection of about 50,000 microblogs posted during a recent disaster event was made available to the participants, along with a set of seven practical information needs during a disaster situation. The task was to retrieve microblogs relevant to these needs. 10 teams participated in the task, submitting a total of 15 runs. The task resulted in comparison among performances of various microblog retrieval strategies over a benchmark collection, and brought out the challenges in microblog retrieval.", "sha": "e3efbbfca015f1b6d1a45612a42735d352bd3b79", "corpus_id": 15435089 }
{ "doc_id": "8912206", "title": "Groundhog day: near-duplicate detection on Twitter", "abstract": "With more than 340 million messages that are posted on Twitter every day, the amount of duplicate content as well as the demand for appropriate duplicate detection mechanisms is increasing tremendously. Yet there exists little research that aims at detecting near-duplicate content on microblogging platforms. We investigate the problem of near-duplicate detection on Twitter and introduce a framework that analyzes the tweets by comparing (i) syntactical characteristics, (ii) semantic similarity, and (iii) contextual information. Our framework provides different duplicate detection strategies that, among others, make use of external Web resources which are referenced from microposts. Machine learning is exploited in order to learn patterns that help identifying duplicate content. We put our duplicate detection framework into practice by integrating it into Twinder, a search engine for Twitter streams. An in-depth analysis shows that it allows Twinder to diversify search results and improve the quality of Twitter search. We conduct extensive experiments in which we (1) evaluate the quality of different strategies for detecting duplicates, (2) analyze the impact of various features on duplicate detection, (3) investigate the quality of strategies that classify to what exact level two microposts can be considered as duplicates and (4) optimize the process of identifying duplicate content on Twitter. Our results prove that semantic features which are extracted by our framework can boost the performance of detecting duplicates.", "sha": "22272f8b59ebbac76fb53a71d76b5f1dc113d1fe", "corpus_id": 8912206 }
{ "doc_id": "24563273", "title": "Machine learning or discrete choice models for car ownership demand estimation and prediction?", "abstract": "Discrete choice models are widely used to explain transportation behaviors, including a household's decision to own a car. They show how some distinct choice of human behavior or preference influences a decision. They are also used to project future demand estimates to support policy exploration. This latter use for prediction is indirectly aligned with and conditional to the model's estimation which aims to fit the observed data. In contrast, machine learning models are derived to maximize prediction accuracy through mechanisms such as out-of-sample validation, non-linear structure, and automated covariate selection, albeit at the expense of interpretability and sound behavioral theory. We investigate how machine learning models can outperform discrete choice models for prediction of car ownership using transportation household survey data from Singapore. We compare our household car ownership model (multinomial logit model) against various machine learning models (e.g. Random Forest, Support Vector Machines) by using 2008 data to derive, i.e. estimate models that we then use to predict 2012 ownership. The machine learning models are inferior to the discrete choice model when using discrete choice features. However, after engineering features more appropriate for machine learning they are superior. These results highlight both the cost of applying machine learning models in econometric contexts and an opportunity for improved prediction and better urban policy making through machine learning models with appropriate features.", "sha": "6c717a460128126d4f131acd9f8fafead6d5feec", "corpus_id": 24563273 }
{ "doc_id": "244994", "title": "Part-level fully convolutional networks for pedestrian detection", "abstract": "Since pedestrians in videos have a wide range of appearances such as body poses, occlusions, and complex backgrounds, pedestrian detection is a challengeable task. In this paper, we propose part-level fully convolutional networks (FCN) for pedestrian detection. We adopt deep learning to deal with the proposal shifting problem in pedestrian detection. First, we combine convolutional neural networks (CNN) and FCN to align bounding boxes for pedestrians. Then, we perform part-level pedestrian detection based on CNN to recall the lost body parts. Experimental results demonstrate that the proposed method achieves 6.83% performance improvement in log-average miss rate over CifarNet.", "sha": "5b0552a8e0ffdf1b6e7f2573640f888815391dec", "corpus_id": 244994 }
{ "doc_id": "15315497", "title": "Switchable Deep Network for Pedestrian Detection", "abstract": "In this paper, we propose a Switchable Deep Network (SDN) for pedestrian detection. The SDN automatically learns hierarchical features, salience maps, and mixture representations of different body parts. Pedestrian detection faces the challenges of background clutter and large variations of pedestrian appearance due to pose and viewpoint changes and other factors. One of our key contributions is to propose a Switchable Restricted Boltzmann Machine (SRBM) to explicitly model the complex mixture of visual variations at multiple levels. At the feature levels, it automatically estimates saliency maps for each test sample in order to separate background clutters from discriminative regions for pedestrian detection. At the part and body levels, it is able to infer the most appropriate template for the mixture models of each part and the whole body. We have devised a new generative algorithm to effectively pretrain the SDN and then fine-tune it with back-propagation. Our approach is evaluated on the Caltech and ETH datasets and achieves the state-of-the-art detection performance.", "sha": "544519fa0794d41a04307973156016b6c679ffa5", "corpus_id": 15315497 }
{ "doc_id": "28319612", "title": "Fashioning with Networks: Neural Style Transfer to Design Clothes", "abstract": "Convolutional Neural Networks have been highly successful in performing a host of computer vision tasks such as object recognition, object detection, image segmentation and texture synthesis. In 2015, Gatys et. al [7] show how the style of a painter can be extracted from an image of the painting and applied to another normal photograph, thus recreating the photo in the style of the painter. The method has been successfully applied to a wide range of images and has since spawned multiple applications and mobile apps. In this paper, the neural style transfer algorithm is applied to fashion so as to synthesize new custom clothes. We construct an approach to personalize and generate new custom clothes based on a user’s preference and by learning the user’s fashion choices from a limited set of clothes from their closet. The approach is evaluated by analyzing the generated images of clothes and how well they align with the user’s fashion style.", "sha": "84f904a71bee129a1cf00dc97f6cdbe1011657e6", "corpus_id": 28319612 }
{ "doc_id": "14655967", "title": "Size of Multilayer Networks for Exact Learning: Analytic Approach", "abstract": "This article presents a new result about the size of a multilayer neural network computing real outputs for exact learning of a finite set of real samples. The architecture of the network is feedforward, with one hidden layer and several outputs. Starting from a fixed training set, we consider the network as a function of its weights. We derive, for a wide family of transfer functions, a lower and an upper bound on the number of hidden units for exact learning, given the size of the dataset and the dimensions of the input and output spaces.", "sha": "e24990e271779bf4549a75cb443cc1f70d4329aa", "corpus_id": 14655967 }
{ "doc_id": "10203109", "title": "On the approximate realization of continuous mappings by neural networks", "abstract": "In this paper, we prove that any continuous mapping can be approximately realized by Rumelhart-Hinton-Williams' multilayer neural networks with at least one hidden layer whose output functions are sigmoid functions. The starting point of the proof for the one hidden layer case is an integral formula recently proposed by Irie-Miyake and from this, the general case (for any number of hidden layers) can be proved by induction. The two hidden layers case is proved also by using the Kolmogorov-Arnold-Sprecher theorem and this proof also gives non-trivial realizations.", "sha": "386cbc45ceb59a7abb844b5078e5c944f17723b4", "corpus_id": 10203109 }
{ "doc_id": "37020568", "title": "Fear Memory.", "abstract": "Fear memory is the best-studied form of memory. It was thoroughly investigated in the past 60 years mostly using two classical conditioning procedures (contextual fear conditioning and fear conditioning to a tone) and one instrumental procedure (one-trial inhibitory avoidance). Fear memory is formed in the hippocampus (contextual conditioning and inhibitory avoidance), in the basolateral amygdala (inhibitory avoidance), and in the lateral amygdala (conditioning to a tone). The circuitry involves, in addition, the pre- and infralimbic ventromedial prefrontal cortex, the central amygdala subnuclei, and the dentate gyrus. Fear learning models, notably inhibitory avoidance, have also been very useful for the analysis of the biochemical mechanisms of memory consolidation as a whole. These studies have capitalized on in vitro observations on long-term potentiation and other kinds of plasticity. The effect of a very large number of drugs on fear learning has been intensively studied, often as a prelude to the investigation of effects on anxiety. The extinction of fear learning involves to an extent a reversal of the flow of information in the mentioned structures and is used in the therapy of posttraumatic stress disorder and fear memories in general.", "sha": "ea4aeca016bbc1b3ca8bd04aedd95f94304c9637", "corpus_id": 37020568 }
{ "doc_id": "12432640", "title": "A coarse-to-fine approach for fast deformable object detection", "abstract": "We present a method that can dramatically accelerate object detection with part based models. The method is based on the observation that the cost of detection is likely to be dominated by the cost of matching each part to the image, and not by the cost of computing the optimal configuration of the parts as commonly assumed. Therefore accelerating detection requires minimizing the number of part-to-image comparisons. To this end we propose a multiple-resolutions hierarchical part based model and a corresponding coarse-to-fine inference procedure that recursively eliminates from the search space unpromising part placements. The method yields a ten-fold speedup over the standard dynamic programming approach and is complementary to the cascade-of-parts approach of [9]. Compared to the latter, our method does not have parameters to be determined empirically, which simplifies its use during the training of the model. Most importantly, the two techniques can be combined to obtain a very significant speedup, of two orders of magnitude in some cases. We evaluate our method extensively on the PASCAL VOC and INRIA datasets, demonstrating a very high increase in the detection speed with little degradation of the accuracy.", "sha": "02ab3d5881d0fdf9f7d1d9ab6f9caae1bd51ff6d", "corpus_id": 12432640 }
{ "doc_id": "206590483", "title": "Histograms of oriented gradients for human detection", "abstract": "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "sha": "10d6b12fa07c7c8d6c8c3f42c7f1c061c131d4c5", "corpus_id": 206590483 }
{ "doc_id": "37367255", "title": "Ecological Developmental Biology: Integrating Epigenetics, Medicine, and Evolution", "abstract": "Ecological Developmental Biology by Scott F. Gilbert and David Epel is a tremendous achievement of integration. The text discusses fields ranging from biogeography to chemical biology and diverse branches of the phylogenetic tree. It references both classic studies and the most recent primary data to illustrate the dependence of multi-cellular development on environmental context. The authors begin by establishing polyphenism, the ability of a single genotype to generate multiple phenotypes in response to environmental cues, as a common factor across plant and animal taxa. This concept is supported by well-known examples such as the temperature-dependence of sex determination in reptiles and by more exotic polyphenisms like stress-triggered cannibalism in spadefoot toad tadpoles. The authors include detailed molecular explanations for developmental phenomena and use recent epigenetic insights to explain how environment-induced phenotypes can be transmitted between generations. Also covered is the fascinating and surprisingly widespread phenomenon of developmental symbioses, wherein developmental cues are supplied by closely-associated foreign organisms. The second section discusses developmental disruptions caused by aberrant environmental signals and highlights the dramatic effects of teratogenic compounds and endocrine disruptors. Attention is drawn to the provocative hypothesis that disorders such as cancer and obesity have their roots in gene expression patterns established early in life. Although the focus shifts and the scope narrows, the theme of environmental sensitivity during development ties the book together. The final three chapters are likely to be the most controversial. Gilbert and Epel make an argument for the inclusion of organism-environment interaction alongside evolutionary developmental biology in an expanded evolutionary synthesis. In their opinion, evolutionary theory has erred in treating phenotypes as purely a consequence of genes and the environment as a passive selector between alternative phenotypes rather than a source of variation. They conclude with a preliminary attempt at integrating development, epigenetics, and the interaction between organism and environment into modern evolutionary theory. Their ideas are thoughtful and daring, if somewhat speculative. However, there can be no doubt that in this new and rapidly advancing field, detailed mechanisms will emerge to fill the gaps. One of the unusual features of Ecological Developmental Biology is the authors’ tendency to step outside the role of passive reporters and offer their own interpretations. The authors do not avoid the political implications of their ideas, especially where it concerns conservation biology and the effects of environmental contaminants on human health. In the philosophical “Coda,” which deserves a wider audience and perhaps its own book, they address how theories of development and evolution affect our understanding of human nature and the natural world and suggest that ecological developmental biology may modify the concept that evolution is a purely competitive affair. In an era in which research is advancing too fast for textbooks to keep up and the Internet is often a more convenient source of", "sha": "4525370acbd56e18849e18478adb84fb777902b4", "corpus_id": 37367255 }
{ "doc_id": "53303686", "title": "ADAPT-pricing: a dynamic and predictive technique for pricing to maximize revenue in ridesharing platforms", "abstract": "Ridesharing platforms use dynamic pricing as a means to control the network's supply and demand at different locations and times (e.g., Lyft's Prime Time and Uber's Surge Pricing) to increase revenue. These algorithms only consider the network's current supply and demand only at a ride's origin to adjust the price of the ride. In this work, we show how we can increase the platform's revenue while lowering the prices as compared to state-of-the-art algorithms, by considering the network's future demand. Furthermore, we show if rather than setting the price of a ride only based on the supply and demand at its origin, we use predictive supply and demand at both the ride's origin and destination, we can further increase the platform's overall revenue. Using a real-world data set from New York City, we show our pricing method can increase the revenue by up to 15% while reducing the price of the rides by an average of 5%. Furthermore, we show that our methods are resilient to up to 25% error in future demand prediction.", "sha": "3e04708d9fb082d26fe26ba84a55b24e8058ab37", "corpus_id": 53303686 }
{ "doc_id": "31152787", "title": "An On-line Truthful and Individually Rational Pricing Mechanism for Ride-sharing", "abstract": "Ride-sharing has the potential of addressing many socioeconomic challenges related to transportation. The rising popularity of ride-sharing platforms (e.g., Uber, Lyft, DiDi) in addition to the emergence of new applications like food delivery and grocery shopping which use a similar platform, calls for an in-depth and detailed evaluation of various aspects of this problem.\n Auction frameworks and mechanism design, have been widely used for modeling ride-sharing platforms. A key challenge in these approaches is preventing the involving parties from manipulating the platform for their personal gain which in turn, can result in a less satisfactory experience for other parties and/or loss of profit for the platform provider. We introduce a latent space transition model for ride-sharing platforms which drivers can exploit and predict the future supply of the drivers (i.e., available drivers) to their own advantage. Following, we propose a pricing model for ride-sharing platforms which is both truthful and individually rational based on Vickery auctions and show how we can manage the loss of revenue in this approach. We compare our predicting model and pricing model with competing approaches through experiments on New York City's taxi dataset. Our results show that our model can accurately learn the transition patterns of people's ride requests. Furthermore, our pricing mechanism forces drivers to be truthful and takes away any unfair advantage the drivers can achieve by bidding untruthfully. More importantly, our pricing model forces truthfulness without sacrificing much profit unlike what is typical with second-price auction schemes.", "sha": "03b320a3c04cf49a0ad8b67f4a1cce6fe0abdf75", "corpus_id": 31152787 }
{ "doc_id": "13142241", "title": "Student attendance monitoring at the university using NFC", "abstract": "There are several complex business processes in higher education. As the number of university students has tripled in Hungary, the automation of these tasks has become necessary. The Near Field Communication (NFC) technology provides a good opportunity to support the automated execution of several education-related processes. Recently a new challenge was identified at the Budapest University of Technology and Economics. As most of the lecture notes had become available in electronic format, the students, especially the inexperienced freshman ones, did not attend the lectures, significantly decreasing the rate of successful exams. This led to the decision to elaborate an accurate and reliable information system for monitoring the students' attendance at the lectures. Thus we have developed a novel, NFC technology based business use case of student attendance monitoring. In order to meet the requirements of the use case we have implemented a highly autonomous distributed environment assembled by NFC enabled embedded devices, so-called contactless terminals, and a scalable backoffice. Besides the opportunity of contactless card based student identification, the terminals support biometric identification by fingerprint reading. These features enable the implementation of flexible and secure identification scenarios. The attendance monitoring use case has been tested in a pilot project involving about 30 access terminals and more than 1000 students. In this paper we introduce the developed attendance monitoring use case, the implemented NFC enabled system, and the experiences gained during the pilot project.", "sha": "21ccfd05495f979afaa382341343b3aeee90fa70", "corpus_id": 13142241 }
{ "doc_id": "22703889", "title": "Towards a variable topology truss for shoring", "abstract": "This paper introduces a new type of versatile modular reconfigurable robot. This robot is a self-reconfigurable truss structure. The requirements and challenges involved to build a functioning system are explored. These include the functional requirements, and the implications on hardware design, software issues and topological analysis. One potential application for this robot is the shoring of damaged structures during search and rescue operations.", "sha": "b140b0705e86e776437101c7c64268ed9ce0e236", "corpus_id": 22703889 }
{ "doc_id": "7930918", "title": "Design of a spherical robot arm with the Spiral Zipper prismatic joint", "abstract": "A novel prismatic joint called a Spiral Zipper is used to create a 3DOF robot arm in a spherical robot configuration. The Spiral Zipper can be very compact as it has a large extension to compression ratio. An initial prototype has shown a ratio of over 14:1. The Spiral Zipper is very strong in compression, but may be loose under tension and moments. A tether based system ensures the prismatic joint is always in compression while enabling spherical coordinate positioning with a long reach, high force, low mass design. While typically having an order of magnitude higher strength to weight ratio for a given reach compared to standard industrial robot arms, the arm is slower and was not designed for high precision. These characteristics may be applicable for mounting on mobile robots or flying vehicles. This paper introduces the design and testing of several prototypes.", "sha": "ff095070575632a23a7499324e097932a59c4dcc", "corpus_id": 7930918 }
{ "doc_id": "14520834", "title": "PhTM: Phased Transactional Memory", "abstract": "Hybrid transactional memory (HyTM) [3] works in today’s systems, and can use future “best effort” hardware transactional memory (HTM) support to improve performance. Best effort HTM can be substantially simpler than alternative “unbounded” HTM designs being proposed in the literature, so HyTM both supports and encourages an incremental approach to adopting HTM. We introduce Phased Transactional Memory (PhTM), which supports switching between different “phases”, each implemented by a different form of transactional memory support. This allows us to adapt between a variety of different transactional memory implementations according to the current environment and workload. We describe a simple PhTM prototype, and present experimental results showing that PhTM can match the performance and scalability of unbounded HTM implementations better than our previous HyTM prototype when best effort HTM support is available and effective, and is more competitive with state-of-the-art software transactional memory implementations when it is not.", "sha": "e02ec8cd92c8e687b9e343868b07e0898302c72f", "corpus_id": 14520834 }
{ "doc_id": "5277920", "title": "Learning long-range vision for autonomous off-road driving", "abstract": "Most vision-based approaches to mobile robotics suffer from the limitations imposed by stereo obstacle detection, which is short-range and prone to failure. We present a self-supervised learning process for long-range vision that is able to accurately classify complex terrain at distances up to the horizon, thus allowing superior strategic planning. The success of the learning process is due to the self-supervised training data that is generated on every frame: robust, visually consistent labels from a stereo module, normalized wide-context input windows, and a discriminative and concise feature representation. A deep hierarchical network is trained to extract informative and meaningful features from an input image, and the features are used to train a realtime classifier to predict traversability. The trained classifier sees obstacles and paths from 5 to over 100 meters, far beyond the maximum stereo range of 12 meters, and adapts very quickly to new environments. The process was developed and tested on the LAGR mobile robot. Results from a ground truth dataset are given as well as field test results.", "sha": "2d8f527d1a96b0dae209daa6a241cf3255a6ec0d", "corpus_id": 5277920 }
{ "doc_id": "2260178", "title": "Self-supervised Monocular Road Detection in Desert Terrain", "abstract": "We present a method for identifying drivable surfaces in difficult unpaved and offroad terrain conditions as encountered in the DARPA Grand Challenge robot race. Instead of relying on a static, pre-computed road appearance model, this method adjusts its model to changing environments. It achieves robustness by combining sensor information from a laser range finder, a pose estimation system and a color camera. Using the first two modalities, the system first identifies a nearby patch of drivable surface. Computer Vision then takes this patch and uses it to construct appearance models to find drivable surface outward into the far range. This information is put into a drivability map for the vehicle path planner. In addition to evaluating the method’s performance using a scoring framework run on real-world data, the system was entered, and won, the 2005 DARPA Grand Challenge. Post-race log-file analysis proved that without the Computer Vision algorithm, the vehicle would not have driven fast enough to win.", "sha": "8daa5a375e5bc10672ceee2d2f565af1849cfc55", "corpus_id": 2260178 }
{ "doc_id": "35141970", "title": "Measurements and analysis of end-to-end Internet dynamics", "abstract": "Measurements and Analysis of End-to-End Internet Dynamics by Vern Edward Paxson Doctor of Philosophy in Computer Science University of California at Berkeley Prof. Domenico Ferrari, Chair Accurately characterizing end-to-end Internet dynamics—the performance that a user actually obtains from the lengthy series of network links that comprise a path through the Internet—is exceptionally difficult, due to the network's immense heterogeneity. It can be impossible to gauge the generality of findings based on measurements of a handful of paths, yet logistically it has proven very difficult to obtain end-to-end measurements on larger scales. At the heart of our work is a “measurement framework” we devised in which a number of sites around the Internet host a specialized measurement service. By coordinating “probes” between pairs of these sites we can measure end-to-end behavior along O(N^2) paths for a framework consisting of N sites. Consequently, we obtain a superlinear scaling that allows us to measure a rich cross-section of Internet behavior without requiring huge numbers of observation points. 37 sites participated in our study, allowing us to measure more than 1,000 distinct Internet paths. The first part of our work looks at the behavior of end-to-end routing: the series of routers over which a connection's packets travel. Based on 40,000 measurements made using our framework, we analyze: routing “pathologies” such as loops, outages, and flutter; the stability of routes over time; and the symmetry of routing along the two directions of an end-to-end path. We find that pathologies increased significantly over the course of 1995, indicating that, by one metric, routing degraded over the year; that Internet paths are heavily dominated by a single route, but that routing lifetimes range from seconds to many days, with most lasting for days; and that, at the end of 1995, about half of all Internet paths included a major routing asymmetry. The second part of our work studies end-to-end Internet packet dynamics. We analyze 20,000 TCP transfers of 100 Kbyte each to investigate the performance of both the TCP endpoints and the Internet paths. The measurements used for this part of our study are much richer than those for the first part, but require a great degree of attention to issues of calibration, which we address by applying self-consistency checks to the measurements whenever possible. We find that packet filters are capable of a wide range of measurement errors, some of which, if undetected, can significantly taint subsequent analysis. We further find that network clocks exhibit adjustments and skews relative to other clocks frequently enough that a failure to detect and remove these effects will likewise pollute subsequent packet timing analysis. Using TCP transfers for our network path “measurement probes” gains a number of advantages, the chief of which is the ability to probe fine time scales without unduly loading the network. However, using TCP also requires us to accurately distinguish between connection dy-", "sha": "6a645ca226f76d162a5d66fbfc5325f8ffae135b", "corpus_id": 35141970 }
{ "doc_id": "43809677", "title": "Low Cost, High Power Factor, Dimmable, Monolithic AC-Direct LED Driver with On-Chip Step-Dimmer for Outdoor Lighting Applications", "abstract": null, "sha": "a6a875540c5c3a227ff06fdcfa684aa729b57344", "corpus_id": 43809677 }
{ "doc_id": "46152632", "title": "A novel isolated electrolytic capacitor-less single-switch AC-DC offline LED driver with power factor correction", "abstract": "Conventional AC-DC driver circuits for Light-Emitting Diode (LED) lamps require large output capacitance across the LED load to minimize the low frequency current ripple. This large capacitance is usually achieved by using an electrolytic capacitor, which has a lifetime that is at least two times less than that of a LED device. To match the potential lifetime of the LEDs, a new isolated single switch AC-DC high power factor LED driver without any electrolytic capacitors is proposed in this paper. In the proposed circuit, the energy storage capacitor is moved to the rectifier side, with a three-winding transformer used to provide isolation; power factor correction as well as to store and provide the required energy to the output. As a result, the energy storage capacitance is significantly reduced, which allows film capacitor to replace the unreliable electrolytic capacitors. The circuit's operating principles and its characteristics are described in this paper. Simulation and experimental results confirm that a power factor of 0.96 is achieved on a 120Vrms, 12W prototype.", "sha": "530867970f5cf19f5115e49292d91ccaae463ca9", "corpus_id": 46152632 }
{ "doc_id": "49417344", "title": "Improving Text-to-SQL Evaluation Methodology", "abstract": "New resources to make these evaluations easier: a new Advising dataset, plus 7 existing text-to-SQL datasets cleaned, variablized, and put into a single, standard format, with tools for easy use (available at https://github.com/jkkummerfeld/text2sql-data). Evaluations should measure how well systems generalize to realistic unseen data. Yet standard train/test splits, which ensure that no English question is in both train and test, permit the same SQL query to appear in both. Using a simple classifier with a slot-filler as a baseline, we show how the standard question-based split fails to evaluate a system’s generalizability. In addition, by analyzing properties of human-generated and automatically generated text-to-SQL datasets, we show the need to evaluate on more than one dataset to ensure systems perform well on realistic data. And we release improved resources to facilitate such evaluations.", "sha": "cf8ac1c86a4705b1dd173c5f0687b5abe1caed4a", "corpus_id": 49417344 }
{ "doc_id": "15299704", "title": "A framework of sensor-cloud integration opportunities and challenges", "abstract": "In the past few years, wireless sensor networks (WSNs) have been gaining increasing attention because of their potential for enabling novel and attractive solutions in areas such as industrial automation, environmental monitoring, transportation business, health-care etc. If we add this collection of sensor derived data to various Web-based social networks or virtual communities, blogs etc., we can have a remarkable transformation in our ability to \"see\" ourselves and our planet. Our primary goal is to facilitate connecting sensors, people and software objects to build community-centric sensing applications. However, the computational tools needed to launch this exploration may be more appropriately built from the data center \"Cloud\" computing model than the traditional HPC approaches. In this paper, we propose a framework to enable this exploration by integrating sensor networks to the emerging data center \"cloud\" model of computing. But there are many challenges to enable this framework. We propose a pub-sub based model which simplifies the integration of sensor networks with cloud based community-centric applications. Also there is a need for internetworking cloud providers in case of violation of service level agreements with users. We discuss these issues and propose reasonable solutions.", "sha": "bda256469290374c63e1ca4fd417eade6ff5d989", "corpus_id": 15299704 }
{ "doc_id": "16882678", "title": "Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities", "abstract": "This keynote paper: presents a 21st century vision of computing; identifies various computing paradigms promising to deliver the vision of computing utilities; defines Cloud computing and provides the architecture for creating market-oriented Clouds by leveraging technologies such as VMs; provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; presents some representative Cloud platforms especially those developed in industries along with our current work towards realising market-oriented resource allocation of Clouds by leveraging the 3rd generation Aneka enterprise Grid technology; reveals our early thoughts on interconnecting Clouds for dynamically creating an atmospheric computing environment along with pointers to future community research; and concludes with the need for convergence of competing IT paradigms for delivering our 21st century vision.", "sha": "855972b98b09ffb4ada4c3b933d2c848e8e72d6d", "corpus_id": 16882678 }
{ "doc_id": "8942887", "title": "Solving Asymmetric Decision Problems with Influence Diagrams", "abstract": "While influence diagrams have many advantages as a representation framework for Bayesian decision problems, they have a serious drawback in handling asymmetric decision problems. To be represented in an influence diagram, an asymmetric decision problem must be symmetrized. A considerable amount of unnecessary computation may be involved when a symmetrized influence diagram is evaluated by conventional algorithms. In this paper we present an approach for avoiding such unnecessary computation in influence diagram evaluation.", "sha": "603a0ea86aff898457c7d1a27f8614145613107c", "corpus_id": 8942887 }
{ "doc_id": "19980219", "title": "Medical evaluation of suspected child sexual abuse: 2011 update.", "abstract": "The medical evaluation of children with suspected sexual abuse includes more than just the physical examination of the child. The importance of taking a detailed medical history from the parents and a history from the child about physical sensations following sexual contact has been emphasized in other articles in the medical literature. The examination is important, however, and medical providers who examine children need to be aware of published research on findings in nonabused children, studies of healing of injuries, and studies documenting the association between sexual contact and the diagnosis of sexually transmissible infections in children. This article reviews the current approach to interpreting findings in children who may have been sexually abused and why additional research is needed.", "sha": "0828ab62dc72e9d8f813657c5250639470051e46", "corpus_id": 19980219 }
{ "doc_id": "7466409", "title": "Anogenital injuries in child pedestrians run over by low-speed motor vehicles: four cases with findings that mimic child sexual abuse.", "abstract": "OBJECTIVE\nDifferentiating between child sexual abuse and nonintentional causes of anogenital injury can be challenging, and a misdiagnosis can have a profound impact on the child and family. This case series documents an important nonintentional mechanism of anogenital injury that mimics the physical findings of child sexual abuse.\n\n\nMETHODS\nFour children were examined after being run over by a slow-moving motor vehicle. In each case, the wheel of the vehicle passed longitudinally over the child's torso.\n\n\nRESULTS\nTwo children had perianal lacerations, and 2 had hymenal lacerations. One child with hymen injuries was followed for 4 weeks and developed findings identical to those seen in healed sexual abuse.\n\n\nCONCLUSIONS\nChildren run over by motor vehicles should be evaluated for anogenital injury. If such injury is suspected, it should be fully delineated and documented with colposcopy and follow-up examination. Although the possibility of sexual abuse must be considered, awareness of the occurrence of anogenital injuries in children run over by motor vehicles may prevent the misdiagnosis of acute sexual abuse in children. Conversely, children presenting for evaluation of acute or past sexual abuse should be questioned as to whether they were ever run over by a motor vehicle.", "sha": "2a942f89ff1ba8236c016372be2ec75a934bdd94", "corpus_id": 7466409 }
{ "doc_id": "16941047", "title": "Model-Driven ERP Implementation", "abstract": "Enterprise Resource Planning (ERP) implementations are very complex. To obtain a fair level of understanding of the system, it is then necessary to model the supported business processes. However, the problem is the accuracy of the mapping between this model and the actual technical implementation. A solution is to make use of the OMG’s Model-Driven Architecture (MDA) framework. In fact, this framework lets the developer model his system at a high abstraction level and allows the MDA tool to generate the implementation details. This paper presents our results in applying the MDA framework to ERP implementation based on a high level model of the business processes. Then, we show how our prototype is structured and implemented in the IBM/Rational XDE environment", "sha": "be8d148b7f20690c3cc08d516d9ad6d9316311a1", "corpus_id": 16941047 }
{ "doc_id": "13584312", "title": "The Impact of Internet of Things unification with Project Management Disciplines in project-based organizations", "abstract": null, "sha": "addb6315566238bbcad4953300695c8a56b901a4", "corpus_id": 13584312 }
{ "doc_id": "43291586", "title": "Internet of Things (IoT): A Literature Review", "abstract": "One of the buzzwords in Information Technology is the Internet of Things (IoT). The future is the Internet of Things, which will transform real world objects into intelligent virtual objects. The IoT aims to unify everything in our world under a common infrastructure, giving us not only control of the things around us, but also keeping us informed of the state of those things. In light of this, the present study addresses IoT concepts through a systematic review of scholarly research papers, corporate white papers, professional discussions with experts and online databases. Moreover, this research article focuses on definitions, geneses, basic requirements, characteristics and aliases of the Internet of Things. The main objective of this paper is to provide an overview of the Internet of Things, its architectures, and vital technologies and their uses in our daily life. This manuscript will give a good comprehension to new researchers who want to do research in the field of the Internet of Things and will facilitate knowledge accumulation efficiently.", "sha": "ac53687795aea4a0f22dc28096b686ab2d225baf", "corpus_id": 43291586 }
{ "doc_id": "10119665", "title": "CNN-based sensor fusion techniques for multimodal human activity recognition", "abstract": "Deep learning (DL) methods receive increasing attention within the field of human activity recognition (HAR) due to their success in other machine learning domains. Nonetheless, a direct transfer of these methods is often not possible due to domain specific challenges (e.g. handling of multi-modal sensor data, lack of large labeled datasets). In this paper, we address three key aspects for the future development of robust DL methods for HAR: (1) Is it beneficial to apply data specific normalization? (2) How to optimally fuse multimodal sensor data? (3) How robust are these approaches with respect to available training data? We evaluate convolutional neuronal networks (CNNs) on a new large real-world multimodal dataset (RBK) as well as the PAMAP2 dataset. Our results indicate that sensor specific normalization techniques are required. We present a novel pressure specific normalization method which increases the F1-score by ∼ 4.5 percentage points (pp) on the RBK dataset. Further, we show that late- and hybrid fusion techniques are superior compared to early fusion techniques, increasing the F1-score by up to 3.5 pp (RBK dataset). Finally, our results reveal that in particular CNNs based on a shared filter approach have a smaller dependency on the amount of available training data compared to other fusion techniques.", "sha": "d589eb854289015f81c04ae1990b43ee91467b63", "corpus_id": 10119665 }
{ "doc_id": "1967279", "title": "Bayesian Unsupervised Topic Segmentation", "abstract": "This paper describes a novel Bayesian approach to unsupervised topic segmentation. Unsupervised systems for this task are driven by lexical cohesion: the tendency of wellformed segments to induce a compact and consistent lexical distribution. We show that lexical cohesion can be placed in a Bayesian context by modeling the words in each topic segment as draws from a multinomial language model associated with the segment; maximizing the observation likelihood in such a model yields a lexically-cohesive segmentation. This contrasts with previous approaches, which relied on hand-crafted cohesion metrics. The Bayesian framework provides a principled way to incorporate additional features such as cue phrases, a powerful indicator of discourse structure that has not been previously used in unsupervised segmentation systems. Our model yields consistent improvements over an array of state-of-the-art systems on both text and speech datasets. We also show that both an entropy-based analysis and a well-known previous technique can be derived as special cases of the Bayesian framework.1", "sha": "410bc41c4372fd78a68cd08ad39f2a2c5156fc52", "corpus_id": 1967279 }
{ "doc_id": "3177797", "title": "Latent Dirichlet Allocation", "abstract": "We propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams [6], and Hofmann's aspect model , also known as probabilistic latent semantic indexing (pLSI) [3]. In the context of text modeling, our model posits that each document is generated as a mixture of topics, where the continuous-valued mixture proportions are distributed as a latent Dirichlet random variable. Inference and learning are carried out efficiently via variational algorithms. We present empirical results on applications of this model to problems in text modeling, collaborative filtering, and text classification.", "sha": "1e56ed3d2c855f848ffd91baa90f661772a279e1", "corpus_id": 3177797 }
{ "doc_id": "3230557", "title": "The Hallmarks of Aging", "abstract": "Aging is characterized by a progressive loss of physiological integrity, leading to impaired function and increased vulnerability to death. This deterioration is the primary risk factor for major human pathologies, including cancer, diabetes, cardiovascular disorders, and neurodegenerative diseases. Aging research has experienced an unprecedented advance over recent years, particularly with the discovery that the rate of aging is controlled, at least to some extent, by genetic pathways and biochemical processes conserved in evolution. This Review enumerates nine tentative hallmarks that represent common denominators of aging in different organisms, with special emphasis on mammalian aging. These hallmarks are: genomic instability, telomere attrition, epigenetic alterations, loss of proteostasis, deregulated nutrient sensing, mitochondrial dysfunction, cellular senescence, stem cell exhaustion, and altered intercellular communication. A major challenge is to dissect the interconnectedness between the candidate hallmarks and their relative contributions to aging, with the final goal of identifying pharmaceutical targets to improve human health during aging, with minimal side effects.", "sha": "7261469291ba8a9fecf4c1f4f577a555fe01a708", "corpus_id": 3230557 }
{ "doc_id": "18207877", "title": "Privacy preserving data mining using image slicing and visual cryptography", "abstract": "Slicing geometry (image) into different frames and randomizing them can help in hiding a secret message (key) within the image and also preserve privacy. Authenticated users can use the relevant key and extract the original image at its destination. This field termed as visual cryptography or steganography is one of the most secure forms of message transfer available today and is highly suited for image files. The scope of this paper is to combine steganography and cryptography properties in such a way to make it harder to retrieve the image of the secret message. In this work, the image is sliced and randomized, the stored random slices upon match with a search string (query) the original image is retrieved. Multithread approach is combined in the searching process to reduce the time complexity of the search algorithm. This method is particularly advantageous to store large number of records, categorized into different groups, and each group indexed with specific keys and when used at the receiving side the particular group image(s) becomes accessible.", "sha": "4ca915a1d759cc3aec31bcf49976cb11d6b519b5", "corpus_id": 18207877 }
{ "doc_id": "207676583", "title": "Privacy-Preserving Data Mining", "abstract": "A fruitful direction for future data mining research will be the development of techniques that incorporate privacy concerns. Specifically, we address the following question. Since the primary task in data mining is the development of models about aggregated data, can we develop accurate models without access to precise information in individual data records? We consider the concrete case of building a decision-tree classifier from training data in which the values of individual records have been perturbed. The resulting data records look very different from the original records and the distribution of data values is also very different from the original distribution. While it is not possible to accurately estimate original values in individual data records, we propose a novel reconstruction procedure to accurately estimate the distribution of original data values. By using these reconstructed distributions, we are able to build classifiers whose accuracy is comparable to the accuracy of classifiers built with the original data.", "sha": "19a09658e6c05b44136baf54571a884eb1a7b52e", "corpus_id": 207676583 }
{ "doc_id": "13940692", "title": "Global non-rigid alignment of 3-D scans", "abstract": "A key challenge in reconstructing high-quality 3D scans is registering data from different viewpoints. Existing global (multiview) alignment algorithms are restricted to rigid-body transformations, and cannot adequately handle non-rigid warps frequently present in real-world datasets. Moreover, algorithms that can compensate for such warps between pairs of scans do not easily generalize to the multiview case. We present an algorithm for obtaining a globally optimal alignment of multiple overlapping datasets in the presence of low-frequency non-rigid deformations, such as those caused by device nonlinearities or calibration error. The process first obtains sparse correspondences between views using a locally weighted, stability-guaranteeing variant of iterative closest points (ICP). Global positions for feature points are found using a relaxation method, and the scans are warped to their final positions using thin-plate splines. Our framework efficiently handles large datasets---thousands of scans comprising hundreds of millions of samples---for both rigid and non-rigid alignment, with the non-rigid case requiring little overhead beyond rigid-body alignment. We demonstrate that, relative to rigid-body registration, it improves the quality of alignment and better preserves detail in 3D datasets from a variety of scanners exhibiting non-rigid distortion.", "sha": "2b039125f0dd13d0cc03fea5534bec7dd805e048", "corpus_id": 13940692 }
{ "doc_id": "45563715", "title": "IoT-based personal health care monitoring device for diabetic patients", "abstract": "This paper presents a non-invasive breath test to monitor the condition of diabetic patients. It is identified as an easier technique and quick diagnoses of diabetic ketoacidosis (DKA). DKA is a preventable acute complication of type 1 diabetes mellitus. Common diabetic test on patients are done on urinary test and blood ketone test to monitor for diabetes condition. However, those methods are considers as invasive, inconvenient and expensive. Recently, breath acetone has been considered as a new ketone biomarker because it is non-invasive, convenient, and accurate reflection of the body's ketone level. This research presents a method of monitoring ketone level by using breath measurement. Main objective of this research is to present an easy handheld health care on monitoring diabetic level with breath. Method consists of development of hardware connection with Internet of Things (IoT) system to facilitate the process of patients' diagnosis and personal monitoring. In this system, Arduino board is used to read the sensor with sense the breath. Breath value level is log to system using wireless communication. Data collection is interfaced to web page. Ketone level is measured as the amount of breath acetone is collected when patients exhale into a mouthpiece that consists of gas sensor. The reading from Arduino is shared to the database via ESP 8266 Wi-Fi Module and can be accessed by the patients or registered doctors. This research is significant where patients can independently monitor their diabetic health and the IoT system can be alerted directly to medial officers in the hospitals.", "sha": "c1c12d9c0f7032a14bf42b8ed04c74e5ea7f1e89", "corpus_id": 45563715 }
{ "doc_id": "31476596", "title": "Breath Acetone-based Non-invasive Detection of Blood Glucose Levels", "abstract": "There has been a constant demand for the development of non-invasive, sensitive glucose sensor system that offers fast and real-time electronic readout of blood glucose levels. In this article, we propose a new system for detecting blood glucose levels by estimating the concentration of acetone in the exhaled breath. A TGS822 tin oxide (SnO2) sensor has been used to detect the concentration of acetone in the exhaled air. Acetone in exhaled breath showed a correlation with the blood glucose levels. Effects of pressure, temperature and humidity have been considered. Artificial Neural Network (ANN) has been used to extract features from the output waveform of the sensors. The system has been trained and tested with patient data in the blood glucose ranges from 80 mg/dl to 180 mg/dl. Using the proposed system, the blood glucose concentration has been estimated within an error limit of ±7.5 mg/dl.", "sha": "c0fa1cffe8e7d3be38b510fe4e92807248ff2363", "corpus_id": 31476596 }
{ "doc_id": "23682936", "title": "Lane Detection Method Based on Improved RANSAC Algorithm", "abstract": "Lane detection based on computer vision is a key technology of Automatic Drive System for intelligent vehicles. In this paper, we propose a real-time and efficient lane detection algorithm that can detect lanes appearing in urban streets and highway roads under complex background. In order to enhance lane boundary information and to be suitable for various light conditions, we adopt canny algorithm for edge detection to get good feature points. We use the generalized curve lane parameter model, which can describe both straight and curved lanes. We propose an improved random sample consensus (RANSAC) algorithm combined with the least squares technique to estimate lane model parameters based on feature extraction. Experiments are conducted on both real road lane videos captured by Tongji University and Caltech Lane Datasets. The experimental results show that our algorithm is can meet the real time requirement and fit lane boundaries well in various challenging road conditions.", "sha": "2d13ba4e61d28e8a5234e11a5f281a89b89a2db8", "corpus_id": 23682936 }
{ "doc_id": "30710994", "title": "Time-domain neural network: A 48.5 TSOp/s/W neuromorphic chip optimized for deep learning and CMOS technology", "abstract": "Demand for highly energy-efficient hardware for the inference computation of deep neural networks is increasing. Ultimately, fully spatially unrolled architecture where each distributed weight memory has a processing element (PE) for its exclusive use is the most energy-efficient solution because i) it can completely eliminate the energy-hungry data moving for weight fetching, and ii) PEs can consist only of combinational logics generally consuming less power than flip-flops. However, this strategy has not been applied because it requires a prohibitively huge amount of both area and hardware resources. We propose TDNN, which enables the fully spatially unrolled architecture by using 3D stacked ReRAM and the time-domain analog-digital mixed-signal processing that uses delay time as signal. In TDNN, a PE that performs synaptic operation is composed of only 12 logic transistors, which are equivalent to 3 gates. The proof-of-concept chip with SRAM instead of ReRAM shows unprecedentedly high energy efficiency of 48.2 TSop/s/W.", "sha": "d2e0b73f4e2542e687a59c585c54f69a1273f14a", "corpus_id": 30710994 }
{ "doc_id": "2413762", "title": "Origami: A 803-GOp/s/W Convolutional Network Accelerator", "abstract": "An ever-increasing number of computer vision and image/video processing challenges are being approached using deep convolutional neural networks, obtaining state-of-the-art results in object recognition and detection, semantic segmentation, action recognition, optical flow, and super resolution. Hardware acceleration of these algorithms is essential to adopt these improvements in embedded and mobile computer vision systems. We present a new architecture, design, and implementation, as well as the first reported silicon measurements of such an accelerator, outperforming previous work in terms of power, area, and I/O efficiency. The manufactured device provides up to 196 GOp/s on 3.09 $\\text {mm}^{2}$ of silicon in UMC 65-nm technology and can achieve a power efficiency of 803 GOp/s/W. The massively reduced bandwidth requirements make it the first architecture scalable to TOp/s performance.", "sha": "21d7130230162af2a4cc1b9375bfe9b37dbbd499", "corpus_id": 2413762 }
{ "doc_id": "9235968", "title": "Presentation of Acute Megakaryoblastic Leukemia Associated with a GATA-1 Mutation Mimicking the Eruption of Transient Myeloproliferative Disorder.", "abstract": "Children with trisomy 21 are prone to developing hematologic disorders, including transient myeloproliferative disorder (TMD) and acute megakaryoblastic leukemia (AMKL). The papulovesicular eruption of TMD provides an important clue to the diagnosis. In contrast, AMKL rarely has associated cutaneous findings. We report the case of a 22-month-old child with trisomy 21 who presented with the acute onset of diffusely scattered and crusted papules, plaques, and vesicles. A thorough infectious evaluation was negative and the patient was unresponsive to empiric antibiotic and antiinflammatory therapies. Complete blood count (CBC) was notable for mild pancytopenia, with a normal peripheral smear. Two weeks later he was reassessed and found to have a population of blasts on repeat CBC. Subsequent evaluation ultimately led to a diagnosis of AMKL. This is the first reported case of a cutaneous eruption in a young child with Down syndrome and transformed AMKL. When children with trisomy 21 present with the acute onset of crusted papules and vesicles that cannot be accounted for by an infectious etiology, a diagnosis of AMKL should be considered even in the absence of a history of TMD.", "sha": "b9922ae520b55edd2d05ee45fcd4e1b295b8d8c7", "corpus_id": 9235968 }
{ "doc_id": "5036816", "title": "Unsupervised Salient Object Detection via Inferring From Imperfect Saliency Models", "abstract": "Visual saliency detection has become an active research direction in recent years. A large number of saliency models, which can automatically locate objects of interest in images, have been developed. As these models take advantage of different kinds of prior assumptions, image features, and computational methodologies, they have their own strengths and weaknesses and may cope with only one or a few types of images well. Inspired by these facts, this paper proposes a novel salient object detection approach with the idea of inferring a superior model from a variety of previous imperfect saliency models via optimally leveraging the complementary information among them. The proposed approach mainly consists of three steps. First, a number of existing unsupervised saliency models are adopted to provide weak/imperfect saliency predictions for each region in the image. Then, a fusion strategy is used to fuse each image region's weak saliency predictions into a strong one by simultaneously considering the performance differences among various weak predictions and various characteristics of different image regions. Finally, a local spatial consistency constraint that ensures high similarity of the saliency labels for neighboring image regions with similar features is proposed to refine the results. Comprehensive experiments on five public benchmark datasets and comparisons with a number of state-of-the-art approaches can demonstrate the effectiveness of the proposed work.", "sha": "dd35905467c9908efce6c0948a14022e4693a4bc", "corpus_id": 5036816 }
{ "doc_id": "6675391", "title": "Saliency Detection via Absorbing Markov Chain", "abstract": "In this paper, we formulate saliency detection via absorbing Markov chain on an image graph model. We jointly consider the appearance divergence and spatial distribution of salient objects and the background. The virtual boundary nodes are chosen as the absorbing nodes in a Markov chain and the absorbed time from each transient node to boundary absorbing nodes is computed. The absorbed time of transient node measures its global similarity with all absorbing nodes, and thus salient objects can be consistently separated from the background when the absorbed time is used as a metric. Since the time from transient node to absorbing nodes relies on the weights on the path and their spatial distance, the background region on the center of image may be salient. We further exploit the equilibrium distribution in an ergodic Markov chain to reduce the absorbed time in the long-range smooth background regions. Extensive experiments on four benchmark datasets demonstrate robustness and efficiency of the proposed method against the state-of-the-art methods.", "sha": "2d3b73950a4361b3b46af4e1c84bc97be7753217", "corpus_id": 6675391 }
{ "doc_id": "205415001", "title": "A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing.", "abstract": "The brain is capable of massively parallel information processing while consuming only ∼1-100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 103 μm2 devices), displays >500 distinct, non-volatile conductance states within a ∼1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.", "sha": "dcc439a8251364a617a7379d9d9cbbcfe80762a9", "corpus_id": 205415001 }