Dataset schema: src (string, 100 to 132k chars), tgt (string, 10 to 710 chars), paper_id (string, 3 to 9 chars), title (string, 9 to 254 chars), discipline (dict).
The popularity of location-based social networks provides us with a new platform to understand users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) User preferences, which are automatically learned from her location history and 2) Social opinions, which are mined from the location histories of the local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge for traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different categories of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that match the user's preferences using a preference-aware candidate selection algorithm and then infers a score for the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than baselines, while providing location recommendations efficiently.
A location-based and preference-aware travel recommendation system is presented by the authors of REF , which models each individual's personal preferences with a weighted category hierarchy and learns user expertise through an iterative learning model in its offline module.
15040876
Location-based and preference-aware recommendation using sparse geo-social networking data
{ "venue": "SIGSPATIAL '12", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We present an indoor positioning system that measures location using disturbances of the Earth's magnetic field caused by structural steel elements in a building. The presence of these large steel members warps the geomagnetic field in a way that is spatially varying but temporally stable. To localize, we measure the magnetic field using an array of e-compasses and compare the measurement with a previously obtained magnetic map. We demonstrate accuracy within 1 meter 88% of the time in experiments in two buildings and across multiple floors within the buildings. We discuss several constraint techniques that can maintain accuracy as the sample space increases.
In REF an indoor localization method that exploits the Earth's magnetic field disturbances is proposed.
6770567
Indoor location sensing using geo-magnetism
{ "venue": "MobiSys '11", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
The skyline of a d-dimensional dataset contains the points that are not dominated by any other point on all dimensions. Skyline computation has recently received considerable attention in the database community, especially for progressive methods that can quickly return the initial results without reading the entire database. All the existing algorithms, however, have some serious shortcomings which limit their applicability in practice. In this article we develop branch-and-bound skyline (BBS), an algorithm based on nearest-neighbor search, which is I/O optimal, that is, it performs a single access only to those nodes that may contain skyline points. BBS is simple to implement and supports all types of progressive processing (e.g., user preferences, arbitrary dimensionality, etc). Furthermore, we propose several interesting variations of skyline computation, and show how BBS can be applied for their efficient processing.
In REF , the Branch and Bound Skyline (BBS) algorithm is proposed; this algorithm is based on nearest-neighbor search and accesses only nodes that may contain skyline points.
207156711
Progressive skyline computation in database systems
{ "venue": "TODS", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract: Grid computing is a high performance computing environment to solve large-scale computational demands. Grid computing involves resource management, task scheduling, security problems, information management and so on. Task scheduling is a fundamental issue in achieving high performance in grid computing systems. However, efficient scheduling algorithm design and implementation remains a big challenge. In this paper, a heuristic approach based on the particle swarm optimization algorithm is adopted to solve the task scheduling problem in a grid environment. Each particle represents a possible solution, and the position vector is transformed from the continuous variable to the discrete variable. This approach aims to generate an optimal schedule so as to achieve the minimum completion time while completing the tasks. The results of simulated experiments show that the particle swarm optimization algorithm is able to obtain a better schedule than the genetic algorithm.
Zhang and Chen REF presented a heuristic approach based on the particle swarm optimization algorithm to solve the job scheduling problem in a grid environment.
15895005
A Task Scheduling Algorithm Based on PSO for Grid Computing
{ "venue": null, "journal": "International Journal of Computational Intelligence Research", "mag_field_of_study": [ "Computer Science" ] }
Abstract. One of the promises of the service-oriented architecture (SOA) is that complex services can be easily composed using individual services from various service providers. Individual services can be selected and integrated either statically or dynamically based on the service functionalities and performance constraints. For many distributed applications, the runtime performance (e.g. end-to-end delay, overall cost, service reliability and availability) of complex services are very important. In our earlier work, we have studied the service selection problem for complex services with only one QoS constraint. This paper extends the service selection problem to multiple QoS constraints. The problem can be modelled in two ways: the combinatorial model and the graph model. The combinatorial model defines the problem as the multi-dimension multi-choice 0-1 knapsack problem (MMKP). The graph model defines the problem as the multi-constraint optimal path (MCOP) problem. We propose algorithms for both models and study their performances by test cases. We also compare the pros & cons between the two models.
The graph model, on the other hand, is based on the algorithm proposed as a solution to the multi-constraint optimal path problem REF .
1794794
Service Selection Algorithms for Composing Complex Services with Multiple QoS Constraints
{ "venue": "ICSOC", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Social networks such as Facebook, LinkedIn, and Twitter have been a crucial source of information for a wide spectrum of users. In Twitter, popular information that is deemed important by the community propagates through the network. Studying the characteristics of content in the messages becomes important for a number of tasks, such as breaking news detection, personalized message recommendation, friends recommendation, sentiment analysis and others. While many researchers wish to use standard text mining tools to understand messages on Twitter, the restricted length of those messages prevents them from being employed to their full potential. We address the problem of using standard topic models in microblogging environments by studying how the models can be trained on the dataset. We propose several schemes to train a standard topic model and compare their quality and effectiveness through a set of carefully designed experiments from both qualitative and quantitative perspectives. We show that by training a topic model on aggregated messages we can obtain a higher quality of learned model which results in significantly better performance in two real-world classification problems. We also discuss how the state-of-the-art Author-Topic model fails to model hierarchical relationships between entities in Social Media.
REF focused on how to train standard topic models on Twitter data, comparing several message-aggregation schemes.
14633992
Empirical study of topic modeling in Twitter
{ "venue": "SOMA '10", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require handcrafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to converse well. It is able to extract knowledge from both a domain-specific dataset, and from a large, noisy, and general domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model.
REF proposes a model to predict the next sentence given the previous sentences in a dialogue session.
12300158
A Neural Conversational Model
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Human users find it difficult to remember long cryptographic keys. Therefore, researchers have, for a long time, been investigating ways to use biometric features of the user rather than a memorable password or passphrase, in an attempt to produce tough and repeatable cryptographic keys. Our goal is to integrate the volatility of the user's biometric features into the generated key, so as to make the key unpredictable to a hacker who lacks important knowledge about the user's biometrics. In our earlier research, we have incorporated multiple biometric modalities into cryptographic key generation to provide better security. In this paper, we propose an efficient approach based on multimodal biometrics (iris and fingerprint) for generating a secure cryptographic key, where the security is further enhanced with the difficulty of factoring large numbers. At first, the features, minutiae points and texture properties, are extracted from the fingerprint and iris images respectively. Then, the extracted features are fused at the feature level to obtain the multi-biometric template. Finally, the multi-biometric template is used for generating a 256-bit cryptographic key. For experimentation, we have used fingerprint images obtained from publicly available sources and iris images from the CASIA Iris Database. The experimental results show that the generated 256-bit cryptographic key is capable of providing better user authentication and better security.
REF proposed a method to generate a 256-bit secure cryptographic key from the multi-biometrics template.
11667742
Cryptographic Key Generation from Multiple Biometric Modalities: Fusing Minutiae with Iris Feature
{ "venue": null, "journal": "International Journal of Computer Applications", "mag_field_of_study": [ "Computer Science" ] }
Abstract-We present a new solution for real-time head pose estimation. The key to our method is a model-based approach based on the fusion of color and time-of-flight depth data. Our method has several advantages over existing head-pose estimation solutions. It requires no initial setup or knowledge of a pre-built model or training data. The use of additional depth data leads to a robust solution, while maintaining real-time performance. The method outperforms the state-of-the-art in several experiments using extreme situations such as sudden changes in lighting, large rotations, and fast motion.
REF presented a solution for real time head pose estimation based on the fusion of color and time-of-flight depth data.
16431710
Robust head pose estimation by fusing time-of-flight depth and color
{ "venue": "2010 IEEE International Workshop on Multimedia Signal Processing", "journal": "2010 IEEE International Workshop on Multimedia Signal Processing", "mag_field_of_study": [ "Computer Science" ] }
Abstract-We describe Instruction-Set Randomization (ISR), a general approach for safeguarding systems against any type of code-injection attack. We apply Kerckhoffs' principle to create OS process-specific randomized instruction sets (e.g., machine instructions) of the system executing potentially vulnerable software. An attacker who does not know the key to the randomization algorithm will inject code that is invalid for that (randomized) environment, causing a runtime exception. Our approach is applicable to machine-language programs and scripting and interpreted languages. We discuss three approaches (protection for Intel x86 executables, Perl scripts, and SQL queries), one from each of the above categories. Our goal is to demonstrate the generality and applicability of ISR as a protection mechanism. Our emulator-based prototype demonstrates the feasibility of ISR for x86 executables and should be directly usable on a suitably modified processor. We demonstrate how to mitigate the significant performance impact of emulation-based ISR by using several heuristics to limit the scope of randomized (and interpreted) execution to sections of code that may be more susceptible to exploitation. The SQL prototype consists of an SQL query-randomizing proxy that protects against SQL injection attacks with no changes to database servers, minor changes to CGI scripts, and with negligible performance overhead. Similarly, the performance penalty of a randomized Perl interpreter is minimal. Where the performance impact of our proposed approach is acceptable (i.e., in an already-emulated environment, in the presence of programmable or specialized hardware, or in interpreted languages), it can serve as a broad protection mechanism and complement other security mechanisms.
They also demonstrate the applicability of the approach on interpreted languages such as Perl, and later SQL REF .
2121523
On the General Applicability of Instruction-Set Randomization
{ "venue": "IEEE Transactions on Dependable and Secure Computing", "journal": "IEEE Transactions on Dependable and Secure Computing", "mag_field_of_study": [ "Computer Science" ] }
In this paper we present a novel approach to minimally supervised synonym extraction. The approach is based on the word embeddings and aims at presenting a method for synonym extraction that is extensible to various languages. We report experiments with word vectors trained by using both the continuous bag-of-words model (CBoW) and the skip-gram model (SG) investigating the effects of different settings with respect to the contextual window size, the number of dimensions and the type of word vectors. We analyze the word categories that are (cosine) similar in the vector space, showing that cosine similarity on its own is a bad indicator to determine if two words are synonymous. In this context, we propose a new measure, relative cosine similarity, for calculating similarity relative to other cosine-similar words in the corpus. We show that calculating similarity relative to other words boosts the precision of the extraction. We also experiment with combining similarity scores from differently-trained vectors and explore the advantages of using a part-of-speech tagger as a way of introducing some light supervision, thus aiding extraction. We perform both intrinsic and extrinsic evaluation on our final system: intrinsic evaluation is carried out manually by two human evaluators and we use the output of our system in a machine translation task for extrinsic evaluation, showing that the extracted synonyms improve the evaluation metric.
More recently, REF proposed a minimally supervised synonym extraction approach based on neural word embeddings trained with the continuous bag-of-words (CBoW) and skip-gram (SG) models.
15090280
A Minimally Supervised Approach for Synonym Extraction with Word Embeddings
{ "venue": null, "journal": "The Prague Bulletin of Mathematical Linguistics", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Automated tissue characterization is one of the most crucial components of a computer aided diagnosis (CAD) system for interstitial lung diseases (ILDs). Although much research has been conducted in this field, the problem remains challenging. Deep learning techniques have recently achieved impressive results in a variety of computer vision problems, raising expectations that they might be applied in other domains, such as medical image analysis. In this paper, we propose and evaluate a convolutional neural network (CNN), designed for the classification of ILD patterns. The proposed network consists of 5 convolutional layers with 2×2 kernels and LeakyReLU activations, followed by average pooling with size equal to the size of the final feature maps and three dense layers. The last dense layer has 7 outputs, equivalent to the classes considered: healthy, ground glass opacity (GGO), micronodules, consolidation, reticulation, honeycombing and a combination of GGO/reticulation. To train and evaluate the CNN, we used a dataset of 14696 image patches, derived from 120 CT scans from different scanners and hospitals. To the best of our knowledge, this is the first deep CNN designed for the specific problem. A comparative analysis proved the effectiveness of the proposed CNN against previous methods in a challenging dataset. The classification performance demonstrated the potential of CNNs in analyzing lung patterns. Future work includes extending the CNN to three-dimensional data provided by CT volume scans and integrating the proposed method into a CAD system that aims to provide differential diagnosis for ILDs as a supportive tool for radiologists.
REF proposes a specially designed CNN architecture for the classification of ILD patterns.
206749561
Lung Pattern Classification for Interstitial Lung Diseases Using a Deep Convolutional Neural Network
{ "venue": "IEEE Transactions on Medical Imaging", "journal": "IEEE Transactions on Medical Imaging", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Abstract-Resilience to packet loss is a critical requirement in predictive video coding for transmission over packet-switched networks, since the prediction loop propagates errors and causes substantial degradation in video quality. This work proposes an algorithm to optimally estimate the overall distortion of decoder frame reconstruction due to quantization, error propagation, and error concealment. The method recursively computes the total decoder distortion at pixel level precision to accurately account for spatial and temporal error propagation. The accuracy of the estimate is demonstrated via simulation results. The estimate is integrated into a rate-distortion (RD)-based framework for optimal switching between intra-coding and inter-coding modes per macroblock. The cost in computational complexity is modest. The framework is further extended to optimally exploit feedback/acknowledgment information from the receiver/network. Simulation results both with and without a feedback channel demonstrate that precise distortion estimation enables the coder to achieve substantial and consistent gains in PSNR over known state-of-the-art RD- and non-RD-based mode switching methods.
The work in REF proposes an algorithm to optimally estimate the overall distortion of decoder frame reconstruction due to quantization, error propagation, and error concealment.
1408385
Video coding with optimal inter/intra-mode switching for packet loss resilience
{ "venue": "IEEE Journal on Selected Areas in Communications", "journal": "IEEE Journal on Selected Areas in Communications", "mag_field_of_study": [ "Computer Science" ] }
We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. This reduction has several advantages: we can (1) learn relation-extraction models by extending recent neural reading-comprehension techniques, (2) build very large training sets for those models by combining relation-specific crowd-sourced questions with distant supervision, and even (3) do zero-shot learning by extracting new relation types that are only specified at test-time, for which we have no labeled training examples. Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task.
REF reduced relation extraction to answering simple reading comprehension questions.
793385
Zero-Shot Relation Extraction via Reading Comprehension
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
The KNOWITALL system aims to automate the tedious process of extracting large collections of facts (e.g., names of scientists or politicians) from the Web in an unsupervised, domain-independent, and scalable manner. The paper presents an overview of KNOWITALL's novel architecture and design principles, emphasizing its distinctive ability to extract information without any hand-labeled training examples. In its first major run, KNOWITALL extracted over 50,000 facts, but suggested a challenge: How can we improve KNOWITALL's recall and extraction rate without sacrificing precision? This paper presents three distinct ways to address this challenge and evaluates their performance. Pattern Learning learns domain-specific extraction rules, which enable additional extractions. Subclass Extraction automatically identifies sub-classes in order to boost recall. List Extraction locates lists of class instances, learns a "wrapper" for each list, and extracts elements of each list. Since each method bootstraps from KNOWITALL's domain-independent methods, the methods also obviate hand-labeled training examples. The paper reports on experiments, focused on named-entity extraction, that measure the relative efficacy of each method and demonstrate their synergy. In concert, our methods gave KNOWITALL a 4-fold to 8-fold increase in recall, while maintaining high precision, and discovered over 10,000 cities missing from the Tipster Gazetteer.
Among recent work on this topic, it is worth mentioning REF , where the authors introduced KnowItAll, a system able to extract information from the Web without hand-labeled training examples.
7162988
Unsupervised Named-Entity Extraction from the Web: An Experimental Study
{ "venue": "ARTIFICIAL INTELLIGENCE", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We exhibit a strong link between frequentist PAC-Bayesian bounds and the Bayesian marginal likelihood. That is, for the negative log-likelihood loss function, we show that the minimization of PAC-Bayesian generalization bounds maximizes the Bayesian marginal likelihood. This provides an alternative explanation to the Bayesian Occam's razor criteria, under the assumption that the data is generated by an i.i.d. distribution. Moreover, as the negative log-likelihood is an unbounded loss function, we motivate and propose a PAC-Bayesian theorem tailored for the sub-Gamma loss family, and we show that our approach is sound on classical Bayesian linear regression tasks.
PAC-Bayesian bounds for the NLL loss function are intimately related to Bayesian inference REF .
930133
PAC-Bayesian Theory Meets Bayesian Inference
{ "venue": "Advances in Neural Information Processing Systems 29 (NIPS 2016), p. 1884-1892", "journal": null, "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract-A Quality-of-Service (QoS) routing protocol is developed for mobile ad hoc networks. It can establish QoS routes with reserved bandwidth on a per flow basis in a network employing TDMA. An efficient algorithm for calculating the end-to-end bandwidth on a path is developed and used together with the route discovery mechanism of AODV to setup QoS routes. In our simulations the QoS routing protocol produces higher throughput and lower delay than its best-effort counterpart.
Zhu and Corson REF developed a QoS routing protocol for mobile ad hoc networks using TDMA.
5543407
QoS routing for mobile ad hoc networks
{ "venue": "Proceedings.Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies", "journal": "Proceedings.Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies", "mag_field_of_study": [ "Computer Science" ] }
We introduce the concept of dynamic image, a novel compact representation of videos useful for video analysis especially when convolutional neural networks (CNNs) are used. The dynamic image is based on the rank pooling concept and is obtained through the parameters of a ranking machine that encodes the temporal evolution of the frames of the video. Dynamic images are obtained by directly applying rank pooling on the raw image pixels of a video producing a single RGB image per video. This idea is simple but powerful as it enables the use of existing CNN models directly on video data with fine-tuning. We present an efficient and effective approximate rank pooling operator, speeding it up orders of magnitude compared to rank pooling. Our new approximate rank pooling CNN layer allows us to generalize dynamic images to dynamic feature maps and we demonstrate the power of our new representations on standard benchmarks in action recognition achieving state-of-the-art performance.
Bilen et al. REF introduced the dynamic image network to generate dynamic images for action videos.
474607
Dynamic Image Networks for Action Recognition
{ "venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
Peer-to-peer communication has been recently considered as a popular issue for local area services. An innovative resource allocation scheme is proposed to improve the performance of mobile peer-to-peer, i.e., device-to-device (D2D), communications as an underlay in the downlink (DL) cellular networks. To optimize the system sum rate over the resource sharing of both D2D and cellular modes, we introduce a reverse iterative combinatorial auction as the allocation mechanism. In the auction, all the spectrum resources are considered as a set of resource units, which as bidders compete to obtain business while the packages of the D2D pairs are auctioned off as goods in each auction round. We first formulate the valuation of each resource unit, as a basis of the proposed auction. And then a detailed non-monotonic descending price auction algorithm is explained depending on the utility function that accounts for the channel gain from D2D and the costs for the system. Further, we prove that the proposed auction-based scheme is cheat-proof, and converges in a finite number of iteration rounds. We explain non-monotonicity in the price update process and show lower complexity compared to a traditional combinatorial allocation. The simulation results demonstrate that the algorithm efficiently leads to a good performance on the system sum rate.
In REF , the authors addressed the downlink resource allocation problem for users in D2D and cellular communications by proposing a reverse iterative combinatorial auction approach.
13233075
Efficiency Resource Allocation for Device-to-Device Underlay Communication Systems: A Reverse Iterative Combinatorial Auction Based Approach
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
We propose SWAN, a stateless network model which uses distributed control algorithms to deliver service differentiation in mobile wireless ad hoc networks in a simple, scalable and robust manner. The proposed architecture is designed to handle both real-time UDP traffic, and best effort UDP and TCP traffic without the need for the introduction and management of per-flow state information in the network. SWAN supports per-hop and end-to-end control algorithms that primarily rely on the efficient operation of TCP/IP protocols. In particular, SWAN uses local rate control for best-effort traffic, and sender-based admission control for real-time UDP traffic. Explicit congestion notification (ECN) is used to dynamically regulate admitted real-time sessions in the face of network dynamics brought on by mobility or traffic overload conditions. SWAN does not require the support of a QoS-capable MAC to deliver service differentiation. Rather, real-time services are built using existing best effort wireless MAC technology. Simulation, analysis, and results from an experimental wireless testbed show that real-time applications experience low and stable delays under various multihop, traffic, and mobility conditions. Index Terms-Service differentiation, quality of service, wireless ad hoc networks.
The work REF proposes SWAN, a stateless network model which uses distributed control algorithms to deliver service differentiation.
9235640
Supporting service differentiation for real-time and best-effort traffic in stateless wireless ad hoc networks (SWAN
{ "venue": "IEEE Transactions on Mobile Computing", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.
REF conducts training on both real images and synthetic images mapped into the domain of realistic images using a GAN, for gaze and hand pose estimation.
8229065
Learning from Simulated and Unsupervised Images through Adversarial Training
{ "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
Analysis of blockchain data is useful for both scientific research and commercial applications. We present BlockSci, an open-source software platform for blockchain analysis. BlockSci is versatile in its support for different blockchains and analysis tasks. It incorporates an in-memory, analytical (rather than transactional) database, making it several hundred times faster than existing tools. We describe BlockSci's design and present four analyses that illustrate its capabilities. This is a working paper that accompanies the first public release of BlockSci, available at github.com/citp/BlockSci. We seek input from the community to further develop the software and explore other potential applications.
The authors of REF proposed BlockSci, an open-source, scalable blockchain analysis system that supports various blockchains and analysis tasks.
41502911
BlockSci: Design and applications of a blockchain analysis platform
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Increasingly ubiquitous communication networks and connectivity via portable devices have engendered a host of applications in which sources, for example people and environmental sensors, send updates of their status to interested recipients. These applications desire status updates at the recipients to be as timely as possible; however, this is typically constrained by limited network resources. In this paper, we employ a time-average age metric for the performance evaluation of status update systems. We derive general methods for calculating the age metric that can be applied to a broad class of service systems. We apply these methods to queue-theoretic system abstractions consisting of a source, a service facility and monitors, with the model of the service facility (physical constraints) a given. The queue discipline of first-come-first-served (FCFS) is explored. We show the existence of an optimal rate at which a source must generate its information to keep its status as timely as possible at all its monitors. This rate differs from those that maximize utilization (throughput) or minimize status packet delivery delay. While our abstractions are simpler than their real-world counterparts, the insights obtained, we believe, are a useful starting point in understanding and designing systems that support real time status updates.
It was shown in REF that there exists an optimal generation rate for the status updates to minimize the average AoI, which is different from those that can maximize the throughput or minimize the delay.
12810772
Real-time status: How often should one update?
{ "venue": "2012 Proceedings IEEE INFOCOM", "journal": "2012 Proceedings IEEE INFOCOM", "mag_field_of_study": [ "Computer Science" ] }
Background: In recent years, biological event extraction has emerged as a key natural language processing task, aiming to address the information overload problem in accessing the molecular biology literature. The BioNLP shared task competitions have contributed to this recent interest considerably. The first competition (BioNLP'09) focused on extracting biological events from Medline abstracts from a narrow domain, while the theme of the latest competition (BioNLP-ST'11) was generalization and a wider range of text types, event types, and subject domains were considered. We view event extraction as a building block in larger discourse interpretation and propose a two-phase, linguistically-grounded, rule-based methodology. In the first phase, a general, underspecified semantic interpretation is composed from syntactic dependency relations in a bottom-up manner. The notion of embedding underpins this phase and it is informed by a trigger dictionary and argument identification rules. Coreference resolution is also performed at this step, allowing extraction of inter-sentential relations. The second phase is concerned with constraining the resulting semantic interpretation by shared task specifications. We evaluated our general methodology on core biological event extraction and speculation/negation tasks in three main tracks of BioNLP-ST'11 (GENIA, EPI, and ID). We achieved competitive results in GENIA and ID tracks, while our results in the EPI track leave room for improvement. One notable feature of our system is that its performance across abstracts and article bodies is stable. Coreference resolution results in minor improvement in system performance. Due to our interest in discourse-level elements, such as speculation/negation and coreference, we provide a more detailed analysis of our system performance in these subtasks. The results demonstrate the viability of a robust, linguistically-oriented methodology, which clearly distinguishes general semantic interpretation from shared task specific aspects, for biological event extraction. Our error analysis pinpoints some shortcomings, which we plan to address in future work within our incremental system development methodology.
Some rule-based systems have also reported competitive results on this task REF .
7926291
Biological event composition
{ "venue": "BMC Bioinformatics", "journal": "BMC Bioinformatics", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose an unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation.
Garg et al. REF propose to learn a single-view depth estimation CNN using projection errors to a calibrated stereo twin for supervision.
299085
Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue
{ "venue": "ECCV", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Distributed representations of words have proven extremely useful in numerous natural language processing tasks. Their appeal is that they can help alleviate data sparsity problems common to supervised learning. Methods for inducing these representations require only unlabeled language data, which are plentiful for many natural languages. In this work, we induce distributed representations for a pair of languages jointly. We treat it as a multitask learning problem where each task corresponds to a single word, and task relatedness is derived from co-occurrence statistics in bilingual parallel data. These representations can be used for a number of crosslingual learning tasks, where a learner can be trained on annotations present in one language and applied to test data in another. We show that our representations are informative by using them for crosslingual document classification, where classifiers trained on these representations substantially outperform strong baselines (e.g. machine translation) when applied to a new language.
REF treated the task as a multi-task learning problem where each task corresponds to a single word, and the task relatedness is derived from cooccurrence statistics in bilingual parallel corpora.
6758088
Inducing Crosslingual Distributed Representations of Words
{ "venue": "COLING", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Our goal is to explore characteristics of the environment that provide opportunities for caching, prefetching, coverage planning, and resource reservation. We conduct a one-month measurement study of locality phenomena among wireless web users and their association patterns on a major university campus using the IEEE 802.11 wireless infrastructure. We evaluate the performance of different caching paradigms, such as single user cache, cache attached to an access point (AP), and peer-to-peer caching. In several settings such caching mechanisms could be beneficial. Unlike other measurement studies in wired networks in which 15% to 40% of documents draw 70% of web accesses, our traces indicate that 13% of unique URLs draw this number of web accesses. In addition, the overall ideal hit ratios of the user cache, cache attached to an access point, and peer-to-peer caching paradigms (where peers are co-resident within an AP) are 51%, 55%, and U%, respectively. We distinguish wireless clients based on their inter-building mobility, their visits to APs, their continuous walks in the wireless infrastructure, and their wireless information access during these periods. We model the associations as a Markov chain using as state information the most recent AP visits. We can predict with high probability (86%) the next AP with which a wireless client will associate. Also, there are APs with a high percentage of user revisits. Such measurements can benefit protocols and algorithms that aim to improve the performance of the wireless infrastructures by load balancing, admission control, and resource reservation across APs.
Our earlier work REF models the associations of each wireless client as a Markov chain in which a state corresponds to an AP that the client has visited.
52316294
Analysis of wireless information locality and association patterns in a campus
{ "venue": "IEEE INFOCOM 2004", "journal": "IEEE INFOCOM 2004", "mag_field_of_study": [ "Computer Science" ] }
In this paper we present an analysis of an AltaVista Search Engine query log consisting of approximately 1 billion entries for search requests over a period of six weeks. This represents almost 285 million user sessions, each an attempt to fill a single information need. We present an analysis of individual queries, query duplication, and query sessions. We also present results of a correlation analysis of the log entries, studying the interaction of terms within queries. Our data supports the conjecture that web users differ significantly from the user assumed in the standard information retrieval literature. Specifically, we show that web users type in short queries, mostly look at the first 10 results only, and seldom modify the query. This suggests that traditional information retrieval techniques may not work well for answering web search requests. The correlation analysis showed that the most highly correlated items are constituents of phrases. This result indicates it may be useful for search engines to consider search terms as parts of phrases even if the user did not explicitly specify them as such.
Silverstein et al. REF looked at key query patterns in the AltaVista search engine.
10184913
Analysis of a very large web search engine query log
{ "venue": "SIGF", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. Reinforcement learning algorithms discover policies that maximize reward, but do not necessarily guarantee safety during learning or execution phases. We introduce a new approach to learn optimal policies while enforcing properties expressed in temporal logic. To this end, given the temporal logic specification that is to be obeyed by the learning system, we propose to synthesize a reactive system called a shield. The shield is introduced in the traditional learning process in two alternative ways, depending on the location at which the shield is implemented. In the first one, the shield acts each time the learning agent is about to make a decision and provides a list of safe actions. In the second way, the shield is introduced after the learning agent. The shield monitors the actions from the learner and corrects them only if the chosen action causes a violation of the specification. We discuss which requirements a shield must meet to preserve the convergence guarantees of the learner. Finally, we demonstrate the versatility of our approach on several challenging reinforcement learning scenarios.
Temporal logic is used in conjunction with an abstraction of the system dynamics to shield the learning process from unsafe actions in REF .
3132647
Safe Reinforcement Learning via Shielding
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
This article introduces a novel family of decentralised caching policies for wireless networks, referred to as spatial multi-LRU. Based on these, cache inventories are updated in a way that provides content diversity to users that are covered by, and thus have access to, more than one station. Two variations are proposed, the multi-LRU-One and -All, which differ in the number of replicas inserted in the involved edge caches. Che-like approximations are proposed to accurately predict their hit probability under the Independent Reference Model (IRM). For IRM traffic multi-LRU-One outperforms multi-LRU-All, whereas when the traffic exhibits temporal locality the -All variation can perform better.
Inspired from the Least Recently Used (LRU) replacement principle, a multi-coverage caching policy at the edge-nodes is proposed in REF , where caches are updated in a way that provides content diversity to users who are covered by more than one node.
1838597
null
null
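A minimal sketch of the multi-LRU idea from the abstract above: a request is a hit if any cache covering the user holds the object, and on a miss either one replica (multi-LRU-One) or one replica per covering cache (multi-LRU-All) is inserted. Which single cache receives the replica, and how hits refresh recency, are assumptions here.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.items = capacity, OrderedDict()

    def hit(self, obj):
        if obj in self.items:
            self.items.move_to_end(obj)        # refresh recency on a hit
            return True
        return False

    def insert(self, obj):
        self.items[obj] = True
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)     # evict the least recently used object

def request(obj, covering_caches, policy="one"):
    """Serve a request from the caches covering the user (spatial multi-LRU).
    On a miss, 'one' inserts a single replica, 'all' inserts into every covering cache."""
    if any(c.hit(obj) for c in covering_caches):
        return True                            # hit in at least one covering cache
    targets = covering_caches[:1] if policy == "one" else covering_caches
    for c in targets:
        c.insert(obj)
    return False
```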
Abstract. In this paper, we study the problem of semi-supervised image recognition, which is to learn classifiers using both labeled and unlabeled images. We present Deep Co-Training, a deep learning based method inspired by the Co-Training framework [1] . The original Co-Training learns two classifiers on two views which are data from different sources that describe the same instances. To extend this concept to deep learning, Deep Co-Training trains multiple deep neural networks to be the different views and exploits adversarial examples to encourage view difference, in order to prevent the networks from collapsing into each other. As a result, the co-trained networks provide different and complementary information about the data, which is necessary for the Co-Training framework to achieve good results. We test our method on SVHN, CIFAR-10/100 and ImageNet datasets, and our method outperforms the previous state-of-the-art methods by a large margin.
A recent approach REF extended the co-training strategy to 2D deep networks and multiple views, using adversarial examples to encourage view differences to boost performance.
3966049
Deep Co-Training for Semi-Supervised Image Recognition
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Whenever multimedia data such as video, audio and text are streamed in mobile ad hoc networks, traffic increases and the network becomes congested. The present routing protocols are not able to cope with this situation. It is observed that network congestion is the dominant reason for packet loss, longer delay and delay jitter in streaming video. Most of the present routing protocols are not designed to adapt to congestion control. We propose a new routing protocol, Congestion Adaptive AODV Routing Protocol (CA-AODV), to address the congestion issues considering delay, packet loss and routing overhead. To evaluate its performance, we have considered MPEG-4 for streaming video data using the network simulator (NS2). CA-AODV outperforms present protocols in delivery ratio and delay, while introducing less routing protocol overhead. The result demonstrates that integrating congestion adaptive mechanisms with AODV is a promising way to improve performance for heavy traffic load in multimedia-based mobile ad hoc networks.
In REF , a congestion adaptive AODV (CA-AODV) routing protocol has been developed for streaming video in mobile ad hoc networks especially designed for multimedia applications.
18511813
CA-AODV: Congestion Adaptive AODV Routing Protocol for Streaming Video in Mobile Ad Hoc Networks
{ "venue": "IJCNS", "journal": "IJCNS", "mag_field_of_study": [ "Computer Science" ] }
Network survivability-the ability to maintain operation when one or a few network components fail-is indispensable for present-day networks. In this paper, we characterize three main components in establishing network survivability for an existing network, namely, (1) determining network connectivity, (2) augmenting the network, and (3) finding disjoint paths. We present a concise overview of network survivability algorithms, where we focus on presenting a few polynomial-time algorithms that could be implemented by practitioners and give references to more involved algorithms.
For an overview of recovery algorithms we refer to REF .
14276831
An Overview of Algorithms for Network Survivability
{ "venue": null, "journal": "International Scholarly Research Notices", "mag_field_of_study": [ "Computer Science" ] }
Object detectors have hugely profited from moving towards an end-to-end learning paradigm: proposals, features, and the classifier becoming one neural network improved results two-fold on general object detection. One indispensable component is non-maximum suppression (NMS), a post-processing algorithm responsible for merging all detections that belong to the same object. The de facto standard NMS algorithm is still fully hand-crafted, suspiciously simple, and -being based on greedy clustering with a fixed distance threshold -forces a trade-off between recall and precision. We propose a new network architecture designed to perform NMS, using only boxes and their score. We report experiments for person detection on PETS and for general object categories on the COCO dataset. Our approach shows promise providing improved localization and occlusion handling.
REF learn a deep neural network to perform the NMS function using predicted boxes and their corresponding scores.
7211062
Learning Non-maximum Suppression
{ "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
Abstract Electronic medical records (EMRs) are critical, highly sensitive private information in healthcare, and need to be frequently shared among peers. Blockchain provides a shared, immutable and transparent history of all the transactions to build applications with trust, accountability and transparency. This provides a unique opportunity to develop a secure and trustable EMR data management and sharing system using blockchain. In this paper, we present our perspectives on blockchain based healthcare data management, in particular, for EMR data sharing between healthcare providers and for research studies. We propose a framework on managing and sharing EMR data for cancer patient care. In collaboration with Stony Brook University Hospital, we implemented our framework in a prototype that ensures privacy, security, availability, and fine-grained access control over EMR data. The proposed work can significantly reduce the turnaround time for EMR sharing, improve decision making for medical care, and reduce the overall cost.
In REF , the authors proposed a privacy-aware framework for managing and sharing electronic medical record data for cancer patient care based on blockchain technology.
8776796
Secure and Trustable Electronic Medical Records Sharing using Blockchain
{ "venue": "AMIA ... Annual Symposium proceedings. AMIA Symposium", "journal": "AMIA ... Annual Symposium proceedings. AMIA Symposium", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
A novel framework is introduced for visual event detection. Visual events are viewed as stochastic temporal processes in the semantic concept space. In this concept-centered approach to visual event modeling, the dynamic pattern of an event is modeled through the collective evolution patterns of the individual semantic concepts in the course of the visual event. Video clips containing different events are classified by employing information about how well their dynamics in the direction of each semantic concept matches those of a given event. Results indicate that such a data-driven statistical approach is in fact effective in detecting different visual events such as exiting car, riot, and airplane flying.
Ebadollahi et al. detected novel visual events by modeling them as stochastic temporal processes in the semantic concept space REF .
6919512
Visual Event Detection using Multi-Dimensional Concept Dynamics
{ "venue": "2006 IEEE International Conference on Multimedia and Expo", "journal": "2006 IEEE International Conference on Multimedia and Expo", "mag_field_of_study": [ "Computer Science" ] }
Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fuelled by the recent adaptation of a variety of enabling device technologies such as RFID tags and readers, near field communication (NFC) devices and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a cloud centric vision for worldwide implementation of Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A cloud implementation using Aneka, which is based on interaction of private and public clouds, is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at the technological research community.
The study REF presents a cloud-centric vision for the worldwide implementation of IoT and implements a cloud-based IoT application using the Aneka cloud service and Microsoft Azure.
204982032
Internet of Things (IoT): A Vision, Architectural Elements, and Future Directions
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Abstract-With the advent of small form-factor devices, protocol standardization, and robust protocol implementations, multihop mobile networks are witnessing widespread deployment. The monitoring of such networks is crucial for their robust operation. To this end, this paper presents DAMON, a distributed system for monitoring multi-hop mobile networks. DAMON uses agents within the network to monitor network behavior and send collected measurements to data repositories. DAMON's generic architecture supports the monitoring of a wide range of protocol, device, and network parameters. Other key features of DAMON include seamless support for multiple repositories, auto-discovery of sinks by the agents, and resiliency of agents to repository failures. We have implemented DAMON agents that collect statistics on data traffic and the Ad hoc On-demand Distance Vector (AODV) routing protocol. We have used our implementation to monitor an ad hoc network at the 58th Internet Engineering Task Force (IETF) meeting held November 2003 in Minneapolis, MN. In this paper, we describe the architecture of DAMON and report on the performance of the IETF network using monitoring information collected by DAMON. Our network monitoring system is available online for use by other researchers.
REF , instead, proposed a generic architecture to monitor many parameters of network devices and protocols.
15202897
DAMON: a distributed architecture for monitoring multi-hop mobile networks
{ "venue": "2004 First Annual IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks, 2004. IEEE SECON 2004.", "journal": "2004 First Annual IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks, 2004. IEEE SECON 2004.", "mag_field_of_study": [ "Computer Science" ] }
Data warehousing and on-line analytical processing (OLAP) are essential elements of decision support, which has increasingly become a focus of the database industry. Many commercial products and services are now available, and all of the principal database management system vendors now have offerings in these areas. Decision support places some rather different requirements on database technology compared to traditional on-line transaction processing applications. This paper provides an overview of data warehousing and OLAP technologies, with an emphasis on their new requirements. We describe back end tools for extracting, cleaning and loading data into a data warehouse; multidimensional data models typical of OLAP; front end client tools for querying and data analysis; server extensions for efficient query processing; and tools for metadata management and for managing the warehouse. In addition to surveying the state of the art, this paper also identifies some promising research issues, some of which are related to problems that the database research community has worked on for years, but others are only just beginning to be addressed. This overview is based on a tutorial that the authors presented at
We refer to REF for an overview of the topic.
8125630
An overview of data warehousing and OLAP technology
{ "venue": "SGMD", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Despite rapid recent progress towards the development of quantum computers capable of providing computational advantages over classical computers, it seems likely that such computers will, initially at least, be required to run in a hybrid quantum-classical regime. This realisation has led to interest in hybrid quantum-classical algorithms allowing, for example, quantum computers to solve large problems despite having very limited numbers of qubits. Here we propose a hybrid paradigm for quantum annealers with the goal of mitigating a different limitation of such devices: the need to embed problem instances within the (often highly restricted) connectivity graph of the annealer. This embedding process can be costly to perform and may destroy any computational speedup. In order to solve many practical problems, it is moreover necessary to perform many, often related, such embeddings. We will show how, for such problems, a raw speedup that is negated by the embedding time can nonetheless be exploited to give a real speedup. As a proof-of-concept example we present an in-depth case study of a simple problem based on the maximum weight independent set problem. Although we do not observe a quantum speedup experimentally, the advantage of the hybrid approach is robustly verified, showing how a potential quantum speedup may be exploited and encouraging further efforts to apply the approach to problems of more practical interest.
More recently, Abbott et al. REF suggested that the search for an embedding of the problem into the annealer topology can be critical in the quest to achieve a quantum speed-up with a hybrid quantum-classical algorithm.
209951889
A Hybrid Quantum-Classical Paradigm to Mitigate Embedding Costs in Quantum Annealing
{ "venue": "International Journal of Quantum Information 17(5), 1950042 (2019)", "journal": null, "mag_field_of_study": [ "Physics", "Computer Science" ] }
An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches on the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems that already incorporate known techniques such as dropout. Our ensemble model using different attention architectures yields a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker. 1
In REF , attention was used to allow the model to attend to a subset of the source words in the language translation task.
1998416
Effective Approaches to Attention-based Neural Machine Translation
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Pfinder is a real-time system for tracking and interpretation of people. It runs on a standard SGI Indy computer, and has performed reliably on thousands of people in many different physical locations. The system uses a multiclass statistical model of color and shape to segment a person from a background scene, and implements heuristics which can find and track people's head and hands in a wide range of viewing conditions. Pfinder produces a real-time representation of a user useful for applications such as wireless interfaces, video databases, and low-bandwidth coding, without cumbersome wires or attached sensors.
In REF , Wren et al. demonstrate the system Pfinder, which employs a statistical model of color and shape to obtain a 2D representation of the head and hands.
67443912
Pfinder: real-time tracking of the human body
{ "venue": "Other Conferences", "journal": null, "mag_field_of_study": [ "Engineering", "Computer Science" ] }
Abstract-Reinforcement learning can enable complex, adaptive behavior to be learned automatically for autonomous robotic platforms. However, practical deployment of reinforcement learning methods must contend with the fact that the training process itself can be unsafe for the robot. In this paper, we consider the specific case of a mobile robot learning to navigate an a priori unknown environment while avoiding collisions. In order to learn collision avoidance, the robot must experience collisions at training time. However, high-speed collisions, even at training time, could damage the robot. A successful learning method must therefore proceed cautiously, experiencing only low-speed collisions until it gains confidence. To this end, we present an uncertainty-aware model-based learning algorithm that estimates the probability of collision together with a statistical estimate of uncertainty. By formulating an uncertainty-dependent cost function, we show that the algorithm naturally chooses to proceed cautiously in unfamiliar environments, and increases the velocity of the robot in settings where it has high confidence. Our predictive model is based on bootstrapped neural networks using dropout, allowing it to process raw sensory inputs from high-bandwidth sensors such as cameras. Our experimental evaluation demonstrates that our method effectively minimizes dangerous collisions at training time in an obstacle avoidance task for a simulated and real-world quadrotor, and a real-world RC car. Videos of the experiments can be found at https://sites.google.com/site/probcoll.
The approach of REF develops an uncertainty-aware reinforcement learning algorithm to estimate the probability of a mobile robot colliding with an obstacle in an unknown environment.
5349381
Uncertainty-Aware Reinforcement Learning for Collision Avoidance
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed of several sememes. Since sememes are not explicit for each word, people manually annotate word sememes and form linguistic common-sense knowledge bases. In this paper, we show that word sememe information can improve word representation learning (WRL), which maps words into a low-dimensional semantic space and serves as a fundamental step for many NLP tasks. The key idea is to utilize word sememes to capture exact meanings of a word within specific contexts accurately. More specifically, we follow the framework of Skip-gram and present three sememe-encoded models to learn representations of sememes, senses and words, where we apply the attention scheme to detect word senses in various contexts. We conduct experiments on two tasks including word similarity and word analogy, and our models significantly outperform baselines. The results indicate that WRL can benefit from sememes via the attention scheme, and also confirm that our models are capable of correctly modeling sememe information.
REF claimed that using word sememe information in HowNet can improve word representation.
9471817
Improved Word Representation Learning with Sememes
{ "venue": "ACL", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Being embedded in the physical world, sensor networks present a wide range of bugs and misbehavior qualitatively different from those in most distributed systems. Unfortunately, due to resource constraints, programmers must investigate these bugs with only limited visibility into the application. This paper presents the design and evaluation of Sympathy, a tool for detecting and debugging failures in sensor networks. Sympathy has selected metrics that enable efficient failure detection, and includes an algorithm that root-causes failures and localizes their sources in order to reduce overall failure notifications and point the user to a small number of probable causes. We describe Sympathy and evaluate its performance through fault injection and by debugging an active application, ESS, in simulation and deployment. We show that for a broad class of data gathering applications, it is possible to detect and diagnose failures by collecting and analyzing a minimal set of metrics at a centralized sink. We have found that there is a tradeoff between notification latency and detection accuracy; that additional metrics traffic does not always improve notification latency; and that Sympathy's process of failure localization reduces primary failure notifications by at least 50% in most cases.
In sensor networks, REF examines simple metrics on network performance.
7165570
Sympathy for the sensor network debugger
{ "venue": "SenSys '05", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Communication security and reliability are two important issues in any network. A typical communication task in a wireless sensor network is for every sensor node to sense its local environment, and upon request, send data of interest back to a base station (BS). In this paper, a hybrid multipath scheme (H-SPREAD) to improve both the security and reliability of this task in a potentially hostile and unreliable wireless sensor network is proposed. The new scheme is based on a distributed N -to-1 multipath discovery protocol, which is able to find multiple node-disjoint paths from every sensor node to the BS simultaneously in one route discovery process. Then, a hybrid multipath data collection scheme is proposed. On the one hand, end-to-end multipath data dispersion, combined with secret sharing, enhances the security of the end-to-end data delivery in the sense that the compromise of a small number of paths will not result in the compromise of a data message in the face of adversarial nodes. On the other hand, in the face of unreliable wireless links and/or sensor nodes, alternate path routing available at each sensor node improves the reliability of each packet transmission significantly. The extensive simulation results show that the hybrid multipath scheme is very efficient in improving both the security and reliability of the data collection service seamlessly.
In their seminal paper, REF show mathematically and empirically that both the security and reliability of a wireless sensor network can be improved through multipath routing.
8211609
H-SPREAD: a hybrid multipath scheme for secure and reliable data collection in wireless sensor networks
{ "venue": "IEEE Transactions on Vehicular Technology", "journal": "IEEE Transactions on Vehicular Technology", "mag_field_of_study": [ "Computer Science" ] }
Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.
REF investigate sequence-to-sequence models that consist of a neural network encoder and decoder for machine translation.
7961699
Sequence to Sequence Learning with Neural Networks
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.
In REF , the authors present a comprehensive study of the effect of low-precision fixed-point computation on deep learning.
2547043
Deep Learning with Limited Numerical Precision
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Background: Experts consider health information technology key to improving efficiency and quality of health care. Purpose: To systematically review evidence on the effect of health information technology on quality, efficiency, and costs of health care. The authors systematically searched the Englishlanguage literature indexed in MEDLINE (1995 ( to January 2004, the Cochrane Central Register of Controlled Trials, the Cochrane Database of Abstracts of Reviews of Effects, and the Periodical Abstracts Database. We also added studies identified by experts up to April 2005. Descriptive and comparative studies and systematic reviews of health information technology. Two reviewers independently extracted information on system capabilities, design, effects on quality, system acquisition, implementation context, and costs. Data Synthesis: 257 studies met the inclusion criteria. Most studies addressed decision support systems or electronic health records. Approximately 25% of the studies were from 4 academic institutions that implemented internally developed systems; only 9 studies evaluated multifunctional, commercially developed systems. Three major benefits on quality were demonstrated: increased adherence to guideline-based care, enhanced surveillance and monitoring, and decreased medication errors. The primary domain of improvement was preventive health. The major efficiency benefit shown was decreased utilization of care. Data on another efficiency measure, time utilization, were mixed. Empirical cost data were limited. Limitations: Available quantitative research was limited and was done by a small number of institutions. Systems were heterogeneous and sometimes incompletely described. Available financial and contextual data were limited. Four benchmark institutions have demonstrated the efficacy of health information technologies in improving quality and efficiency. Whether and how other institutions can achieve similar benefits, and at what costs, are unclear. Ann Intern Med. 2006;144:742-752. www.annals.org For author affiliations, see end of text. ealth care experts, policymakers, payers, and consumers consider health information technologies, such as electronic health records and computerized provider order entry, to be critical to transforming the health care industry (1-7). Information management is fundamental to health care delivery (8) . Given the fragmented nature of health care, the large volume of transactions in the system, the need to integrate new scientific evidence into practice, and other complex information management activities, the limitations of paper-based information management are intuitively apparent. While the benefits of health information technology are clear in theory, adapting new information systems to health care has proven difficult and rates of use have been limited (9 -11). Most information technology applications have centered on administrative and financial transactions rather than on delivering clinical care (12). The Agency for Healthcare Research and Quality asked us to systematically review evidence on the costs and benefits associated with use of health information technology and to identify gaps in the literature in order to provide organizations, policymakers, clinicians, and consumers an understanding of the effect of health information technology on clinical care (see evidence report at www.ahrq .gov). 
From among the many possible benefits and costs of implementing health information technology, we focus here on 3 important domains: the effects of health information technology on quality, efficiency, and costs. METHODS We used expert opinion and literature review to develop analytic frameworks (Table) that describe the components involved with implementing health information technology, types of health information technology systems, and the functional capabilities of a comprehensive health information technology system (13). We modified a framework for clinical benefits from the Institute of Medicine's 6 aims for care (2) and developed a framework for costs using expert consensus that included measures such as initial costs, ongoing operational and maintenance costs, fraction of health information technology penetration, and productivity gains. Financial benefits were divided into monetized benefits (that is, benefits expressed in dollar terms) and nonmonetized benefits (that is, benefits that could not be directly expressed in dollar terms but could be assigned dollar values). We performed 2 searches (in November 2003 and January 2004) of the English-language literature indexed in MEDLINE (1995 to January 2004) using a broad set of terms to maximize sensitivity. (See the full list of search terms and sequence of queries in the full evidence report at www.ahrq.gov.) We also searched the Cochrane Central Register of Controlled Trials, the Cochrane Database of Abstracts of Reviews of Effects, and the Periodical Abstracts Database; hand-searched personal libraries kept by content experts and project staff; and mined bibliographies of articles and systematic reviews for citations. We asked content experts to identify unpublished literature. Finally, we asked content experts and peer reviewers to identify newly published articles up to April 2005. Two reviewers independently selected for detailed review the following types of articles that addressed the workings or implementation of a health technology system: systematic reviews, including meta-analyses; descriptive "qualitative" reports that focused on exploration of barriers; and quantitative reports. We classified quantitative reports as "hypothesis-testing" if the investigators compared data between groups or across time periods and used statistical tests to assess differences. We further categorized hypothesis-testing studies (for example, randomized and nonrandomized, controlled trials, controlled before-and-after studies) according to whether a concurrent comparison group was used. Hypothesis-testing studies without a concurrent comparison group included those using simple pre-post, time-series, and historical control designs. Remaining hypothesis-testing studies were classified as crosssectional designs and other. We classified quantitative reports as a "predictive analysis" if they used methods such as statistical modeling or expert panel estimates to predict what might happen with implementation of health information technology rather than what has happened. These studies typically used hybrid methods-frequently mixing primary data collection with secondary data collection plus expert opinion and assumptions-to make quantitative estimates for data that had otherwise not been empirically measured. Cost-effectiveness and cost-benefit studies generally fell into this group. Two reviewers independently appraised and extracted details of selected articles using standardized abstraction forms and resolved discrepancies by consensus. 
We then used narrative synthesis methods to integrate findings into descriptive summaries. Each institution that accounted for more than 5% of the total sample of 257 papers was designated as a benchmark research leader. We grouped syntheses by institution and by whether the systems were commercially or internally developed. This work was produced under Agency for Healthcare Research and Quality contract no. 2002. In addition to the Agency for Healthcare Research and Quality, this work was also funded by the Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services, and the Office of Disease Prevention and Health Promotion, U.S. Department of Health and Human Services. The funding sources had no role in the design, analysis, or interpretation of the study or in the decision to submit the manuscript for publication. Of 867 articles, we rejected 141 during initial screening: 124 for not having health information technology as the subject, 4 for not reporting relevant outcomes, and 13 for miscellaneous reasons (categories not mutually exclusive). Of the remaining 726 articles, we excluded 469 descriptive reports that did not examine barriers (Figure) . We Health information technology has been shown to improve quality by increasing adherence to guidelines, enhancing disease surveillance, and decreasing medication errors. Much of the evidence on quality improvement relates to primary and secondary preventive care. The major efficiency benefit has been decreased utilization of care. Effect on time utilization is mixed. Empirically measured cost data are limited and inconclusive. Most of the high-quality literature regarding multifunctional health information technology systems comes from 4 benchmark research institutions. Little evidence is available on the effect of multifunctional commercially developed systems. Little evidence is available on interoperability and consumer health information technology. A major limitation of the literature is its generalizability. Improving Patient Care The reports addressed the following types of primary systems: decision support aimed at providers (63%), electronic health records (37%), and computerized provider order entry (13%). Specific functional capabilities of systems that were described in reports included electronic documentation (31%), order entry (22%), results management (19%), and administrative capabilities (18%). Only 8% of the described systems had specific consumer health capabilities, and only 1% had capabilities that allowed systems from different facilities to connect with each other and share data interoperably. Most studies (n ϭ 125) assessed the effect of the systems in the outpatient setting. Of the 213 hypothesis-testing studies, 84 contained some data on costs. Several studies assessed interventions with limited functionality, such as stand-alone decision support systems (15) (16) (17) . Such studies provide limited information about issues that today's decision makers face when selecting and implementing health information technology. Thus, we preferentially highlight in the following paragraphs studies that were conducted in the United States, that had empirically measured data on multifunctional systems, and that included health information and data storage in the form of electronic documentation or order-entry capabilities. Predictive analyses were excluded. Seventy-six studies met these criteria: 54 from the 4 benchmark leaders and 22 from other institutions. 
The health information technology systems evaluated by the benchmark leaders shared many characteristics. All the systems were multifunctional and included decision support, all were internally developed by research experts at the respective academic institutions, and all had capabilities added incrementally over several years. Furthermore, most reported studies of these systems used research designs with high internal validity (for example, randomized, controlled trials). Appendix Table 1 (18 -71) (available at www.annals .org) provides a structured summary of each study from the 4 benchmark institutions. This table also includes studies that met inclusion criteria not highlighted in this synthesis (26, 27, 30, 39, 40, 53, 62, 65, 70, 71) . The data supported 5 primary themes (3 directly related to quality and 2 addressing efficiency). Implementation of a multifunctional health information technology system had the following effects: 1) increased delivery of care in adherence to guidelines and protocols, 2) enhanced capacity to perform surveillance and monitoring for disease conditions and care delivery, 3) reductions in rates of medication errors, 4) decreased utilization of care, and 5) mixed effects on time utilization. The major effect of health information technology on quality of care was its role in increasing adherence to guideline-or protocol-based care. Decision support, usually in the form of computerized reminders, was a component of all adherence studies. The decision support functions were usually embedded in electronic health records or computerized provider order-entry systems. Electronic health records systems were more frequently examined in the outpatient setting; provider order-entry systems were more often assessed in the inpatient setting. Improvements in processes of care delivery ranged from absolute increases of 5 to 66 percentage points, with most increases clustering in the range of 12 to 20 percentage points. Twelve of the 20 adherence studies examined the ef- fects of health information technology on enhancing preventive health care delivery (18, 21-25, 29, 31-33, 35, 37). Eight studies included measures for primary preventive care (18, 21-25, 31, 33), 4 studies included secondary preventive measures (29, 33, 35, 37), and 1 study assessed screening (not mutually exclusive) (32). The most common primary preventive measures examined were rates of influenza vaccination (improvement, 12 to 18 percentage points), pneumococcal vaccinations (improvement, 20 to 33 percentage points), and fecal occult blood testing (improvement, 12 to 33 percentage points) (18, 22, 24). Three studies examined the effect of health information technology on secondary preventive care for complications related to hospitalization. One clinical controlled trial that used computerized surveillance and identification of high-risk patients plus alerts to physicians demonstrated a 3.3-percentage point absolute decrease (from 8.2% to 4.9%) in a combined primary end point of deep venous thrombosis and pulmonary embolism in high-risk hospitalized patients (29). One time-series study showed a 5-percentage point absolute decrease in prevention of pressure ulcers in hospitalized patients (35), and another showed a 0.4 -percentage point absolute decrease in postoperative infections (37). 
While most evidence for health information technology-related quality improvement through enhanced adherence to guidelines focused on preventive care, other studies covered a diverse range for types of care, including hyper- The second theme showed the capacity of health information technology to improve quality of care through clinical monitoring based on large-scale screening and aggregation of data. These studies demonstrated how health information technology can support new ways of delivering care that are not feasible with paper-based information management. In one study, investigators screened more than 90 000 hospital admissions to identify the frequency of adverse drug events (43); they found a rate of 2.4 events/ 100 admissions. Adverse drug events were associated with an absolute increase in crude mortality of 2.45 percentage points and an increase in costs of $2262, primarily due to a 1.9-day increase in length of stay. Two studies from Evans and colleagues (44, 45) reported using an electronic health record to identify adverse drug events, examine their cause, and develop programs to decrease their frequency. In the first study, the researchers designed interventions on the basis of electronic health record surveillance that increased absolute adverse drug event identification by 2.36 percentage points (from 0.04% to 2.4%) and decreased absolute adverse drug event rates by 5.4 percentage points (from 7.6% to 2.2%) (44). The report did not describe details of the interventions used to reduce adverse drug events. In the second study, the researchers used electronic health record surveillance of nearly 61 000 inpatient admissions to determine that adverse drug events cause a 1.9-day increase in length of hospital stay and an increase of $1939 in charges (45). Three studies from the Veterans Affairs system examined the surveillance and data aggregation capacity of health information technology systems for facilitating quality-of-care measurement. Automated quality measurement was found to be less labor intensive, but 2 of the studies found important methodologic limitations that affected the validity of automated quality measurement. For example, 1 study found high rates of false-positive results with use of automated quality measurement and indicated that such approaches may yield biased results (41). The second study found that automated queries from computerized disease registries underestimated completion of quality-ofcare processes when compared with manual chart abstraction of electronic health records and paper chart sources (42). Finally, 2 studies examined the role of health information technology surveillance systems in identifying infectious disease outbreaks. The first study found that use of a county-based electronic system for reporting results led to a 29 -percentage point absolute increase in cases of shigellosis identified during an outbreak and a 2.5-day decrease in identification and public health reporting time (38). The second study showed a 14 -percentage point absolute increase in identification of hospital-acquired infections and a 65% relative decrease in identification time (from 130 to 46 hours) (46). The third health information technology-mediated effect on quality was a reduction in medication errors. 
Two studies of computerized provider order entry from LDS Hospital (51, 52) showed statistically significant decreases in adverse drug events, and a third study by Bates and colleagues (49) showed a non-statistically significant trend toward decreased drug events and a large decrease in medication errors. The first LDS Hospital study used a cohort with historical control design to evaluate the effect of computerized alerts on antibiotic use (52). Compared with a 2-year preintervention period, many statistically significant improvements were noted, including a decrease in antibiotic-associated adverse drug events (from 28 to 4 events), decreased length of stay (from 13 to 10 days), and a reduction in total hospital costs (from $35 283 to $26 315). The second study from LDS Hospital demonstrated a 0.6 -percentage point (from 0.9% to 0.3%) absolute decrease in antibiotic-associated adverse drug events (51). Bates and colleagues examined adverse events and showed a 17% non-statistically significant trend toward a decrease in these events (49). Although this outcome did not reach statistical significance, adverse drug events were not the main focus of the evaluation. The primary end point for this study was a surrogate end point for adverse drug events: nonintercepted serious medication errors. This end point demonstrated a statistically significant 55% relative decrease. The results from this trial were further supported by a second, follow-up study by the same researchers examining the long-term effect of the implemented system (48). After the first published study, the research team analyzed adverse drug events not prevented by computerized provider order entry, and the level of decision support was increased. This second study used a time-series design and found an 86% relative decrease in nonintercepted serious medication errors. Health information technology systems also decreased medication errors by improving medication dosing. Improvements in dosing ranged from 12% to 21%; the primary outcome examined was doses prescribed within the recommended range and centered on antibiotics and anticoagulation (47, 50, 51). Studies examined 2 primary types of technology-related effects on efficiency: utilization of care and provider time. Eleven studies examined the effect of health information technology systems on utilization of care. Eight showed decreased rates of health services utilization (54 -61); computerized provider order-entry systems that provided decision support at the point of care were the primary interventions leading to decreased utilization. Types of decision support included automated calculation of pretest probability for diagnostic tests, display of previous test results, display of laboratory test costs, and computerized reminders. Absolute decreases in utilization rates ranged from 8.5 to 24 percentage points. The primary services affected were laboratory and radiology testing. Most studies did not judge the appropriateness of the decrease in service utilization but instead reported the effect of health information technology on the level of utilization. Most studies did not directly measure cost savings. Instead, researchers translated nonmonetized decreases in services into monetized estimates through the average cost of the examined service at that institution. 
One large study from Tierney and colleagues examined direct total costs per admission as its main end point and found a 12.7% absolute decrease (from $6964 to $6077) in costs associated with a 0.9-day decrease in length of stay (57). The effect of health information technology on provider time was mixed. Two studies from the Regenstrief Institute examining inpatient order entry showed increases in physician time related to computer use (57, 64). Another study on outpatient use of electronic health records from Partners Health Care showed a clinically negligible increase in clinic visit time of 0.5 minute (67). Studies suggested that time requirements decreased as physicians grew used to the systems, but formal long-term evaluations were not available. Two studies showed slight decreases in documentation-related nursing time (68, 69) that were due to the streamlining of workflow. One study examined overall time to delivery of care and found an 11% decrease in time to deliver treatment through the use of computerized order entry with alerts to physician pagers (66). Data on costs were more limited than the evidence on quality and efficiency. Sixteen of the 54 studies contained some data on costs (20, 28, 31, 36, 43, 47, 50 -52, 54 -58, 63, 71). Most of the cost data available from the institutional leaders were related to changes in utilization of services due to health information technology. Only 3 studies had cost data on aspects of system implementation or maintenance. Two studies provided computer storage costs; these were more than 20 years old, however, and therefore were of limited relevance (28, 58). The third reported that system maintenance costs were $700 000 (31). Because these systems were built, implemented, and evaluated incrementally over time, and in some cases were supported by research grants, it is unlikely that total development and implementation costs could be calculated accurately and in full detail. Appendix Table 2 (available at www.annals.org) summarizes the 22 studies (72-93) from the other institutions. Most of these studies evaluated internally developed systems in academic institutions. The types of benefits found in these studies were similar to those demonstrated in benchmark institutions, although an additional theme was related to initial implementation costs. Unlike most studies from the benchmark institutions, which used randomized or controlled clinical trial designs, the most common designs of the studies from other institutions were pre-post and time-series designs that lacked a concurrent comparison group. Thirteen of the 22 studies evaluated internally developed systems (72-84). Only 9 evaluated commercial health information technology systems. Because many decision makers are likely to consider implementing a commercially developed system rather than internally developing their own, we detail these 9 studies in the following paragraphs. Two studies examined the effect of systems on utilization of care (85, 86). Both were set in Kaiser Permanente's Pacific Northwest region and evaluated the same electronic health record system (Epic Systems Corp., Verona, Wisconsin) at different periods through time-series designs. One study (1994 -1997) supported the findings of the benchmark institutions, showing decreased utilization of 2 radiology tests after implementation of electronic health records (85), while the second study (2000 -2004) showed no conclusive decreases in utilization of radiology and laboratory services (86). 
Unlike the reports from the benchmark institutions, this second study also showed no statistically significant improvements in 3 process measures of quality. It did find a statistically significant decrease in age-adjusted total office visits per member: a relative decrease of 9% in year 2 after implementation of the electronic health record. Telephone-based care showed a relative increase of 65% over the same time. A third study evaluated this electronic health record and focused on efficiency; it showed that physicians took 30 days to return to their baseline level of productivity after implementation and that visit time increased on average by 2 minutes per encounter (87). Two studies that were part of the same randomized trial from Rollman and colleagues, set at the University of Pittsburgh, examined the use of an electronic health record (MedicaLogic Corp., Beaverton, Oregon) with decision support in improving care for depression (88, 89). The first study evaluated electronic health record-based monitoring to enhance depression screening. As in the monitoring studies from the benchmark institutions, electronic health record screening was found to support new ways of organizing care. Physicians agreed with 65% of the computerscreened diagnoses 3 days after receiving notification of the results. In the second phase of the trial, 2 different electronic health record-based decision support interventions were implemented to improve adherence to guidelinebased care for depression. Unlike the effects on adherence seen in the benchmark institutions, neither intervention showed statistically significant differences when compared with usual care. Two pre-post studies from Ohio State University evaluated the effect of a commercial computerized order-entry system (Siemens Medical Solutions Health Services Corp., Malvern, Pennsylvania) on time utilization and medication errors (90, 91). As in the benchmark institutions, time to care dramatically decreased compared with the period before the order-entry system was implemented. Relative decreases in other outcomes were as follows: medication turnaround time, 64% (90) and 73% (91); radiology completion time, 43% (90) and 24% (91); and results reporting time, 25% (90). Use of computerized provider order entry had large effects on medication errors in both studies. Before implementation, 11.3% (90) and 13% (91) of orders had transcription errors; afterward, these errors were entirely eliminated. One study assessed length of stay and found that it decreased 5%; total cost of hospitalization, however, showed no statistically significant differences (90). In contrast, a third study examining the effect of order entry on nurse documentation time showed no benefits (92). In contrast to all previous studies on computer orderentry systems, a study by Koppel and colleagues used a mixed quantitative-qualitative approach to investigate the possible role of such a system (Eclipsys Corp., Boca Raton, Florida) in facilitating medication prescribing errors (93). Twenty-two types of medication error risks were found to be facilitated by computer order entry, relating to 2 basic causes: fragmentation of data and flaws in human-machine interface. These 9 studies infrequently reported or measured data on costs and contextual factors. Two reported information on costs (90, 92). Neither described the total initial costs of purchasing or implementing the system being evaluated. 
Data on contextual factors such as reimbursement mix, degree of capitation, and barriers encountered during implementation were scant; only 2 studies included such information. The study by Koppel and colleagues (93) included detailed contextual information related to human factors. One health record study reported physician classroom training time of 16 hours before implementation (87). Another order-entry study reported that nurses received 16 hours of training, clerical staff received 8 hours, and physicians received 2 to 4 hours (91). To date, the health information technology literature has shown many important quality-and efficiency-related benefits as well as limitations relating to generalizability and empirical data on costs. Studies from 4 benchmark leaders demonstrate that implementing a multifunctional system can yield real benefits in terms of increased delivery of care based on guidelines (particularly in the domain of preventive health), enhanced monitoring and surveillance activities, reduction of medication errors, and decreased rates of utilization for potentially redundant or inappropriate care. However, the method used by the benchmark leaders to get to this point-the incremental development over many years of an internally designed system led by academic research champions-is unlikely to be an option for most institutions contemplating implementation of health information technology. Studies from these 4 benchmark institutions have demonstrated the efficacy of health information technology for improving quality and efficiency. However, the effectiveness of these technologies in the practice settings where most health care is delivered remains less clear. Effectiveness and generalizability are of particular importance in this field because health information technologies are tools that support the delivery of care-they do not, in and of themselves, alter states of disease or of health. As such, how these tools are used and the context in which they are implemented are critical (94 -96). For providers considering a commercially available system installed as a package, only a limited body of literature is available to inform decision making. The available evidence comes mainly from time-series or pre-post studies, derives from a staff-model managed care organization or academic health centers, and concerns a limited number of process measures. These data, in general, support the findings of studies from the benchmark institutions on the effect of health information technology in reducing utilization and medication errors. However, they do not support the findings of increased adherence to protocol-based care. Published evidence of the information needed to make informed decisions about acquiring and implementing health information technology in community settings is nearly nonexistent. For example, potentially important evidence related to initial capital costs, effect on provider productivity, resources required for staff training (such as time and skills), and workflow redesign is difficult to locate in the peer-reviewed literature. Also lacking are key data on financial context, such as degree of capitation, which has been suggested by a model to be an important factor in defining the business case for electronic health record use (97). Several systematic reviews related to health information technology have been done. 
However, they have been limited to specific systems, such as computerized provider order entry (98); capabilities, such as computerized reminders (99, 100); or clinical specialty (101). No study to date has reviewed a broad range of health information technologies. In addition, to make our findings as relevant as possible to the broad range of stakeholders interested in health information technology, we developed a Webhosted database of our research findings. This database allows different stakeholders to find the literature most relevant to their implementation circumstances and their information needs. This study has several important limitations. The first relates to the quantity and scope of the literature. Although we did a comprehensive search, we identified only a limited set of articles with quantitative data. In many important domains, we found few studies. This was particularly true of health information technology applications relevant to consumers and to interoperability, areas critical to the capacity for health information technology to fundamentally change health care. A second limitation relates to synthesizing the effect of a broad range of technologies. We attempted to address this limitation by basing our work on well-defined analytic frameworks and by identifying not only the systems used but also their functional capabilities. A third relates to the heterogeneity in reporting. Descriptions of health information technology systems were often very limited, making it difficult to assess whether some system capabilities were absent or simply not reported. Similarly, limited information was reported on the overall implementation process and organizational context. This review raises many questions central to a broad range of stakeholders in health care, including providers, consumers, policymakers, technology experts, and private sector vendors. Adoption of health information technology has become one of the few widely supported, bipartisan initiatives in the fragmented, often contentious health care sector (102). Currently, numerous pieces of state and federal legislation under consideration seek to expand adoption of health information technology (103-105). Health care improvement organizations such as the Leapfrog Group are strongly advocating adoption of health information technology as a key aspect of health care reform. Policy discussions are addressing whether physician reimbursement should be altered, with higher reimbursements for those who use health information technology (106). Two critical questions that remain are 1) what will be the benefits of these initiatives and 2) who will pay and who will benefit? Regarding the former, a disproportionate amount of literature on the benefits that have been realized comes from a small set of early-adopter institutions that implemented internally developed health information technology systems. These institutions had considerable expertise in health information technology and implemented systems over long periods in a gradual, iterative fashion. Missing from this literature are data on how to implement multifunctional health information technology systems in other health care settings. Internally developed systems are unlikely to be feasible as models for broad-scale use of health information technology. Most practices and organizations will adopt a commercially developed health information technology system, and, given logistic constraints and budgetary issues, their implementation cycles will be much shorter. 
The limited quantitative and qualitative description of the implementation context significantly hampers how the literature on health information technology can inform decision making by a broad array of stakeholders interested in this field. With respect to the business case for health information technology, we found little information that could empower stakeholders to judge for themselves the financial effects of adoption. For instance, basic cost data needed to determine the total cost of ownership of a system or of the return on investment are not available. Without these data, the costs of health information technology systems can be estimated only through complex predictive analysis and statistical modeling methods, techniques generally not available outside of research. One of the chief barriers to adoption of health information technology is the misalignment of incentives for its use (107, 108). Specifying policies to address this barrier is hindered by the lack of cost data. This review suggests several important future directions in the field. First, additional studies need to evaluate commercially developed systems in community settings, and additional funding for such work may be needed. Second, more information is needed regarding the organizational change, workflow redesign, human factors, and project management issues involved with realizing benefits from health information technology. Third, a high priority must be the development of uniform standards for the reporting of research on implementation of health information technology, similar to the Consolidated Standards of Reporting Trials (CONSORT) statements for randomized, controlled trials and the Quality of Reporting of Meta-analyses (QUORUM) statement for meta-analyses (109, 110). Finally, additional work is needed on interoperability and consumer health technologies, such as the personal health record. The advantages of health information technology over paper records are readily discernible. However, without better information, stakeholders interested in promoting or considering adoption may not be able to determine what benefits to expect from health information technology use, how best to implement the system in order to maximize the value derived from their investment, or how to direct policy aimed at improving the quality and efficiency delivered by the health care sector as a whole.
Another study by Chaudhry et al. REF systematically reviews evidence on the effect of health information technology on the quality, efficiency, and costs of health care.
5573811
Systematic Review: Impact of Health Information Technology on Quality, Efficiency, and Costs of Medical Care
{ "venue": "Annals of Internal Medicine", "journal": "Annals of Internal Medicine", "mag_field_of_study": [ "Medicine" ] }
User reviews of mobile apps often contain complaints or suggestions which are valuable for app developers to improve user experience and satisfaction. However, due to the large volume and noisy nature of those reviews, manually analyzing them for useful opinions is inherently challenging. To address this problem, we propose MARK, a keyword-based framework for semi-automated review analysis. MARK allows an analyst to describe his interests in one or more mobile apps by a set of keywords. It then finds and lists the reviews most relevant to those keywords for further analysis. It can also draw the trends over time of those keywords and detect their sudden changes, which might indicate the occurrences of serious issues. To help analysts describe their interests more effectively, MARK can automatically extract keywords from raw reviews and rank them by their associations with negative reviews. In addition, based on a vector-based semantic representation of keywords, MARK can divide a large set of keywords into more cohesive subsets, or suggest keywords similar to the selected ones.
Contrary to the fine-grained extraction of app features, the approach of REF extracts all potential keywords from user reviews and ranks them based on the review rating and occurrence frequency.
579871
Mining User Opinions in Mobile App Reviews: A Keyword-Based Approach (T)
{ "venue": "2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE)", "journal": "2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE)", "mag_field_of_study": [ "Computer Science" ] }
Human eye-tracking studies have shown that gaze fixations are biased toward the center of natural scene stimuli ("center bias"). This bias contaminates the evaluation of computational models of attention and oculomotor behavior. Here we recorded eye movements from 17 participants watching 40 MTV-style video clips (with abrupt scene changes every 2-4 s), to quantify the relative contributions of five causes of center bias: photographer bias, motor bias, viewing strategy, orbital reserve, and screen center. Photographer bias was evaluated by five naive human raters and correlated with eye movements. The frequently changing scenes in MTV-style videos allowed us to assess how motor bias and viewing strategy affected center bias across time. In an additional experiment with 5 participants, videos were displayed at different locations within a large screen to investigate the influences of orbital reserve and screen center. Our results demonstrate quantitatively for the first time that center bias is correlated strongly with photographer bias and is influenced by viewing strategy at scene onset, while orbital reserve, screen center, and motor bias contribute minimally. We discuss methods to account for these influences to better assess computational models of visual attention and gaze using natural scene stimuli.
For instance, Tseng et al. REF showed a contribution of photographer bias, viewing strategy, and to a lesser extent, motor, re-centering, and screen center biases to the center bias.
13871871
Quantifying center bias of observers in free viewing of dynamic natural scenes
{ "venue": "Journal of vision", "journal": "Journal of vision", "mag_field_of_study": [ "Psychology", "Medicine" ] }
While evolutionary algorithms (EAs) have long offered an alternative approach to optimization, in recent years backpropagation through stochastic gradient descent (SGD) has come to dominate the fields of neural network optimization and deep learning. One hypothesis for the absence of EAs in deep learning is that modern neural networks have become so high dimensional that evolution with its inexact gradient cannot match the exact gradient calculations of backpropagation. Furthermore, the evaluation of a single individual in evolution on the big data sets now prevalent in deep learning would present a prohibitive obstacle towards efficient optimization. This paper challenges these views, suggesting that EAs can be made to run significantly faster than previously thought by evaluating individuals only on a small number of training examples per generation. Surprisingly, using this approach with only a simple EA (called the limited evaluation EA or LEEA) is competitive with the performance of the state-of-the-art SGD variant RMSProp on several benchmarks with neural networks with over 1,000 weights. More investigation is warranted, but these initial results suggest the possibility that EAs could be the first viable training alternative for deep learning outside of SGD, thereby opening up deep learning to all the tools of evolutionary computation.
In REF , a Limited Evaluation Evolutionary Algorithm (LEEA) is applied to optimize the weights of the network.
13606928
Simple Evolutionary Optimization Can Rival Stochastic Gradient Descent in Neural Networks
{ "venue": "GECCO '16", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-The problem of automatically matching composite sketches to facial photographs is addressed in this paper. Previous research on sketch recognition focused on matching sketches drawn by professional artists who either looked directly at the subjects (viewed sketches) or used a verbal description of the subject's appearance as provided by an eyewitness (forensic sketches). Unlike sketches hand drawn by artists, composite sketches are synthesized using one of the several facial composite software systems available to law enforcement agencies. We propose a component-based representation (CBR) approach to measure the similarity between a composite sketch and mugshot photograph. Specifically, we first automatically detect facial landmarks in composite sketches and face photos using an active shape model (ASM). Features are then extracted for each facial component using multiscale local binary patterns (MLBPs), and per component similarity is calculated. Finally, the similarity scores obtained from individual facial components are fused together, yielding a similarity score between a composite sketch and a face photo. Matching performance is further improved by filtering the large gallery of mugshot images using gender information. Experimental results on matching 123 composite sketches against two galleries with 10,123 and 1,316 mugshots show that the proposed method achieves promising performance (rank-100 accuracies of 77.2% and 89.4%, respectively) compared to a leading commercial face recognition system (rank-100 accuracies of 22.8% and 52.0%) and densely sampled MLBP on holistic faces (rank-100 accuracies of 27.6% and 10.6%). We believe our prototype system will be of great value to law enforcement agencies in apprehending suspects in a timely fashion.
Han et al. REF proposed a component-based framework for matching composite sketches to face photos.
13547735
Matching Composite Sketches to Face Photos: A Component-Based Approach
{ "venue": "IEEE Transactions on Information Forensics and Security", "journal": "IEEE Transactions on Information Forensics and Security", "mag_field_of_study": [ "Computer Science" ] }
The problem of anomaly detection has been studied for a long time. In short, anomalies are abnormal or unlikely things. In financial networks, thieves and illegal activities are often anomalous in nature. Members of a network want to detect anomalies as soon as possible to prevent them from harming the network's community and integrity. Many Machine Learning techniques have been proposed to deal with this problem; some results appear to be quite promising but there is no obvious superior method. In this paper, we consider anomaly detection particular to the Bitcoin transaction network. Our goal is to detect which users and transactions are the most suspicious; in this case, anomalous behavior is a proxy for suspicious behavior. To this end, we use three unsupervised learning methods including k-means clustering, Mahalanobis distance, and Unsupervised Support Vector Machine (SVM) on two graphs generated by the Bitcoin transaction network: one graph has users as nodes, and the other has transactions as nodes.
Thai T. Pham and S. Lee REF propose a method for detecting anomalies in the Bitcoin transaction network by identifying which users and transactions are the most suspicious.
16069399
Anomaly Detection in Bitcoin Network Using Unsupervised Learning Methods
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
Abstractive: REF first applied neural networks to text summarization, using a local attention-based model to generate each word of the summary conditioned on the input sentence.
1918428
A Neural Attention Model for Abstractive Sentence Summarization
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Networks have in recent years emerged as an invaluable tool for describing and quantifying complex systems in many branches of science [1, 2, 3]. Recent studies suggest that networks often exhibit hierarchical organization, where vertices divide into groups that further subdivide into groups of groups, and so forth over multiple scales. In many cases these groups are found to correspond to known functional units, such as ecological niches in food webs, modules in biochemical networks (protein interaction networks, metabolic networks, or genetic regulatory networks), or communities in social networks [4, 5, 6, 7] . Here we present a general technique for inferring hierarchical structure from network data and demonstrate that the existence of hierarchy can simultaneously explain and quantitatively reproduce many commonly observed topological properties of networks, such as right-skewed degree distributions, high clustering coefficients, and short path lengths. We further show that knowledge of hierarchical structure can be used to predict missing connections in partially known networks with high accuracy, and for more general network structures than competing techniques [8] . Taken together, our results suggest that hierarchy is a central organizing principle of complex networks, capable of offering insight into many network phenomena. A great deal of recent work has been devoted to the study of clustering and community structure in networks [5, 6, 9, 10, 11] . Hierarchical structure goes beyond simple clustering, however, by explicitly including organization at all scales in a network simultaneously. Conventionally, hierarchical structure is represented by a tree or dendrogram in which closely related pairs of vertices have lowest common ancestors that are lower in the tree than those of more distantly related pairs-see Fig. 1 . We expect the probability of a connection between two vertices to depend on their degree of relatedness. Structure of this type can be modelled mathematically using a probabilistic approach in which we endow each internal node r of the dendrogram with a probability p r and then connect each pair of vertices for whom r is the lowest common ancestor independently with probability p r (Fig. 1) . This model, which we call a hierarchical random graph, is similar in spirit (although different in realization) to the treebased models used in some studies of network search and navigation [12, 13] . Like most work on community structure, it * This paper was published as Nature 453, 98 -101 (2008); doi:10.1038/nature06830. assumes that communities at each level of organization are disjoint. Overlapping communities have occasionally been studied (see, for example [14] ) and could be represented using a more elaborate probabilistic model, but as we discuss below the present model already captures many of the structural features of interest. Given a dendrogram and a set of probabilities p r , the hierarchical random graph model allows us to generate artificial networks with a specified hierarchical structure, a procedure that might be useful in certain situations. Our goal here, however, is a different one. We would like to detect and analyze the hierarchical structure, if any, of networks in the real world. We accomplish this by fitting the hierarchical model to observed network data using the tools of statistical inference, combining a maximum likelihood approach possible dendrograms. 
This technique allows us to sample hierarchical random graphs with probability proportional to the likelihood that they generate the observed network. To obtain the results described below we combine information from a large number of such samples, each of which is a reasonably likely model of the data. The success of this approach relies on the flexible nature of our hierarchical model, which allows us to fit a wide range of network structures. The traditional picture of communities or modules in a network, for example, corresponds to connections that are dense within groups of vertices and sparse between them-a behaviour called "assortativity" in the literature [17] . The hierarchical random graph can capture behaviour of this kind using probabilities p r that decrease as we move higher up the tree. Conversely, probabilities that increase as we move up the tree correspond to "disassortative" structures in which vertices are less likely to be connected on small scales than on large ones. By letting the p r values vary arbitrarily throughout the dendrogram, the hierarchical random graph can capture both assortative and disassortative structure, as well as arbitrary mixtures of the two, at all scales and in all parts of the network. To demonstrate our method we have used it to construct hierarchical decompositions of three example networks drawn from disparate fields: the metabolic network of the spirochete Treponema pallidum [18] , a network of associations between terrorists [19] , and a food web of grassland species [20] . To test whether these decompositions accurately capture the networks' important structural features, we use the sampled dendrograms to generate new networks, different in detail from the originals but, by definition, having similar hierarchical structure (see the Supplementary Information for more details). We find that these "resampled" networks match the statistical properties of the originals closely, including their degree distributions, clustering coefficients, and distributions of shortest path lengths between pairs of vertices, despite the fact that none of these properties is explicitly represented in the hierarchical random graph (Table I , and Fig. S3 in the Supplementary Information). Thus it appears that a network's hierarchical structure is capable of explaining a wide variety of other network features as well. The dendrograms produced by our method are also of interest in themselves, as a graphical representation and summary of the hierarchical structure of the observed network. As dis- . Note that in several cases, a set of parasitoids is grouped into a disassortative community by the algorithm, not because they prey on each other, but because they prey on the same herbivore. cussed above, our method can generates not just a single dendrogram but a set of dendrograms, each of which is a good fit to the data. From this set we can, using techniques from phylogeny reconstruction [21] , create a single consensus dendrogram, which captures the topological features that appear consistently across all or a large fraction of the dendrograms and typically represents a better summary of the network's structure than any individual dendrogram. Figure 2a shows such a consensus dendrogram for the grassland species network, which clearly reveals communities and sub-communities of plants, herbivores, parasitoids, and hyper-parasitoids. Another application of the hierarchical decomposition is the prediction of missing interactions in networks. 
In many settings, the discovery of interactions in a network requires significant experimental effort in the laboratory or the field. As a result, our current pictures of many networks are sub-3 stantially incomplete [22, 23, 24, 25, 26, 27, 28 ]. An attractive alternative to checking exhaustively for a connection between every pair of vertices in a network is to try to predict, in advance and based on the connections already observed, which vertices are most likely to be connected, so that scarce experimental resources can be focused on testing for those interactions. If our predictions are good, we can in this way reduce substantially the effort required to establish the network's topology. The hierarchical decomposition can be used as the basis for an effective method of predicting missing interactions as follows. Given an observed but incomplete network, we generate as described above a set of hierarchical random graphsdendrograms and the associated probabilities p r -that fit that network. Then we look for pairs of vertices that have a high average probability of connection within these hierarchical random graphs but which are unconnected in the observed network. These pairs we consider the most likely candidates for missing connections. (Technical details of the procedure are given in the Supplementary Information.) We demonstrate the method using our three example networks again. For each network we remove a subset of connections chosen uniformly at random and then attempt to predict, based on the remaining connections, which ones have been removed. A standard metric for quantifying the accuracy of prediction algorithms, commonly used in the medical and machine learning communities, is the AUC statistic, which is equivalent to the area under the receiver-operating characteristic (ROC) curve [29] . In the present context, the AUC statistic can be interpreted as the probability that a randomly chosen missing connection (a true positive) is given a higher score by our method than a randomly chosen pair of unconnected vertices (a true negative). Thus, the degree to which the AUC exceeds 1/2 indicates how much better our predictions are than chance. Figure 3 shows the AUC statistic for the three networks as a function of the fraction of the connections known to the algorithm. For all three networks our algorithm does far better than chance, indicating that hierarchy is a strong general predictor of missing structure. It is also instructive to compare the performance of our method to that of other methods for link prediction [8] . Previously proposed methods include assuming that vertices are likely to be connected if they have many common neighbours, if there are short paths between them, or if the product of their degrees is large. These approaches work well for strongly assortative networks such as the collaboration and citation networks [8] and for the metabolic and terrorist networks studied here (Fig. 3a,b) . Indeed, for the metabolic network the shortest-path heuristic performs better than our algorithm. However, these simple methods can be misleading for networks that exhibit more general types of structure. In food webs, for instance, pairs of predators often share prey species, but rarely prey on each other. In such situations a commonneighbour or shortest-path-based method would predict connections between predators where none exist. The hierarchical model, by contrast, is capable of expressing both assortative and disassortative structure and, as Fig. 
3c shows, gives substantially better predictions for the grassland network. (In- deed, in Fig. 2b there are several groups of parasitoids that our algorithm has grouped together in a disassortative community, in which they prey on the same herbivore but not on each other.) The hierarchical method thus makes accurate predictions for a wider range of network structures than the previous methods. In the applications above, we have assumed for simplicity 4 that there are no false positives in our network data, i.e., that every observed edge corresponds to a real interaction. In networks where false positives may be present, however, they too could be predicted using the same approach: we would simply look for pairs of vertices that have a low average probability of connection within the hierarchical random graph but which are connected in the observed network. The method described here could also be extended to incorporate domain-specific information, such as species morphological or behavioural traits for food webs [28] or phylogenetic or binding-domain data for biochemical networks [23] , by adjusting the probabilities of edges accordingly. As the results above show, however, we can obtain good predictions even in the absence of such information, indicating that topology alone can provide rich insights. In closing, we note that our approach differs crucially from previous work on hierarchical structure in networks [1, 4, 5, 6, 7, 9, 11, 30] in that it acknowledges explicitly that most realworld networks have many plausible hierarchical representations of roughly equal likelihood. Previous work, by contrast, has typically sought a single hierarchical representation for a given network. By sampling an ensemble of dendrograms, our approach avoids over-fitting the data and allows us to explain many common topological features, generate resampled networks with similar structure to the original, derive a clear and concise summary of a network's structure via its consensus dendrogram, and accurately predict missing connections in a wide variety of situations.
Clauset et al. REF studied discovering hierarchy in undirected graphs: given a dendrogram, each pair of vertices is connected independently, as in the Erdős-Rényi model, with a probability that depends on their lowest common ancestor in the dendrogram.
278058
Hierarchical structure and the prediction of missing links in networks
{ "venue": "Nature 453, 98 - 101 (2008)", "journal": null, "mag_field_of_study": [ "Biology", "Mathematics", "Physics", "Medicine" ] }
Abstract-Web services composition has been an active research area over the last few years. However, the technology is still not mature yet and several research issues need to be addressed. In this paper, we describe the design of CCAP, a system that provides tools for adaptive service composition and provisioning. We introduce a composition model where service context and exceptions are configurable to accommodate needs of different users. This allows for reusability of a service in different contexts and achieves a level of adaptiveness and contextualization without recoding and recompiling of the overall composed services. The execution semantics of the adaptive composite service is provided by an event-driven model. This execution model is based on Linda Tuple Spaces and supports real-time and asynchronous communication between services. Three core services, coordination service, context service, and event service, are implemented to automatically schedule and execute the component services, and adapt to user configured exceptions and contexts at run time. The proposed system provides an efficient and flexible support for specifying, deploying, and accessing adaptive composite services. We demonstrate the benefits of our system by conducting usability and performance studies.
CCAP REF is a system that provides support for configurable and adaptive service compositions aware of user context and different needs.
15576135
Configurable Composition and Adaptive Provisioning of Web Services
{ "venue": "IEEE Transactions on Services Computing", "journal": "IEEE Transactions on Services Computing", "mag_field_of_study": [ "Computer Science" ] }
Distantly-supervised Relation Extraction (RE) methods train an extractor by automatically aligning relation instances in a Knowledge Base (KB) with unstructured text. In addition to relation instances, KBs often contain other relevant side information, such as aliases of relations (e.g., founded and co-founded are aliases for the relation founderOfCompany). RE models usually ignore such readily available side information. In this paper, we propose RESIDE, a distantly-supervised neural relation extraction method which utilizes additional side information from KBs for improved relation extraction. It uses entity type and relation alias information for imposing soft constraints while predicting relations. RE-SIDE employs Graph Convolution Networks (GCN) to encode syntactic information from text and improves performance even when limited side information is available. Through extensive experiments on benchmark datasets, we demonstrate RESIDE's effectiveness. We have made RESIDE's source code available to encourage reproducible research.
REF proposed RESIDE, a distantly supervised neural relation extraction method that utilizes additional side information from knowledge bases to improve relation extraction.
53064621
RESIDE: Improving Distantly-Supervised Neural Relation Extraction using Side Information
{ "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at
U-Net REF is an end-to-end architecture for semantic segmentation of images; owing to its skip connections, the method won the ISBI cell tracking challenge 2015 using only 30 training images, outperforming the second-best method by a large margin.
3719281
U-Net: Convolutional Networks for Biomedical Image Segmentation
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Incorrect thread synchronization often leads to concurrency bugs that manifest nondeterministically and are difficult to detect and fix. Past work on detecting concurrency bugs has addressed the general problem in an ad-hoc fashion, focusing mostly on data races and atomicity violations. Using graphs to represent a multithreaded program execution is very natural, nodes represent static instructions and edges represent communication via shared memory. In this paper we make the fundamental observation that such basic context-oblivious graphs do not encode enough information to enable accurate bug detection. We propose context-aware communication graphs, a new kind of communication graph that encodes global ordering information by embedding communication contexts. We then build Bugaboo, a simple and generic framework that accurately detects complex concurrency bugs. Our framework collects communication graphs from multiple executions and uses invariant-based techniques to detect anomalies in the graphs. We built two versions of Bugaboo: BB-SW, which is fully implemented in software but suffers from significant slowdowns; and BB-HW, which relies on custom architecture support but has negligible performance degradation. BB-HW requires modest extensions to a commodity multicore processor and can be used in deployment settings. We evaluate both versions using applications such as MySQL, Apache, PARSEC, and several others. Our results show that Bugaboo identifies a wide variety of concurrency bugs, including challenging multivariable bugs, with few (often zero) unnecessary code inspections.
Bugaboo REF detects a wide variety of concurrency bugs by identifying rare communication patterns.
5809675
Finding concurrency bugs with context-aware communication graphs
{ "venue": "MICRO 42", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
The increasing availability of GPS-enabled devices is changing the way people interact with the Web, and brings us a large amount of GPS trajectories representing people's location histories. In this paper, based on multiple users' GPS trajectories, we aim to mine interesting locations and classical travel sequences in a given geospatial region. Here, interesting locations mean the culturally important places, such as Tiananmen Square in Beijing, and frequented public areas, like shopping malls and restaurants, etc. Such information can help users understand surrounding locations, and would enable travel recommendation. In this work, we first model multiple individuals' location histories with a tree-based hierarchical graph (TBHG). Second, based on the TBHG, we propose a HITS (Hypertext Induced Topic Search)-based inference model, which regards an individual's access on a location as a directed link from the user to that location. This model infers the interest of a location by taking into account the following three factors. 1) The interest of a location depends on not only the number of users visiting this location but also these users' travel experiences. 2) Users' travel experiences and location interests have a mutual reinforcement relationship. 3) The interest of a location and the travel experience of a user are relative values and are region-related. Third, we mine the classical travel sequences among locations considering the interests of these locations and users' travel experiences. We evaluated our system using a large GPS dataset collected by 107 users over a period of one year in the real world. As a result, our HITS-based inference model outperformed baseline approaches like rank-by-count and rank-by-frequency. Meanwhile, when considering the users' travel experiences and location interests, we achieved a better performance beyond baselines, such as rankby-count and rank-by-interest, etc.
Zheng et al. REF mined interesting locations and classical travel sequences within a given geospatial region from the GPS trajectories generated by multiple users, and provided travel recommendations for mobile tourists.
6491073
Mining interesting locations and travel sequences from GPS trajectories
{ "venue": "WWW '09", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. Enabling the diffusion of lightweight service composition approaches among end users necessitates the appropriate understanding and establishment of the correct user requirements that lead to development of easy to use and effective software platforms. To this end, a user-centric study which includes 15 participants is carried out to unravel users' mental models about software services and service composition, their working practices, and identify users' expectations and problems of service composition. Several examples and prototypes are used to steer this elicitation study, among which is a simple composition tool designed to support non-programmers to create interactive service-based applications in a lightweight and visual manner. Although a high user acceptance emerged in regard to "developing service-based applications by end users", there is evidence of a conceptual issue concerning understanding the notion of service composition (i.e. end users do not think about nor do they understand connections between services). This paper discusses various conceptual and usability problems of service composition and proposes recommendations to resolve them.
A study of users' expectations and usability problems of a composition environment for the ServFace tool REF shows that there is evidence of a fundamental issue concerning the conceptual understanding of service composition (i.e., end users do not think about connecting services).
15308409
Conceptual and usability issues in the composable web of software services
{ "venue": "ICWE Workshops", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods.
REF maximized the mutual information between the generated data and the latent codes by leveraging a network-adapted variational proposal distribution.
5002792
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
We derive two variants of a semi-supervised model for fine-grained sentiment analysis. Both models leverage abundant natural supervision in the form of review ratings, as well as a small amount of manually crafted sentence labels, to learn sentence-level sentiment classifiers. The proposed model is a fusion of a fully supervised structured conditional model and its partially supervised counterpart. This allows for highly efficient estimation and inference algorithms with rich feature definitions. We describe the two variants as well as their component models and verify experimentally that both variants give significantly improved results for sentence-level sentiment analysis compared to all baselines.
REF combine fully and partially supervised structured conditional models for a joint classification of the polarity of whole reviews and review sentences.
8706134
Semi-supervised latent variable models for sentence-level sentiment analysis
{ "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-In the last few years, we have witnessed impressive demonstrations of aggressive flights and acrobatics using quadrotors. However, those robots are actually blind. They do not see by themselves, but through the "eyes" of an external motion capture system. Flight maneuvers using onboard sensors are still slow compared to those attainable with motion capture systems. At the current state, the agility of a robot is limited by the latency of its perception pipeline. To obtain more agile robots, we need to use faster sensors. In this paper, we present the first onboard perception system for 6-DOF localization during high-speed maneuvers using a Dynamic Vision Sensor (DVS). Unlike a standard CMOS camera, a DVS does not wastefully send full image frames at a fixed frame rate. Conversely, similar to the human eye, it only transmits pixel-level brightness changes at the time they occur with microsecond resolution, thus, offering the possibility to create a perception pipeline whose latency is negligible compared to the dynamics of the robot. We exploit these characteristics to estimate the pose of a quadrotor with respect to a known pattern during high-speed maneuvers, such as flips, with rotational speeds up to 1,200 • /s. Additionally, we provide a versatile method to capture ground-truth data using a DVS.
Robot localization in 6-DOF with respect to a map of B&W lines was demonstrated using a DVS, without additional sensing, during high-speed maneuvers of a quadrotor REF, reaching rotational speeds of up to 1,200°/s.
11454240
Event-based, 6-DOF pose tracking for high-speed maneuvers
{ "venue": "2014 IEEE/RSJ International Conference on Intelligent Robots and Systems", "journal": "2014 IEEE/RSJ International Conference on Intelligent Robots and Systems", "mag_field_of_study": [ "Computer Science" ] }
Informally, an obfuscator O is an (efficient, probabilistic) "compiler" that takes as input a program (or circuit) P and produces a new program O(P ) that has the same functionality as P yet is "unintelligible" in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic and complexity-theoretic applications, ranging from software protection to homomorphic encryption to complexity-theoretic analogues of Rice's theorem. Most of these applications are based on an interpretation of the "unintelligibility" condition in obfuscation as meaning that O(P ) is a "virtual black box," in the sense that anything one can efficiently compute given O(P ), one could also efficiently compute given oracle access to P . In this work, we initiate a theoretical investigation of obfuscation. Our main result is that, even under very weak formalizations of the above intuition, obfuscation is impossible. We prove this by constructing a family of efficient programs P that are unobfuscatable in the sense that (a) given any efficient program P that computes the same function as a program P ∈ P, the "source code" P can be efficiently reconstructed, yet (b) given oracle access to a (randomly selected) program P ∈ P, no efficient algorithm can reconstruct P (or even distinguish a certain bit in the code from random) except with negligible probability. We extend our impossibility result in a number of ways, including even obfuscators that (a) are not necessarily computable in polynomial time, (b) only approximately preserve the functionality, and (c) only need to work for very restricted models of computation (TC 0 ). We also rule out several potential applications of obfuscators, by constructing "unobfuscatable" signature schemes, encryption schemes, and pseudorandom function families. In this work, we initiate a theoretical investigation of obfuscation. We examine various formalizations of the notion, in an attempt to understand what we can and cannot hope to achieve. Our main result is a negative one, showing that obfuscation (as it is typically understood) is impossible. Before describing this result and others in more detail, we outline some of the potential applications of obfuscators, both for motivation and to clarify the notion. Software Protection.. The most direct applications of obfuscators are for various forms of software protection. By definition, obfuscating a program protects it against reverse engineering. For example, if one party, Alice, discovers a more efficient algorithm for factoring integers, she may wish to sell another party, Bob, a program for apparently weaker tasks (such as breaking the RSA cryptosystem) that use the factoring algorithm as a subroutine without actually giving Bob a factoring algorithm. Alice could hope to achieve this by obfuscating the program she gives to Bob. Intuitively, obfuscators would also be useful in watermarking software (cf., [CT; NSS]). A software vendor could modify a program's behavior in a way that uniquely identifies the person to whom it is sold, and then obfuscate the program to guarantee that this "watermark" is difficult to remove. Removing Random Oracles.. The Random Oracle Model [BR] is an idealized cryptographic setting in which all parties have access to a truly random function. It is (heuristically) hoped that protocols designed in this model will remain secure when implemented using an efficient, publicly computable cryptographic hash function in place of the random function. 
While it is known that this is not true in general [CGH], it is unknown whether there exist efficiently computable functions with strong enough properties to be securely used in place of the random function in various specific protocols 2 . One might hope to obtain such functions by obfuscating a family of pseudorandom functions [GGM], whose input-output behavior is by definition indistinguishable from that of a truly random function. Transforming Private-Key Encryption into Public-Key Encryption.. Obfuscation can also be used to create new public-key encryption schemes by obfuscating a privatekey encryption scheme. Given a secret key K of a private-key encryption scheme, one can publish an obfuscation of the encryption algorithm Enc K . This allows everyone to encrypt, yet only one possessing the secret key K should be able to decrypt. Interestingly, in the original paper of Diffie and Hellman [DH], the above was the reason given to believe that public-key cryptosystems might exist even though there were no candidates known yet. That is, they suggested that it might be possible to obfuscate a private-key encryption scheme. 3 2 We note that the results of [CGH] can also be seen as ruling out a very strong "virtual black box" definition of obfuscators. This is because their result implies that no obfuscator applied to any pseudorandom function family could work for all protocols, while a very strong virtual black box definition would guarantee this. We note, however, that our main results rule out a seemingly much weaker definition of obfuscation. Also, we note that ruling out strong virtual black box definitions is almost immediate: For example, one thing that can be efficiently computed from O(P ) is the program O(P ) itself. However, for any program P corresponding to a function that is hard to learn from queries, it would be infeasible to produce any program equivalent to P in functionality given only oracle access to P . 3 From [DH]: "A more practical approach to finding a pair of easily computed inverse algorithms E and D; such that D is hard to infer from E, makes use of the difficulty of analyzing programs in low level languages. Anyone who has tried to determine what operation is accomplished by someone else's machine language program knows that E itself (i.e., what E does) can be hard to infer from an algorithm for E. If the program were to be made purposefully confusing through the addition of unneeded variables and statements, then Impossibility of Applications.. To give further evidence that our impossibility result is not an artifact of definitional choices, but rather is inherent in the "virtual black box" determining an inverse algorithm could be made very difficult. Of course, E must be complicated enough to prevent its identification from input-output pairs. Essentially what is required is a one-way compiler: one that takes an easily understood program written in a high level language and translates it into an incomprehensible program in some machine language. The compiler is one-way because it must be feasible to do the compilation, but infeasible to reverse the process. Since efficiency in size of program and run time are not crucial in this application, such compilers may be possible if the structure of the machine language can be optimized to assist in the confusion." A:5 idea, we also demonstrate that several of the applications of obfuscators are impossible. We do this by constructing unobfuscatable signature schemes, encryption schemes, and pseudorandom functions. 
These are objects satisfying the standard definitions of security, but for which one can efficiently compute the secret key K from any program that signs (or encrypts or evaluates the pseudorandom function, resp.) relative to K. Hence handing out "obfuscated forms" of these keyed-algorithms is highly insecure. In particular, we complement Canetti et. al.'s critique of the Random Oracle Methodology [CGH]. They show that there exist (contrived) protocols that are secure in the idealized Random Oracle Model (of [BR]), but are insecure when the random oracle is replaced with any (efficiently computable) function. Our results imply that for even for natural protocols that are secure in the random oracle model, (e.g., Fiat-Shamir type schemes [FS]), there exist (contrived) pseudorandom functions, such that these protocols are insecure when the random oracle is replaced with any program that computes the (contrived) pseudorandom function. We mention that, subsequent to our work, Barak [Bar1] constructed arguably natural protocols that are secure in the random oracle model (e.g. those obtained by applying the Fiat-Shamir heuristic [FS] to his public-coin zero-knowledge arguments) but are insecure when the random oracle is replaced by any efficiently computable function. Definition 2.1 (TM obfuscator). A probabilistic algorithm O is a TM obfuscator for the collection F of Turing machines if the following three conditions hold: 5 See Footnote 7.
Black-box obfuscation REF guarantees that any information that can be derived from the obfuscated program can also be learned given black-box access to the original program.
2409597
On the (Im)possibility of Obfuscating Programs
{ "venue": "IACR Cryptology ePrint Archive", "journal": "IACR Cryptology ePrint Archive", "mag_field_of_study": [ "Computer Science" ] }
Web service composition enables seamless and dynamic integration of business applications on the web. The performance of the composed application is determined by the performance of the involved web services. Therefore, nonfunctional, quality of service aspects are crucial for selecting the web services to take part in the composition. Identifying the best candidate web services from a set of functionallyequivalent services is a multi-criteria decision making problem. The selected services should optimize the overall QoS of the composed application, while satisfying all the constraints specified by the client on individual QoS parameters. In this paper, we propose an approach based on the notion of skyline to effectively and efficiently select services for composition, reducing the number of candidate services to be considered. We also discuss how a provider can improve its service to become more competitive and increase its potential of being included in composite applications. We evaluate our approach experimentally using both real and synthetically generated datasets.
To reduce the number of candidate services to be considered, the authors further propose an approach based on the concept of the skyline REF.
9573183
Selecting skyline services for QoS-based web service composition
{ "venue": "WWW '10", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Recommendation systems play a vital role to keep users engaged with personalized content in modern online platforms. Deep learning has revolutionized many research fields and there is a recent surge of interest in applying it to collaborative filtering (CF). However, existing methods compose deep learning architectures with the latent factor model ignoring a major class of CF models, neighborhood or memory-based approaches. We propose Collaborative Memory Networks (CMN), a deep architecture to unify the two classes of CF models capitalizing on the strengths of the global structure of latent factor model and local neighborhood-based structure in a nonlinear fashion. Motivated by the success of Memory Networks, we fuse a memory component and neural attention mechanism as the neighborhood component. The associative addressing scheme with the user and item memories in the memory module encodes complex user-item relations coupled with the neural attention mechanism to learn a user-item specific neighborhood. Finally, the output module jointly exploits the neighborhood with the user and item memories to produce the ranking score. Stacking multiple memory modules together yield deeper architectures capturing increasingly complex user-item relations. Furthermore, we show strong connections between CMN components, memory networks and the three classes of CF models. Comprehensive experimental results demonstrate the effectiveness of CMN on three public datasets outperforming competitive baselines. Qualitative visualization of the attention weights provide insight into the model's recommendation process and suggest the presence of higher order interactions.
The work most closely related to ours is the recently proposed Collaborative Memory Network (CMN) REF.
13756507
Collaborative Memory Network for Recommendation Systems
{ "venue": "SIGIR '18", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. In this paper, we propose a data mining technique for finding frequently used web pages. These pages may be kept in a server's cache to speed up web access. Existing techniques of selecting pages to be cached do not capture a user's surfing patterns correctly. We use a Weighted Association Rule (WAR) mining technique that finds pages of the user's current interest and cache them to give faster net access. This approach captures both user's habit and interest as compared to other approaches where emphasis is only on habit.
Here, they use a weighted association rule (WAR) mining technique that finds pages of the user's current interest and caches them to provide faster web access REF.
17256640
Speeding up web access using weighted association rules
{ "venue": "PReMI", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
ABSTRACT Social recommender is an active research area. Most previous social recommenders adopt existing social networks to augment recommendations which are based on user preferences. In this contribution, we propose to simultaneously infer the social influence network and the user preferences in a matrix factorization framework. Furthermore, we assume that the influence strength is dependent on the social roles of users. We present an incremental clustering algorithm to detect dynamic social roles. Comprehensive experiments on real data sets demonstrate the efficiency and effectiveness of our model to generate precise recommendations. INDEX TERMS Social role, recommender system, social recommendation, matrix factorization.
Based on the assumption that the influence strength is dependent on the social roles of users, the authors of REF propose an incremental clustering algorithm to detect dynamic social roles.
49869823
Recommendation With Social Roles
{ "venue": "IEEE Access", "journal": "IEEE Access", "mag_field_of_study": [ "Computer Science" ] }
Measuremeut of the extent of diabetic retinopathy is an essential part of assessing the efficacy of local or systemic treatment regimens. Current clinical studies use empiri cal grading of retinopathy which is performed by a trained observer using standard photographs. This method is relatively arbitrary, as well as time consuming and vul nerable to observer error. We have developed a digital fundus imaging system and image processing pro grams which provide objective, quantitative measures of macular oedema, retinal exudates, and microaneurysms in diabetic retinopathy. Using fluorescein angio grams, the degree of macular oedema is quantified both in terms of area of fundus involved and severity of oedema by analysis of the temporal changes in intensity of fluorescence. Fluorescein angiograms are also used for the detection and counting of microaneurysms, by a combination of shade correction, matched filtering, and shape algorithms. For detection and measurement of retinal exudates, a colour transparency projected through a red free filter is analysed using a combination of shade correction and thresholding techniques. The system described is in clinical use, and has potential for a wide variety of applications. With further development, digital analysis of fundus images should supercede the currently used manual semi-quantitative methods, providing faster, more accurate, objective quantitative results.
Phillips et al. REF have proposed a method for the quantification of diabetic maculopathy using fluorescein angiograms.
12198554
Quantification of diabetic maculopathy by digital imaging of the fundus
{ "venue": "Eye", "journal": "Eye", "mag_field_of_study": [ "Medicine" ] }
This paper presents a supervised approach for identifying generic noun phrases in context. Generic statements express rulelike knowledge about kinds or events. Therefore, their identification is important for the automatic construction of knowledge bases. In particular, the distinction between generic and non-generic statements is crucial for the correct encoding of generic and instance-level information. Generic expressions have been studied extensively in formal semantics. Building on this work, we explore a corpus-based learning approach for identifying generic NPs, using selections of linguistically motivated features. Our results perform well above the baseline and existing prior work.
REF use a wide range of syntactic and semantic features to train a supervised classifier for identifying generic NPs.
5325335
Identifying Generic Noun Phrases
{ "venue": "ACL", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We propose a soft attention based model for the task of action recognition in videos. We use multi-layered Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units which are deep both spatially and temporally. Our model learns to focus selectively on parts of the video frames and classifies videos after taking a few glimpses. The model essentially learns which parts in the frames are relevant for the task at hand and attaches higher importance to them. We evaluate the model on UCF-11 (YouTube Action), HMDB-51 and Hollywood2 datasets and analyze how the model focuses its attention depending on the scene and the action being performed.
Sharma et al. REF proposed a soft attention mechanism on top of multi-layer LSTMs to attend to salient parts of the video frames for action classification.
362506
Action Recognition using Visual Attention
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
ABSTRACT Vehicular ad hoc network (VANET) is a technology that enables smart vehicles to communicate with each other and form a mobile network. VANET facilitates users with improved traffic efficiency and safety. Authenticated communication becomes one of the prime requirements of VANET. However, authentication may reveal a user's personal information such as identity or location, and therefore, the privacy of an honest user must be protected. This paper proposes an efficient and practical pseudonymous authentication protocol with conditional privacy preservation. Our protocol proposes a hierarchy of pseudonyms based on the time period of their usage. We propose the idea of primary pseudonyms with relatively longer time periods that are used to communicate with semi-trusted authorities and secondary pseudonyms with a smaller life time that are used to communicate with other vehicles. Most of the current pseudonym-based approaches are based on certificate revocation list (CRL) that causes significant communication and storage overhead or group-based approaches that are computationally expensive and suffer from group-management issues. These schemes also suffer from trust issues related to certification authority. Our protocol only expects an honest-but-curious behavior from otherwise fully trusted authorities. Our proposed protocol protects a user's privacy until the user honestly follows the protocol. In case of a malicious activity, the true identity of the user is revealed to the appropriate authorities. Our protocol does not require maintaining a CRL and the inherent mechanism assures the receiver that the message and corresponding pseudonym are safe and authentic. We thoroughly examined our protocol to show its resilience against various attacks and provide computational as well as communicational overhead analysis to show its efficiency and robustness. Furthermore, we simulated our protocol in order to analyze the network performance and the results show the feasibility of our proposed protocol in terms of end-to-end delay and packet delivery ratio. INDEX TERMS Vehicular adhoc network, authentication, privacy, pseudonyms.
To overcome the communication overhead due to the CRL, REF presented a hierarchy of pseudonyms for a semi-trusted multi-authority VANET to preserve user privacy.
5800761
A Hierarchical Privacy Preserving Pseudonymous Authentication Protocol for VANET
{ "venue": "IEEE Access", "journal": "IEEE Access", "mag_field_of_study": [ "Computer Science" ] }
Person re-identification (Re-ID) aims to match person images captured from two non-overlapping cameras. In this paper, a deep hybrid similarity learning (DHSL) method for person Re-ID based on a convolution neural network (CNN) is proposed. In our approach, a light CNN learning feature pair for the input image pair is simultaneously extracted. Then, both the elementwise absolute difference and multiplication of the CNN learning feature pair are calculated. Finally, a hybrid similarity function is designed to measure the similarity between the feature pair, which is realized by learning a group of weight coefficients to project the elementwise absolute difference and multiplication into a similarity score. Consequently, the proposed DHSL method is able to reasonably assign complexities of feature learning and metric learning in a CNN, so that the performance of person Re-ID is improved. Experiments on three challenging person Re-ID databases, QMUL GRID, VIPeR, and CUHK03, illustrate that the proposed DHSL method is superior to multiple state-of-the-art person Re-ID methods. Index Terms-Metric learning, convolution neural network, deep hybrid similarity learning, person re-identification (Re-ID).
Zhu et al. REF propose a deep hybrid similarity learning (DHSL) method to match person images.
6882063
Deep Hybrid Similarity Learning for Person Re-Identification
{ "venue": "IEEE Transactions on Circuits and Systems for Video Technology", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Distributed mobile crowd sensing is becoming a valuable paradigm, enabling a variety of novel applications built on mobile networks and smart devices. However, this trend brings several challenges, including the need for crowdsourcing platforms to manage interactions between applications and the crowd (participants or workers). One of the key functions of such platforms is spatial task assignment which assigns sensing tasks to participants based on their locations. Task assignment becomes critical when participants are hesitant to share their locations due to privacy concerns. In this paper, we examine the problem of spatial task assignment in crowd sensing when participants utilize spatial cloaking to obfuscate their locations. We investigate methods for assigning sensing tasks to participants, efficiently managing location uncertainty and resource constraints. We propose a novel two-stage optimization approach which consists of global optimization using cloaked locations followed by a local optimization using participants' precise locations without breaching privacy. Experimental results using both synthetic and real data show that our methods achieve high sensing coverage with low cost using cloaked locations.
A novel two-stage optimization approach is designed to protect location privacy in spatial task assignment: cloaked locations are used in the global optimization, while precise locations are used in a subsequent local optimization REF.
1476603
Spatial Task Assignment for Crowd Sensing with Cloaked Locations
{ "venue": "2014 IEEE 15th International Conference on Mobile Data Management", "journal": "2014 IEEE 15th International Conference on Mobile Data Management", "mag_field_of_study": [ "Computer Science" ] }
Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet
SqueezeNet REF proposes fire modules and achieves AlexNet-level accuracy with 50× fewer parameters.
14136028
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Many data management applications, such as setting up Web portals, managing enterprise data, managing community data, and sharing scientific data, require integrating data from multiple sources. Each of these sources provides a set of values and different sources can often provide conflicting values. To present quality data to users, it is critical that data integration systems can resolve conflicts and discover true values. Typically, we expect a true value to be provided by more sources than any particular false one, so we can take the value provided by the majority of the sources as the truth. Unfortunately, a false value can be spread through copying and that makes truth discovery extremely tricky. In this paper, we consider how to find true values from conflicting information when there are a large number of sources, among which some may copy from others. We present a novel approach that considers dependence between data sources in truth discovery. Intuitively, if two data sources provide a large number of common values and many of these values are rarely provided by other sources (e.g., particular false values), it is very likely that one copies from the other. We apply Bayesian analysis to decide dependence between sources and design an algorithm that iteratively detects dependence and discovers truth from conflicting information. We also extend our model by considering accuracy of data sources and similarity between values. Our experiments on synthetic data as well as real-world data show that our algorithm can significantly improve accuracy of truth discovery and is scalable when there are a large number of data sources.
Recently, advanced techniques have been proposed to consider accuracy of and dependence between sources in conflict resolution REF .
9664056
Integrating Conflicting Data: The Role of Source Dependence
{ "venue": "PVLDB", "journal": "PVLDB", "mag_field_of_study": [ "Computer Science" ] }
This paper gives the main definitions relating to dependability, a generic concept including as special case such attributes as reliability, availability, safety, confidentiality, integrity, maintainability, etc. Basic definitions are given first. They are then commented upon, and supplemented by additional definitions, which address the threats to dependability (faults, errors, failures), and the attributes of dependability. The discussion on the attributes encompasses the relationship of dependability with security, survivability and trustworthiness. Key words: Dependability, availability, reliability, safety, confidentiality, integrity, maintainability, security, survivability, trustworthiness, faults, errors, failures. The delivery of correct computing and communication services has been a concern of their providers and users since the earliest days. In the July 1834 issue of the Edinburgh Review, Dr. Dionysius Lardner published the article "Babbage's calculating engine", in which he wrote: "The most certain and effectual check upon errors which arise in the process of computation, is to cause the same computations to be made by separate and independent computers; and this check is rendered still more decisive if they make their computations by different methods". 2 It must be noted that the term "computer" in the previous quotation refers to a person who performs computations, and not the "calculating engine". The first generation of electronic computers (late 1940's to mid-50's) used rather unreliable components, therefore practical techniques were employed to improve their reliability, such as error control codes, duplexing with comparison, triplication with voting, diagnostics to locate failed components, etc. At the same time J. von Neumann [von Neumann 1956] , E. F. Moore and C. E. Shannon [Moore & Shannon 1956] , and their successors developed theories of using redundancy to build reliable logic structures from less reliable components, whose faults were masked by the presence of multiple redundant components. The theories of masking redundancy were unified by W. H. Pierce as the concept of failure tolerance in 1965 [Pierce 1965] . In 1967, A. Avižienis integrated masking with the practical techniques of error detection, fault diagnosis, and recovery into the concept of faulttolerant systems [Avižienis 1967 ]. In the reliability modeling field, the major event was the introduction of the coverage concept by Bouricius, Carter and Schneider [Bouricius et al. 1969] . Work on software fault tolerance was initiated by Elmendorf [Elmendorf 1972] , later it was complemented by recovery blocks [Randell 1975] , and by N-version programming [Avižienis & Chen, 1977] . [Laprie 1992], in which the English text was also translated into French, German, Italian, and Japanese. In this book, intentional faults (malicious logic, intrusions) were listed along with accidental faults (physical, design, or interaction faults). Exploratory research on the integration of fault tolerance and the defenses against deliberately malicious faults, i.e., security threats, was started in the mid-80's [Dobson & Randell 1986] , [Joseph & Avižienis 1988] , [Fray et al. 1986] . The first IFIP Working Conference on Dependable Computing for Critical Applications (DCCA) was held in 1989. 
This and the six Working Conferences that followed fostered the interaction of the dependability and security communities, and advanced the integration of security (confidentiality, integrity and availability) into the framework of dependable computing. Since 2000, the DCCA Working Conference together with the FTCS became parts of the International Conference on Dependable Systems and Networks (DSN). In this section we present a basic set of definitions (in bold typeface) that will be used throughout the entire discussion of the taxonomy of dependable computing. The definitions are general enough to cover the entire range of computing and communication systems, from individual logic gates to networks of computers with human operators and users. A system in our taxonomy is an entity that interacts with other entities, i.e., other systems, including hardware, software, humans, and the physical world with its natural phenomena. These other systems are the environment of the given system. The system boundary is the common frontier between the system and its environment. Computing and communication systems are characterized by four fundamental properties: functionality, performance, dependability, and cost. Those four properties are collectively influenced by two other properties: usability and adaptability. The function of such a system is what the system is intended to do and is described by the functional specification in terms of functionality and performance. Dependability and cost have separate specifications. The behavior of a system is what the system does to implement its function and is described by a sequence of states. The total state of a given system is the set of the following states: computation, communication, stored information, interconnection, and physical condition. The structure of a system is what enables it to generate the behavior. From a structural viewpoint, a system is a set of components bound together in order to interact, where each component is another system, etc. The recursion stops when a component is considered to be atomic: any further internal structure cannot be discerned, or is not of interest and can be ignored. The service delivered by a system (the provider) is its behavior as it is perceived by its user(s); a user is another system that receives service from the provider. The part of the provider's system boundary where service delivery takes place is the service interface. The part of the provider's total state that is perceivable at the service interface is its external state; the remaining part is its internal state. The delivered service is a sequence of the provider's external states. We note that a system may sequentially or simultaneously be a provider and a user with respect to another system, i.e., deliver service to and receive service from that other system. It is usual to have a hierarchical view of a system structure. The relation is composed of, or is decomposed into, induces a hierarchy; however it relates only to the list of the system components. A hierarchy that takes into account the system behavior is the relation uses [Parnas 1974, Ghezzi et al.]. We have up to now used the singular for function and service. A system generally implements more than one function, and delivers more than one service. Function and service can be thus seen as composed of function items and of service items.
For the sake of simplicity, we shall simply use the plural -functions, services -when it is necessary to distinguish several function or service items. Correct service is delivered when the service implements the system function. A service failure is an event that occurs when the delivered service deviates from correct service. A service fails either because it does not comply with the functional specification, or because this specification did not adequately describe the system function. A service failure is a transition from correct service to incorrect service, i.e., to not implementing the system function. The period of delivery of incorrect service is a service outage. The transition from incorrect service to correct service is a service restoration. The deviation from correct service may assume different forms that are called service failure modes and are ranked according to failure severities. A detailed taxonomy of failure modes is presented in Section 4. Since a service is a sequence of the system's external states, a service failure means that at least one (or more) external state of the system deviates from the correct service state. The deviation is called an error. The adjudged or hypothesized cause of an error is called a fault. In most cases a fault first causes an error in the service state of a component that is a part of the internal state of the system and the external state is not immediately affected. For this reason the definition of an error is: the part of the total state of the system that may lead to its subsequent service failure. It is important to note that many errors do not reach the system's external state and cause a failure. A fault is active when it causes an error, otherwise it is dormant. When the functional specification of a system includes a set of several functions, the failure of one or more of the services implementing the functions may leave the system in a degraded mode that still offers a subset of needed services to the user. The specification may identify several such modes, e.g., slow service, limited service, emergency service, etc. Here we say that the system has suffered a partial failure of its functionality or performance. Development failures and dependability failures that are discussed in Section 4 also can be partial failures. The general, qualitative, definition of dependability is: the ability to deliver service that can justifiably be trusted. This definition stresses the need for justification of trust. The alternate, quantitative, definition that provides the criterion for deciding if the service is dependable is: dependability of a system is the ability to avoid service failures that are more frequent and more severe than is acceptable to the user(s). As developed over the past three decades, dependability is an integrating concept that encompasses the following attributes: • availability: readiness for correct service; • reliability: continuity of correct service; • safety: absence of catastrophic consequences on the user(s) and the environment; • confidentiality: absence of unauthorized disclosure of information; • integrity: absence of improper system alterations; • maintainability: ability to undergo modifications and repairs. Security is the concurrent existence of a) availability for authorized users only, b) confidentiality, and c) integrity with 'improper' meaning 'unauthorized'.
The dependability specification of a system must include the requirements for the dependability attributes in terms of the acceptable frequency and severity of failures for the specified classes of faults and a given use environment. One or more attributes may not be required at all for a given system. The taxonomy of the attributes of dependability is presented in Section 5. Over the course of the past fifty years many means to attain the attributes of dependability have been developed. Those means can be grouped into four major categories: • fault prevention: means to prevent the occurrence or introduction of faults; • fault tolerance: means to avoid service failures in the presence of faults; • fault removal: means to reduce the number and severity of faults; • fault forecasting: means to estimate the present number, the future incidence, and the likely consequences of faults. Fault prevention and fault tolerance aim to provide the ability to deliver a service that can be trusted, while fault removal and fault forecasting aim to reach confidence in that ability by justifying that the functional and dependability specifications are adequate and that the system is likely to meet them. THE TAXONOMY OF FAULTS In this and the next section we present the taxonomy of threats that may affect a system during its entire life. The life cycle of a system consists of two phases: development and use. The development phase includes all activities from presentation of the user's initial concept to the decision that the system has passed all acceptance tests and is ready to be deployed for use in its user's environment. During the development phase the system is interacting with the development environment and development faults may be introduced into the system by the environment. The development environment of a system consists of the following elements: 1. the physical world with its natural phenomena; 2. human developers, some possibly lacking competence or having malicious objectives; 3. development tools: software and hardware used by the developers to assist them in the development process; 4. production and test facilities. The use phase of a system's life begins when the system is accepted for use and starts the delivery of its services to the users. Use consists of alternating periods of correct service delivery (to be called service delivery), service outage, and service shutdown. A service outage is caused by a service failure. It is the period when incorrect service (including no service at all) is delivered at the service interface. A service shutdown is an intentional halt of service by an authorized entity. Maintenance actions may take place during all three periods of the use phase. During the use phase the system interacts with its use environment and may be adversely affected by faults originating in it. The use environment consists of the following elements: 1. the physical world with its natural phenomena; 2. the administrators (including maintainers): entities (humans, other systems) that have the authority to manage, modify, repair and use the system; some authorized humans may lack competence or have malicious objectives; 3. the users: entities that receive service at the service interfaces; 4. the providers: entities that deliver services to the system at its service interfaces; 5.
the fixed resources: entities that are not users, but provide specialized services to the system, such as information sources (e.g., GPS, time, etc.), communication links, power sources, cooling airflow, etc. 6. the intruders: malicious entities that have no authority but attempt to intrude into the system and to alter service or halt it, alter the system's functionality or performance, or to access confidential information. They are hackers, malicious insiders, agents of hostile governments or organizations, and info-terrorists. As used here, the term maintenance, following common usage, includes not only repairs, but also all modifications of the system that take place during the use phase of system life. Therefore maintenance is a development process, and the preceding discussion of development applies to maintenance as well. It is noteworthy that repair and fault tolerance are related concepts; the distinction between fault tolerance and maintenance in this paper is that maintenance involves the participation of an external agent, e.g., a repairman, test equipment, remote reloading of software. Furthermore, repair is part of fault removal (during the use phase), and fault forecasting usually considers repair situations. All faults that may affect a system during its life are classified according to eight basic viewpoints that are shown in Figure 3. The classification criteria are as follows: 1. The phase of system life during which the faults originate: • development faults that occur during (a) system development, (b) maintenance during the use phase, and (c) generation of procedures to operate or to maintain the system; • operational faults that occur during service delivery of the use phase. 2. The location of the faults with respect to the system boundary: • internal faults that originate inside the system boundary; • external faults that originate outside the system boundary and propagate errors into the system by interaction or interference. 3. The phenomenological cause of the faults: • natural faults that are caused by natural phenomena without human participation; • human-made faults that result from human actions. 4. The dimension in which the faults originate: • hardware (physical) faults that originate in, or affect, hardware; • software (information) faults that affect software, i.e., programs or data. 5. The objective of the human(s) who caused the faults: • malicious faults that are introduced by a human with the malicious objective of causing harm to the system; • non-malicious faults that are introduced without a malicious objective. 6. The intent of the human(s) who caused the faults: • deliberate faults that are the result of a harmful decision; • non-deliberate faults that are introduced without awareness. 7. The capacity of the human(s) who introduced the faults: • accidental faults that are introduced inadvertently; • incompetence faults that result from lack of professional competence by the authorized human(s), or from inadequacy of the development organization. 8. The temporal persistence of the faults: • permanent faults whose presence is assumed to be continuous in time; • transient faults whose presence is bounded in time. If all combinations of the eight elementary fault classes were possible, there would be 256 different combined fault classes.
In fact, the number of likely combinations is 31; they are shown in Figures 3.3 and 3.4. The combined faults of Figures 3.3 and 3.4 are shown to belong to three major partially overlapping groupings: • development faults that include all fault classes occurring during development; • physical faults that include all fault classes that affect hardware; • interaction faults that include all external faults. The boxes at the bottom of Figure 3.3 identify the names of some illustrative fault classes. The definition of human-made faults (that result from harmful human actions) includes absence of actions when actions should be performed, i.e., omission faults, or simply omissions. Performing wrong actions leads to commission faults. The two basic classes of human-made faults are distinguished by the objective of the developer or of the humans interacting with the system during its use: • malicious faults, introduced during either system development with the intent to cause harm to the system during its use (#5-#6), or directly during use (#22-#25); • non-malicious faults (#1-#4, #7-#21, #26-#31), introduced without malicious objectives. Malicious human-made faults are introduced by a developer with the malicious objective to alter the functioning of the system during use. The goals of such faults are: (1) to disrupt or halt service (thus provoking denials-of-service), (2) to access confidential information, or (3) to improperly modify the system. They are grouped into two classes: • potentially harmful components (#5, #6): Trojan horses, trapdoors, logic or timing bombs; • deliberately introduced software or hardware vulnerabilities or human-made faults. Deliberate, non-malicious, development faults result generally from tradeoffs, either a) aimed at preserving acceptable performance, at facilitating system utilization, or b) induced by economic considerations. Deliberate, non-malicious interaction faults may result from the action of an operator either aimed at overcoming an unforeseen situation, or deliberately violating an operating procedure without having realized the possibly damaging consequences of this action. Deliberate non-malicious faults share the property that often it is recognized that they were faults only after an unacceptable system behavior, thus a failure, has ensued; the developer(s) or operator(s) did not realize that the consequence of their decision was a fault. It is often considered that both mistakes and bad decisions are accidental, as long as they are not made with malicious objectives. However, not all mistakes and bad decisions by non-malicious persons are accidents. Some very harmful mistakes and very bad decisions are made by persons who lack professional competence to do the job they have undertaken. A complete fault taxonomy should not conceal this cause of faults, therefore we introduce a further partitioning of both classes of non-malicious human-made faults into (1) accidental faults, and (2) incompetence faults. The structure of this human-made fault taxonomy is shown in Figure 3.5.
According to Avizienis et al. REF , dependability is a superordinate concept encompassing system attributes such as reliability, safety, security, and availability, as well as non-functional requirements for modern embedded systems.
14893317
Dependability and Its Threats: A Taxonomy
{ "venue": "IFIP Congress Topical Sessions", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In this work we extend the concept of uplink-downlink rate balancing to frequency division duplex (FDD) massive MIMO systems. We consider a base station with a large number of antennas serving many single-antenna users. We first show that any unused capacity in the uplink can be traded off for higher throughput in the downlink in a system that uses either dirty paper (DP) coding or linear zero-forcing (ZF) precoding. We then also study the scaling of the system throughput with the number of antennas in cases of linear Beamforming (BF) Precoding, ZF Precoding, and DP coding. We show that the downlink throughput is proportional to the logarithm of the number of antennas. While this logarithmic scaling is lower than the linear scaling of the rate in the uplink, it can still bring significant throughput gains. For example, we demonstrate through analysis and simulation that increasing the number of antennas from 4 to 128 will increase the throughput by more than a factor of 5. We also show that a logarithmic scaling of downlink throughput as a function of the number of receive antennas can be achieved even when the number of transmit antennas only increases logarithmically with the number of receive antennas.
Downlink throughput scaling behavior was investigated in REF , where it was shown that unused uplink throughput can be traded off for higher downlink throughput and that the downlink throughput is proportional to the logarithm of the number of base-station antennas.
1301331
Uplink Downlink Rate Balancing and throughput scaling in FDD Massive MIMO Systems
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23] [24] . Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset [26] .
On the monocular side, REF proposes to estimate 3D bounding boxes using geometric relations and constraints between 2D and 3D bounding boxes.
8694036
3D Bounding Box Estimation Using Deep Learning and Geometry
{ "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
In this paper, we propose PointRCNN for 3D object detection from raw point cloud. The whole framework is composed of two stages: stage-1 for the bottom-up 3D proposal generation and stage-2 for refining proposals in the canonical coordinates to obtain the final detection results. Instead of generating proposals from RGB image or projecting point cloud to bird's view or voxels as previous methods do, our stage-1 sub-network directly generates a small number of high-quality 3D proposals from point cloud in a bottom-up manner via segmenting the point cloud of the whole scene into foreground points and background. The stage-2 sub-network transforms the pooled points of each proposal to canonical coordinates to learn better local spatial features, which is combined with global semantic features of each point learned in stage-1 for accurate box refinement and confidence prediction. Extensive experiments on the 3D detection benchmark of KITTI dataset show that our proposed architecture outperforms state-of-the-art methods with remarkable margins by using only point cloud as input. The code is available at https://github.com/sshaoshuai/PointRCNN.
Shi et al. REF proposed the PointRCNN architecture to directly generate 3D proposals from raw point clouds by segmenting the foreground points and refining them in canonical coordinates.
54607410
PointRCNN: 3D Object Proposal Generation and Detection From Point Cloud
{ "venue": "2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
In this paper, we present a novel approach to predict crime in a geographic space from multiple data sources, in particular mobile phone and demographic data. The main contribution of the proposed approach lies in using aggregated and anonymized human behavioral data derived from mobile network activity to tackle the crime prediction problem. While previous research efforts have used either background historical knowledge or offenders' profiling, our findings support the hypothesis that aggregated human behavioral data captured from the mobile network infrastructure, in combination with basic demographic information, can be used to predict crime. In our experimental results with real crime data from London we obtain an accuracy of almost 70% when predicting whether a specific area in the city will be a crime hotspot or not. Moreover, we provide a discussion of the implications of our findings for data-driven crime analysis.
The authors of REF use human behavioral data derived from mobile network and demographic sources, together with open crime data, to predict crime hotspots.
746921
Once Upon a Crime: Towards Crime Prediction from Demographics and Mobile Data
{ "venue": null, "journal": "Proceedings of the 16th International Conference on Multimodal Interaction", "mag_field_of_study": [ "Computer Science", "Physics" ] }
Domain adaptation aims at training a classifier in one dataset and applying it to a related but not identical dataset. One successfully used framework of domain adaptation is to learn a transformation to match both the distribution of the features (marginal distribution), and the distribution of the labels given features (conditional distribution). In this paper, we propose a new domain adaptation framework named Deep Transfer Network (DTN), where the highly flexible deep neural networks are used to implement such a distribution matching process. This is achieved by two types of layers in DTN: the shared feature extraction layers which learn a shared feature subspace in which the marginal distributions of the source and the target samples are drawn close, and the discrimination layers which match conditional distributions by classifier transduction. We also show that DTN has a computation complexity linear to the number of training samples, making it suitable to large-scale problems. By combining the best paradigms in both worlds (deep neural networks in recognition, and matching marginal and conditional distributions in domain adaptation), we demonstrate by extensive experiments that DTN improves significantly over former methods in both execution time and classification accuracy.
Deep Transfer Network (DTN) REF employs a deep neural network to model and match both the domains' marginal and conditional distributions.
14814410
Deep Transfer Network: Unsupervised Domain Adaptation
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Active learning reduces the labeling cost by iteratively selecting the most valuable data to query their labels. It has attracted a lot of interests given the abundance of unlabeled data and the high cost of labeling. Most active learning approaches select either informative or representative unlabeled instances to query their labels, which could significantly limit their performance. Although several active learning algorithms were proposed to combine the two query selection criteria, they are usually ad hoc in finding unlabeled instances that are both informative and representative. We address this limitation by developing a principled approach, termed QUIRE, based on the min-max view of active learning. The proposed approach provides a systematic way for measuring and combining the informativeness and representativeness of an unlabeled instance. Further, by incorporating the correlation among labels, we extend the QUIRE approach to multi-label learning by actively querying instance-label pairs. Extensive experimental results show that the proposed QUIRE approach outperforms several state-of-the-art active learning approaches in both single-label and multi-label learning.
More recent approaches like QUIRE REF select instances that are both informative and representative of the unlabeled dataset.
8326832
Active Learning by Querying Informative and Representative Examples
{ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instancedecay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.
Koren REF proposed a method to model the time-changing behavior throughout the life span of the data, improving recommendation performance.
3022077
Collaborative filtering with temporal dynamics
{ "venue": "KDD", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Domain Adaptation is an actively researched problem in Computer Vision. In this work, we propose an approach that leverages unsupervised data to bring the source and target distributions closer in a learned joint feature space. We accomplish this by inducing a symbiotic relationship between the learned embedding and a generative adversarial network. This is in contrast to methods which use the adversarial framework for realistic data generation and retraining deep models with such data. We demonstrate the strength and generality of our approach by performing experiments on three different tasks with varying levels of difficulty: (1) Digit classification (MNIST, SVHN and USPS datasets) (2) Object recognition using OFFICE dataset and (3) Domain adaptation from synthetic to real data. Our method achieves state-of-the art performance in most experimental settings and by far the only GAN-based method that has been shown to work well across different datasets such as OFFICE and DIGITS.
Another GAN-based domain adaptation method is proposed in REF , where the network produces source-like images from the source and target embeddings.
4547917
Generate to Adapt: Aligning Domains Using Generative Adversarial Networks
{ "venue": "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition", "journal": "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition", "mag_field_of_study": [ "Computer Science" ] }
In this paper, we propose a geometry-contrastive generative adversarial network GC-GAN for generating facial expression images conditioned on geometry information. Specifically, given an input face and a target expression designated by a set of facial landmarks, an identity-preserving face can be generated guided by the target expression. In order to embed facial geometry onto a semantic manifold, we incorporate contrastive learning into conditional GANs. Experiment results demonstrate that the manifold is sensitive to the changes of facial geometry both globally and locally. Benefited from the semantic manifold, dynamic smooth transitions between different facial expressions are exhibited via geometry interpolation. Furthermore, our method can also be applied in facial expression transfer even there exist big differences in face shape between target faces and driving faces.
In the same direction, Qiao et al. REF proposed a Geometry-Contrastive GAN (GC-GAN) to transfer facial expressions across different identities using face geometry.
3652071
Geometry-Contrastive Generative Adversarial Network for Facial Expression Synthesis
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Abstract-In wireless mesh networks, the end-to-end throughput of traffic flows depends on the path length, i.e. the higher the number of hops, the lower becomes the throughput. In this paper, a Fair End-to-end Bandwidth Allocation (FEBA) algorithm is introduced to solve this problem. FEBA is implemented at the Medium Access Control (MAC) layer of single-radio, multiple channels IEEE 802.16 mesh nodes, operated in a distributed coordinated scheduling mode. FEBA negotiates bandwidth among neighbors to assign a fair share to each end-to-end traffic flow. This is carried out in two steps. First, bandwidth is requested and granted in a round-robin fashion where heavily loaded links are provided with a proportionally higher amount of service than the lightly loaded links at each round. Second, at each output link, packets from different traffic flows are buffered in separate queues which are served by the Deficit Round Robin (DRR) scheduling algorithm. If multiple channels are available, all of them are shared evenly in order to increase the network capacity due to frequency reuse. The performance of FEBA is evaluated by extensive simulations and is shown to provide fairness by balancing the bandwidth among traffic flows.
Cicconetti et al. REF propose a Fair End-to-end Bandwidth Allocation (FEBA) algorithm that lets IEEE 802.16 mesh nodes negotiate bandwidth in a multi-channel environment.
7488933
Bandwidth Balancing in Multi-Channel IEEE 802.16 Wireless Mesh Networks
{ "venue": "IEEE INFOCOM 2007 - 26th IEEE International Conference on Computer Communications", "journal": "IEEE INFOCOM 2007 - 26th IEEE International Conference on Computer Communications", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Untraceability of vehicles is an important requirement in future vehicle communications systems. Unfortunately, heartbeat messages used by many safety applications provide a constant stream of location data, and without any protection measures, they make tracking of vehicles easy even for a passive eavesdropper. One commonly known solution is to transmit heartbeats under pseudonyms that are changed regularly in order to obfuscate the trajectory of vehicles. However, this approach is effective only if some silent period is kept during the pseudonym change and several vehicles change their pseudonyms nearly at the same time and at the same location. Unlike previous works that proposed explicit synchronization between a group of vehicles and/or required pseudonym change in a designated physical area (i.e., a static mix zone), we propose a much simpler approach that does not need any explicit cooperation between vehicles and any infrastructure support. Our basic idea is that vehicles should not transmit heartbeat messages when their speed drops below a given threshold, say 30 km/h, and they should change pseudonym during each such silent period. This ensures that vehicles stopping at traffic lights or moving slowly in a traffic jam will all refrain from transmitting heartbeats and change their pseudonyms nearly at the same time and location. Thus, our scheme ensures both silent periods and synchronized pseudonym change in time and space, but it does so in an implicit way. We also argue that the risk of a fatal accident at a slow speed is low, and therefore, our scheme does not seriously impact safety-of-life. In addition, refraining from sending heartbeat messages when moving at low speed also relieves vehicles of the burden of verifying a potentially large amount of digital signatures, and thus, makes it possible to implement vehicle communications with less expensive equipment. As deployment decision points for these projects draw nearer, the provision of adequate security mechanisms will be an important consideration for policy-makers. In addition to the usual security requirements of confidentiality, authentication and integrity, VANET security typically presents an additional requirement, that of privacy. Informally, the privacy requirement represents a user's expectation that only appropriately authorized parties will be able to determine where he or she was at a given time. This informal definition may be formalized in many ways, and the definition of appropriately authorized parties may vary according to the circumstances and from jurisdiction to jurisdiction (or a user may expect that no entity can track them at all). As messages sent by the vehicles within the VANET may contain meta-information that endangers the privacy of the drivers, vehicle communication systems must satisfy the following two properties: pseudonymity and unlinkability. Pseudonymity means that identifiers in a message do not directly refer to the sender of the message, so an eavesdropper cannot easily determine the real identity of the sender. Unlinkability means that it is made difficult for an attacker to determine that two messages have come from the same vehicle. This second property is necessary to preserve privacy in the sense of our informal statement above because a physical observation of a vehicle at point A, and the ability to link its transmissions at A to transmissions at B, would allow an attacker to determine that the vehicle had also been at point B.
Note that we do not address short-term linkability which is required in order to implement vehicle safety applications. Making the security subsystem designer's job more complicated, most proposed V2X communications systems make use of an additional type of highly privacy-threatening message, known as the heartbeat (in America) or beacon (in Europe) message (see [5] for an example). This message is sent with a high frequency (10Hz is often recommended) and contains the vehicle's current position and velocity, in order to improve the information that other drivers have about the traffic conditions in their immediate vicinity. An attacker can therefore attempt to trace a vehicle, and thereby break its unlinkability.
In SLOW REF , vehicles are not allowed to broadcast messages once their speed drops below a threshold (e.g., 30 km/h), and they change their pseudonyms during each such low-speed silent period.
10347748
SLOW: A Practical pseudonym changing scheme for location privacy in VANETs
{ "venue": "2009 IEEE Vehicular Networking Conference (VNC)", "journal": "2009 IEEE Vehicular Networking Conference (VNC)", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Human Activity Recognition provides valuable contextual information for wellbeing, healthcare, and sport applications. Over the past decades, many machine learning approaches have been proposed to identify activities from inertial sensor data for specific applications. Most methods, however, are designed for offline processing rather than processing on the sensor node. In this paper, a human activity recognition technique based on a deep learning methodology is designed to enable accurate and real-time classification for low-power wearable devices. To obtain invariance against changes in sensor orientation, sensor placement, and in sensor acquisition rates, we design a feature generation process that is applied to the spectral domain of the inertial data. Specifically, the proposed method uses sums of temporal convolutions of the transformed input. Accuracy of the proposed approach is evaluated against the current state-of-the-art methods using both laboratory and real world activity datasets. A systematic analysis of the feature generation parameters and a comparison of activity recognition computation times on mobile devices and sensor nodes are also presented.
In REF , the authors proposed a human activity recognition technique based on a deep learning model designed for low power devices.
17900523
Deep learning for human activity recognition: A resource efficient implementation on low-power devices
{ "venue": "2016 IEEE 13th International Conference on Wearable and Implantable Body Sensor Networks (BSN)", "journal": "2016 IEEE 13th International Conference on Wearable and Implantable Body Sensor Networks (BSN)", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Wireless access networks are often characterized by the interaction of different end users, communication technologies, and network operators. This paper analyzes the dynamics among these "actors" by focusing on the processes of wireless network selection, where end users may choose among multiple available access networks to get connectivity, and resource allocation, where network operators may set their radio resources to provide connectivity. The interaction among end users is modeled as a noncooperative congestion game, where players (end users) selfishly select the access network that minimizes their perceived selection cost. A method based on mathematical programming is proposed to find Nash equilibria and characterize their optimality under three cost functions, which are representative of different technological scenarios. System level simulations are then used to evaluate the actual throughput and fairness of the equilibrium points. The interaction among end users and network operators is then assessed through a two-stage multileader/multifollower game, where network operators (leaders) play in the first stage by properly setting the radio resources to maximize their users, and end users (followers) play in the second stage the aforementioned network selection game. The existence of exact and approximated subgame perfect Nash equilibria of the two-stage game is thoroughly assessed and numerical results are provided on the "quality" of such equilibria.
In REF the authors propose a study to capture the dynamics among end users and network operators in the processes of network selection and resource allocation.
14037736
Network Selection and Resource Allocation Games for Wireless Access Networks
{ "venue": "IEEE Transactions on Mobile Computing", "journal": "IEEE Transactions on Mobile Computing", "mag_field_of_study": [ "Computer Science" ] }
The process of opinion formation through synthesis and contrast of different viewpoints has been the subject of many studies in economics and social sciences. Today, this process manifests itself also in online social networks and social media. The key characteristic of successful promotion campaigns is that they take into consideration such opinion-formation dynamics in order to create a overall favorable opinion about a specific information item, such as a person, a product, or an idea. In this paper, we adopt a well-established model for social-opinion dynamics and formalize the campaigndesign problem as the problem of identifying a set of target individuals whose positive opinion about an information item will maximize the overall positive opinion for the item in the social network. We call this problem Campaign. We study the complexity of the Campaign problem, and design algorithms for solving it. Our experiments on real data demonstrate the efficiency and practical utility of our algorithms.
Gionis, Terzi, and Tsaparas REF study the problem of identifying a set of target individuals whose positive opinions about an information item will maximize the overall positive opinion for the item in the social network, from an algorithmic and experimental perspective.
15450775
Opinion Maximization in Social Networks
{ "venue": "Siam International Conference on Data Mining (SDM), 2013", "journal": null, "mag_field_of_study": [ "Computer Science", "Physics" ] }
Fog computing is emerging as a powerful and popular computing paradigm to perform IoT (Internet of Things) applications, which is an extension to the cloud computing paradigm to make it possible to execute the IoT applications in the network of edge. The IoT applications could choose fog or cloud computing nodes for responding to the resource requirements, and load balancing is one of the key factors to achieve resource efficiency and avoid bottlenecks, overload, and low load. However, it is still a challenge to realize the load balance for the computing nodes in the fog environment during the execution of IoT applications. In view of this challenge, a dynamic resource allocation method, named DRAM, for load balancing in fog environment is proposed in this paper. Technically, a system framework for fog computing and the load-balance analysis for various types of computing nodes are presented first. Then, a corresponding resource allocation method in the fog environment is designed through static resource allocation and dynamic service migration to achieve the load balance for the fog computing systems. Experimental evaluation and comparison analysis are conducted to validate the efficiency and effectiveness of DRAM.
In REF , Xu et al. proposed DRAM, a dynamic resource allocation method for load balancing in the fog environment, which achieves efficient task deployment and load balancing across fog computing nodes while reducing service delay.
19169452
Dynamic Resource Allocation for Load Balancing in Fog Environment
{ "venue": "Wireless Communications and Mobile Computing", "journal": "Wireless Communications and Mobile Computing", "mag_field_of_study": [ "Computer Science" ] }
Abstract-State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available. Region proposal methods typically rely on inexpensive features and economical inference schemes. Selective Search [4] , one of the most popular methods, greedily merges superpixels based on engineered low-level features. Yet when compared to efficient detection networks [2], Selective Search is an order of magnitude slower, at 2 seconds per image in a CPU implementation. EdgeBoxes [6] currently provides the best tradeoff between proposal quality and speed, at 0.2 seconds per image. Nevertheless, the region proposal step still consumes as much running time as the detection network. One may note that fast region-based CNNs take advantage of GPUs, while the region proposal methods used in research are implemented on the CPU, making such runtime comparisons inequitable. An obvious way to accelerate proposal computation is to re-implement it for the GPU. This may be an effective engineering solution, but re-implementation ignores the down-stream detection network and therefore misses important opportunities for sharing computation. In this paper, we show that an algorithmic change-computing proposals with a deep convolutional neural network-leads to an elegant and effective solution where proposal computation is nearly cost-free given the detection network's computation. To this end, we introduce novel Region Proposal Networks (RPNs) that share convolutional layers with state-of-the-art object detection networks [1], [2] . By sharing convolutions at test-time, the marginal cost for computing proposals is small (e.g., 10 ms per image). Our observation is that the convolutional feature maps used by region-based detectors, like Fast R-CNN, can also be used for generating region proposals. On top of these convolutional features, we construct an RPN by adding a few additional convolutional layers that simultaneously regress region bounds and objectness scores at each location on a regular grid. The RPN is thus a kind of fully convolutional network (FCN) [7] and can be trained end-to-end specifically for the task for generating detection proposals. RPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. 
In contrast to prevalent methods [1], [2], [8], [9] that use pyramids of images (Fig. 1a) or pyramids of filters (Fig. 1b), we introduce novel "anchor" boxes that serve as references at multiple scales and aspect ratios. Our scheme can be thought of as a pyramid of regression references (Fig. 1c), which avoids enumerating images or filters of multiple scales or aspect ratios. This model performs well when trained and tested using single-scale images and thus benefits running speed. To unify RPNs with Fast R-CNN [2] object detection networks, we propose a training scheme that alternates between fine-tuning for the region proposal task and fine-tuning for object detection, while keeping the proposals fixed.
Faster R-CNN REF advanced this pipeline by replacing the original Selective Search proposal step with a Region Proposal Network (RPN).
10328909
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
{ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Abstract. In this work, a system for recognizing activities in the home setting using a set of small and simple state-change sensors is introduced. The sensors are designed to be "tape on and forget" devices that can be quickly and ubiquitously installed in home environments. The proposed sensing system presents an alternative to sensors that are sometimes perceived as invasive, such as cameras and microphones. Unlike prior work, the system has been deployed in multiple residential environments with non-researcher occupants. Preliminary results on a small dataset show that it is possible to recognize activities of interest to medical professionals such as toileting, bathing, and grooming with detection accuracies ranging from 25% to 89% depending on the evaluation criteria used 1 .
Tapia et al. REF proposed a system for recognizing activities in the home environment.
6495041
Activity Recognition in the Home Using Simple and Ubiquitous Sensors
{ "venue": "Pervasive", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Classification of network traffic using port-based or payload-based analysis is becoming increasingly difficult with many peer-to-peer (P2P) applications using dynamic port numbers, masquerading techniques, and encryption to avoid detection. An alternative approach is to classify traffic by exploiting the distinctive characteristics of applications when they communicate on a network. We pursue this latter approach and demonstrate how cluster analysis can be used to effectively identify groups of traffic that are similar using only transport layer statistics. Our work considers two unsupervised clustering algorithms, namely K-Means and DBSCAN, that have previously not been used for network traffic classification. We evaluate these two algorithms and compare them to the previously used AutoClass algorithm, using empirical Internet traces. The experimental results show that both K-Means and DBSCAN work very well and much more quickly then AutoClass. Our results indicate that although DBSCAN has lower accuracy compared to K-Means and AutoClass, DBSCAN produces better clusters.
Erman et al. REF used the K-Means and DBSCAN algorithms to classify network traffic; their results show that K-Means runs faster while DBSCAN produces better clusters.
2120232
Traffic classification using clustering algorithms
{ "venue": "MineNet '06", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We consider the convex-concave saddle point problem $\min_x \max_y f(x) + y^{\top} A x - g(y)$ where f is smooth and convex and g is smooth and strongly convex. We prove that if the coupling matrix A has full column rank, the vanilla primal-dual gradient method can achieve linear convergence even if f is not strongly convex. Our result generalizes previous work which either requires f and g to be quadratic functions or requires proximal mappings for both f and g. We adopt a novel analysis technique that in each iteration uses a "ghost" update as a reference, and show that the iterates in the primal-dual gradient method converge to this "ghost" sequence. Using the same technique we further give an analysis for the primal-dual stochastic variance reduced gradient (SVRG) method for convex-concave saddle point problems with a finite-sum structure.
Further, REF shows that GDA achieves a linear convergence rate when g is convex and h is strongly convex.
3521793
Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Abstract. Timing analysis is a key step in the design of dependable real-time embedded systems. In this paper, we present GameTime, a toolkit for execution time analysis of software. GameTime is based on a combination of game-theoretic online learning and systematic testing using satisfiability modulo theories (SMT) solvers. In contrast with many existing tools for timing analysis, GameTime can be used for a range of tasks, including estimating worst-case execution time, predicting the distribution of execution times of a task, and finding timing-related bugs in programs. We describe key implementation details of GameTime and illustrate its usage through examples.
A timing analysis toolkit based on game-theoretic online learning was presented in REF.
8029850
GameTime: A toolkit for timing analysis of software
{ "venue": "in Proceedings of the 17th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
A key challenge in leveraging data augmentation for neural network training is choosing an effective augmentation policy from a large search space of candidate operations. Properly chosen augmentation policies can lead to significant generalization improvements; however, state-of-the-art approaches such as AutoAugment are computationally infeasible to run for the ordinary user. In this paper, we introduce a new data augmentation algorithm, Population Based Augmentation (PBA), which generates nonstationary augmentation policy schedules instead of a fixed augmentation policy. We show that PBA can match the performance of AutoAugment on CIFAR-10, CIFAR-100, and SVHN, with three orders of magnitude less overall compute. On CIFAR-10 we achieve a mean test error of 1.46%, which is a slight improvement upon the current state-of-the-art. The code for PBA is open source and is available at https://github.com/arcelien/pba.
Population Based Augmentation (PBA) REF, which is most closely related to our work, replaces a fixed augmentation policy with a dynamic augmentation policy schedule that evolves over the course of training.
153312991
Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }