Dataset columns:
  src         string (lengths 100 to 132k)
  tgt         string (lengths 10 to 710)
  paper_id    string (lengths 3 to 9)
  title       string (lengths 9 to 254)
  discipline  dict
Abstract-Equipping wireless nodes with multiple radios can significantly increase the capacity of wireless networks, by making these radios simultaneously transmit over multiple nonoverlapping channels. However, due to the limited number of radios and available orthogonal channels, designing efficient channel assignment and scheduling algorithms in such networks is a major challenge. In this paper, we present provably good distributed algorithms for simultaneous channel allocation of individual links and packet scheduling in Software-Defined Radio (SDR) wireless networks. Our distributed algorithms are very simple to implement, and do not require any coordination even among neighboring nodes. A novel access hash function or random oracle methodology is one of the key drivers of our results. With this access hash function, each radio can know the transmitters' decisions for links in its interference set for each time slot without introducing any extra communication overhead between them. Further, by utilizing the inductive-scheduling technique, each radio can also back off appropriately to avoid collisions. Extensive simulations demonstrate that our bounds are valid in practice.
Han et al. REF present provably good distributed algorithms for simultaneous channel allocation of individual links and packet scheduling in Software-Defined Radio (SDR) wireless networks.
987009
Distributed Strategies for Channel Allocation and Scheduling in Software-Defined Radio Networks
{ "venue": "IEEE INFOCOM 2009", "journal": "IEEE INFOCOM 2009", "mag_field_of_study": [ "Computer Science" ] }
Distant supervised relation extraction has been widely used to find novel relational facts from text. However, distant supervision is inevitably accompanied by the wrong labelling problem, and these noisy data will substantially hurt the performance of relation extraction. To alleviate this issue, we propose a sentence-level attention-based model for relation extraction. In this model, we employ convolutional neural networks to embed the semantics of sentences. Afterwards, we build sentence-level attention over multiple instances, which is expected to dynamically reduce the weights of those noisy instances. Experimental results on real-world datasets show that our model can make full use of all informative sentences and effectively reduce the influence of wrongly labelled instances. Our model achieves significant and consistent improvements on relation extraction as compared with baselines. The source code of this paper can be obtained from https://github.com/thunlp/NRE.
In 2016, Lin et al. REF constructed a sentence-level attention mechanism to dynamically reduce the weights of negative instances, and the P@avg value reached 72.2%.
397533
Neural Relation Extraction with Selective Attention over Instances
{ "venue": "ACL", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
Ioffe et al. observed that the change in the distributions of layers' inputs during the training of deep neural networks poses a serious problem because the layers need to adapt to the new distribution continuously REF ; this phenomenon was referred to as internal covariate shift.
5808102
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Daikon is an implementation of dynamic detection of likely invariants; that is, the Daikon invariant detector reports likely program invariants. An invariant is a property that holds at a certain point or points in a program; these are often used in assert statements, documentation, and formal specifications. Examples include being constant (x = a), non-zero (x ≠ 0), being in a range (a ≤ x ≤ b), linear relationships (y = ax + b), ordering (x ≤ y), functions from a library (x = fn(y)), containment (x ∈ y), sortedness (x is sorted), and many more. Users can extend Daikon to check for additional invariants. Dynamic invariant detection runs a program, observes the values that the program computes, and then reports properties that were true over the observed executions. Dynamic invariant detection is a machine learning technique that can be applied to arbitrary data. Daikon can detect invariants in C, C++, Java, and Perl programs, and in record-structured data sources; it is easy to extend Daikon to other applications. Invariants can be useful in program understanding and a host of other applications. Daikon's output has been used for generating test cases, predicting incompatibilities in component integration, automating theorem-proving, repairing inconsistent data structures, and checking the validity of data streams, among other tasks. Daikon is freely available in source and binary form, along with extensive documentation, at
Daikon is a tool for inferring likely invariants in C, C++, Java or Perl programs REF .
17620776
The Daikon system for dynamic detection of likely invariants
{ "venue": "Science of Computer Programming", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-In continuous integration, a tight integration of test case prioritization techniques and fault-localization techniques may both expose failures faster and locate faults more effectively. Statistical fault-localization techniques use the execution information collected during testing to locate faults. Executing a small fraction of a prioritized test suite reduces the cost of testing, and yet the subsequent fault localization may suffer. This paper presents the first empirical study to examine the impact of test case prioritization on the effectiveness of fault localization. Among many interesting empirical results, we find that coverage-based techniques and random ordering can be more effective than distribution-based techniques in supporting statistical fault localization. Furthermore, the integration of random ordering for test case prioritization and statistical fault localization can be effective in locating faults quickly and economically.
Our previous work REF studied the problem of how prioritization techniques affect fault localization techniques in a continuous integration environment.
16549991
How Well Do Test Case Prioritization Techniques Support Statistical Fault Localization
{ "venue": "2009 33rd Annual IEEE International Computer Software and Applications Conference", "journal": "2009 33rd Annual IEEE International Computer Software and Applications Conference", "mag_field_of_study": [ "Computer Science" ] }
Abstract In this paper, we propose a novel rigid motion segmentation algorithm called randomized voting (RV).
Recently, Jung et al. REF proposed a novel rigid motion segmentation algorithm based on randomized voting (RV).
10772442
Rigid Motion Segmentation Using Randomized Voting
{ "venue": "2014 IEEE Conference on Computer Vision and Pattern Recognition", "journal": "2014 IEEE Conference on Computer Vision and Pattern Recognition", "mag_field_of_study": [ "Computer Science" ] }
Abstract. Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they cannot capture extreme deformations and nonlinear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation that captures non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite limited training data, our trained model outperforms state-of-the-art face models with 50% lower reconstruction error, while using 75% fewer parameters. We show that, replacing the expression space of an existing state-of-the-art face model with our model, achieves a lower reconstruction error. Our data, model and code are available at http://coma.is.tue.mpg.de/.
Ranjan et al. REF introduce a convolutional mesh autoencoder to learn nonlinear variations in shape and expression.
50790278
Generating 3D faces using Convolutional Mesh Autoencoders
{ "venue": "European Conference on Computer Vision 2018", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
This paper studies sentiment analysis from the user-generated content on the Web. In particular, it focuses on mining opinions from comparative sentences, i.e., to determine which entities in a comparison are preferred by its author. A typical comparative sentence compares two or more entities. For example, the sentence, "the picture quality of Camera X is better than that of Camera Y", compares two entities "Camera X" and "Camera Y" with regard to their picture quality. Clearly, "Camera X" is the preferred entity. Existing research has studied the problem of extracting some key elements in a comparative sentence. However, there is still no study of mining opinions from comparative sentences, i.e., identifying preferred entities of the author. This paper studies this problem, and proposes a technique to solve the problem. Our experiments using comparative sentences from product reviews and forum posts show that the approach is effective.
REF deal with the problem of finding opinions from comparative sentences.
8985962
Mining Opinions in Comparative Sentences
{ "venue": "International Conference On Computational Linguistics", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. Motivated by the representation of biometric and multimedia objects, we consider the problem of hiding noisy point-sets using a secure sketch. A point-set X consists of s points from a d-dimensional discrete domain [0, N−1]^d. Under permissible noises, for every point (x_1, ..., x_d) ∈ X, each x_i may be perturbed by a value of at most δ. In addition, at most t points in X may be replaced by other points in [0, N−1]^d. Given an original X, we want to compute a secure sketch P. A known method constructs the sketch by adding a set of random points R, and the description of (X ∪ R) serves as part of the sketch. However, the dependencies among the random points are difficult to analyze, and there is no known non-trivial bound on the entropy loss. In this paper, we first give a general method to generate R and show that the entropy loss of (X ∪ R) is at most s(d log ∆ + d + 0.443), where ∆ = 2δ + 1. We next give improved schemes for d = 1, and special cases for d = 2. Such improvements are achieved by pre-rounding, and careful partition of the domains into cells. It is possible to make our sketch short, and avoid using randomness during construction. We also give a method in d = 1 to demonstrate that using the size of R as the security measure would be misleading.
Secure sketch schemes for point sets in REF are motivated by the typical similarity measure used for fingerprints, where each template consists of a set of points in 2-D space, and the similarity measure does not define a metric space.
2005681
Hiding secret points amidst chaff
{ "venue": "in Eurocrypt", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Abstract. In consideration of the ever-growing available multimedia data, annotating multimedia content automatically with feeling(s) expected to arise in users is a challenging problem. In order to solve this problem, the emerging research field of video affective analysis aims at exploiting human emotions. In this field where no dominant feature representation has emerged yet, choosing discriminative features for the effective representation of video segments is a key issue in designing video affective content analysis algorithms. Most existing affective content analysis methods either use low-level audio-visual features or generate hand-crafted higher level representations based on these low-level features. In this work, we propose to use deep learning methods, in particular convolutional neural networks (CNNs), in order to learn mid-level representations from automatically extracted low-level features. We exploit the audio and visual modality of videos by employing Mel-Frequency Cepstral Coefficients (MFCC) and color values in the RGB space in order to build higher level audio and visual representations. We use the learned representations for the affective classification of music video clips. We choose multi-class support vector machines (SVMs) for classifying video clips into four affective categories representing the four quadrants of the Valence-Arousal (VA) space. Results on a subset of the DEAP dataset (on 76 music video clips) show that a significant improvement is obtained when higher level representations are used instead of low-level features, for video affective content analysis.
Acar et al. REF built mid-level representations from Mel-Frequency Cepstral Coefficients and colour values using convolutional neural networks, revealing an improved performance on affective classification of video clips.
13233536
Understanding Affective Content of Music Videos through Learned Representations
{ "venue": "MMM", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
The aggregate motion of a flock of birds, a herd of land animals, or a school of fish is a beautiful and familiar part of the natural world. But this type of complex motion is rarely seen in computer animation. This paper explores an approach based on simulation as an alternative to scripting the paths of each bird individually. The simulated flock is an elaboration of a particle system, with the simulated birds being the particles. The aggregate motion of the simulated flock is created by a distributed behavioral model much like that at work in a natural flock; the birds choose their own course. Each simulated bird is implemented as an independent actor that navigates according to its local perception of the dynamic environment, the laws of simulated physics that rule its motion, and a set of behaviors programmed into it by the "animator." The aggregate motion of the simulated flock is the result of the dense interaction of the relatively simple behaviors of the individual simulated birds.
Reynolds' early work explores an approach to simulating bird flocking by creating a distributed behavioral model that results in artificial agent behavior much like natural flocking REF .
546350
Flocks, herds and schools: A distributed behavioral model
{ "venue": "SIGGRAPH '87", "journal": null, "mag_field_of_study": [ "Computer Science", "Geography" ] }
In many applications, local or remote sensors send in streams of data, and the system needs to monitor the streams to discover relevant events/patterns and deliver instant reaction correspondingly. An important scenario is that the incoming stream is a continually appended time series, and the patterns are time series in a database. At each time when a new value arrives (called a time position), the system needs to find, from the database, the nearest or near neighbors of the incoming time series up to the time position. This paper attacks the problem by using Fast Fourier Transform (FFT) to efficiently find the cross correlations of time series, which yields, in a batch mode, the nearest and near neighbors of the incoming time series at many time positions. To take advantage of this batch processing in achieving fast response time, this paper uses prediction methods to predict future values. FFT is used to compute the cross correlations of the predicted series (with the values that have already arrived) and the database patterns, and to obtain predicted distances between the incoming time series at many future time positions and the database patterns. When the actual data value arrives, the prediction error together with the predicted distances is used to filter out patterns that are not possible to be the nearest or near neighbors, which provides fast responses. Experiments show that with reasonable prediction errors, the performance gain is significant.
REF uses prediction of future values to take advantage of FFT-based batch processing when evaluating similarity queries over a streaming time series.
946102
Continually evaluating similarity-based pattern queries on a streaming time series
{ "venue": "SIGMOD '02", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Background: Massive text mining of the biological literature holds great promise of relating disparate information and discovering new knowledge. However, disambiguation of gene symbols is a major bottleneck. We developed a simple thesaurus-based disambiguation algorithm that can operate with very little training data. The thesaurus comprises the information from five human genetic databases and MeSH. The extent of the homonym problem for human gene symbols is shown to be substantial (33% of the genes in our combined thesaurus had one or more ambiguous symbols), not only because one symbol can refer to multiple genes, but also because a gene symbol can have many non-gene meanings. A test set of 52,529 Medline abstracts, containing 690 ambiguous human gene symbols taken from OMIM, was automatically generated. Overall accuracy of the disambiguation algorithm was up to 92.7% on the test set. The ambiguity of human gene symbols is substantial, not only because one symbol may denote multiple genes but particularly because many symbols have other, non-gene meanings. The proposed disambiguation approach resolves most ambiguities in our test set with high accuracy, including the important gene/not a gene decisions. The algorithm is fast and scalable, enabling gene-symbol disambiguation in massive text mining applications.
REF achieve 92.5% accuracy on human gene symbols.
632308
Thesaurus-based disambiguation of gene symbols
{ "venue": "BMC Bioinformatics", "journal": "BMC Bioinformatics", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Abstract. The choice of the kernel function is crucial to most applications of support vector machines. In this paper, however, we show that in the case of text classification, term-frequency transformations have a larger impact on the performance of SVM than the kernel itself. We discuss the role of importance-weights (e.g. document frequency and redundancy), which is not yet fully understood in the light of model complexity and calculation cost, and we show that time consuming lemmatization or stemming can be avoided even when classifying a highly inflectional language like German.
However, Leopold et al. REF show that, in the case of text classification, term-frequency transformations have a larger impact on the performance of the SVM than the kernel itself.
37723864
Text Categorization with Support Vector Machines. How to Represent Texts in Input Space?
{ "venue": "Machine Learning", "journal": "Machine Learning", "mag_field_of_study": [ "Computer Science" ] }
Abstract: Bridge monitoring and maintenance is an expensive yet essential task in maintaining a safe national transportation infrastructure. Traditional monitoring methods use visual inspection of bridges on a regular basis and often require inspectors to travel to the bridge of concern and determine the deterioration level of the bridge. Automation of this process may result in great monetary savings and can lead to more frequent inspection cycles. One aspect of this automation is the detection of cracks and deterioration of a bridge. This paper provides a comparison of the effectiveness of four crack-detection techniques: fast Haar transform (FHT), fast Fourier transform, Sobel, and Canny. These imaging edge-detection algorithms were implemented in MatLab and simulated using a sample of 50 concrete bridge images (25 with cracks and 25 without). The results show that the FHT was significantly more reliable than the other three edge-detection techniques in identifying cracks.
Four methods for the detection of cracks in concrete bridges are compared in REF : Sobel and Canny edge detectors, the Fourier transform, and the Haar wavelet transform.
109838398
Analysis of Edge-Detection Techniques for Crack Identification in Bridges
{ "venue": null, "journal": "Journal of Computing in Civil Engineering", "mag_field_of_study": [ "Engineering" ] }
Abstract-Sharing live multimedia content is becoming increasingly popular among mobile users. In this article, we study the problem of optimizing video quality in such a scenario using scalable video coding (SVC) and chunked video content. We consider using only standard stateless HTTP servers that do not need to perform additional processing of the video content. Our key contribution is to provide close to optimal algorithms for scheduling video chunk upload for multiple clients having different viewing delays. Given such a set of clients, the problem is to decide which chunks to upload and in which order to upload them so that the quality-delay tradeoff can be optimally balanced. We show by means of simulations that the proposed algorithms can achieve notably better performance than naive solutions in practical cases. Especially the heuristic-based greedy algorithm is a good candidate for deployment on mobile devices because it is not computationally intensive but it still delivers in most cases on-par video quality compared to the more complex local optimization algorithm. We also show that using shorter video segments and being able to predict bandwidth and video chunk properties improve the delivered video quality in certain cases.
Siekkinen et al. "provide close to optimal algorithms for scheduling video chunk upload for multiple clients having different viewing delays" REF .
15506288
Optimized Upload Strategies for Live Scalable Video Transmission from Mobile Devices
{ "venue": "IEEE Transactions on Mobile Computing", "journal": "IEEE Transactions on Mobile Computing", "mag_field_of_study": [ "Computer Science" ] }
Constructive induction is the process of changing the representation of examples by creating new attributes from existing attributes. In classification, the goal of constructive induction is to find a representation that facilitates learning a concept description by a particular learning system. Typically, the new attributes are Boolean or arithmetic combinations of existing attributes and the learning algorithms used are decision trees or rule learners. We describe the construction of new attributes that are the Cartesian product of existing attributes. We consider the effects of this operator on a Bayesian classifier and a nearest neighbor algorithm.
BSEJ REF is a method of constructing new nominal attributes using Cartesian products of existing nominal attributes.
6320149
Constructive Induction of Cartesian Product Attributes
{ "venue": "Information, Statistics and Induction in Science", "journal": null, "mag_field_of_study": [ "Mathematics" ] }
Abstract-Cooperative localization (also known as sensor network localization) using received signal strength (RSS) measurements when the source transmit powers are different and unknown is investigated. Previous studies were based on the assumption that the transmit powers of source nodes are the same and perfectly known which is not practical. In this paper, the source transmit powers are considered as nuisance parameters and estimated along with the source locations. The corresponding Cramér-Rao lower bound (CRLB) of the problem is derived. To find the maximum likelihood (ML) estimator, it is necessary to solve a nonlinear and nonconvex optimization problem, which is computationally complex. To avoid the difficulty in solving the ML estimator, we derive a novel semidefinite programming (SDP) relaxation technique by converting the ML minimization problem into a convex problem which can be solved efficiently. The algorithm requires only an estimate of the path loss exponent (PLE). We initially assume that perfect knowledge of the PLE is available, but we then examine the effect of imperfect knowledge of the PLE on the proposed SDP algorithm. The complexity analyses of the proposed algorithms are also studied in detail. Computer simulations showing the remarkable performance of the proposed SDP algorithm are presented.
Furthermore, in REF , the authors examined the effect of imperfect knowledge of the PLE on the performance of the SDP algorithm and used an iterative procedure to solve the problem when the transmit power P_T and the PLE are simultaneously unknown.
16591113
Cooperative Received Signal Strength-Based Sensor Localization With Unknown Transmit Powers
{ "venue": "IEEE Transactions on Signal Processing", "journal": "IEEE Transactions on Signal Processing", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract-People detection is a key issue for robots and intelligent systems sharing a space with people. Previous works have used cameras and 2D or 3D range finders for this task. In this paper, we present a novel people detection approach for RGB-D data. We take inspiration from the Histogram of Oriented Gradients (HOG) detector to design a robust method to detect people in dense depth data, called Histogram of Oriented Depths (HOD). HOD locally encodes the direction of depth changes and relies on a depth-informed scale-space search that leads to a 3-fold acceleration of the detection process. We then propose Combo-HOD, an RGB-D detector that probabilistically combines HOD and HOG. The experiments include a comprehensive comparison with several alternative detection approaches including visual HOG, several variants of HOD, a geometric person detector for 3D point clouds, and a Haar-based AdaBoost detector. With an equal error rate of 85% in a range up to 8m, the results demonstrate the robustness of HOD and Combo-HOD on a real-world data set collected with a Kinect sensor in a populated indoor environment.
Spinello and Arras REF proposed a new people detection algorithm called Histogram of Oriented Depths (HOD), inspired by HOG features but using depth gradients instead.
9394474
People detection in RGB-D data
{ "venue": "2011 IEEE/RSJ International Conference on Intelligent Robots and Systems", "journal": "2011 IEEE/RSJ International Conference on Intelligent Robots and Systems", "mag_field_of_study": [ "Computer Science" ] }
Abstract. Exploring an unknown environment with multiple robots requires an efficient coordination method to minimize the total duration. A standard method to discover new areas is to assign frontiers (boundaries between unexplored and explored accessible areas) to robots. In this context, the frontier allocation method is paramount. This paper introduces a decentralized and computationally efficient frontier allocation method favoring a well-balanced spatial distribution of robots in the environment. For this purpose, each robot evaluates its relative rank among the other robots in terms of travel distance to each frontier. Accordingly, each robot is allocated to the frontier for which it has the lowest rank. To evaluate this criterion, a wavefront propagation is computed from each frontier, giving an interesting alternative to path planning from robots to frontiers. Comparisons with existing approaches in computerized simulation and on real robots demonstrated the validity and efficiency of our algorithm.
In REF , the proposed decentralized approach allocates frontier points based on a rank among teammates, in terms of travel distance to each frontier, to obtain a well balanced spatial distribution of robots in the environment.
14152319
MinPos : A Novel Frontier Allocation Algorithm for Multirobot Exploration
{ "venue": "in \"ICIRA - 5th International Conference on Intelligent Robotics and Applications - 2012", "journal": null, "mag_field_of_study": [ "Engineering", "Computer Science" ] }
We introduce a novel scheme to train binary convolutional neural networks (CNNs) - CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduces memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations.
In addition, some networks as in REF use the linear combination of binary values to approximate the full-precision weights and activation values.
10533533
Towards Accurate Binary Convolutional Neural Network
{ "venue": "NIPS 2017", "journal": null, "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract. With the introduction of Java 5.0 the type system has been extended by parameterized types, type variables, type terms, and wildcards. As a result very complex types can arise. The term Vector<? extends Vector<AbstractList<Integer>>> is for example a correct type in Java 5.0. In this paper we present a type unification algorithm for Java 5.0 type terms. The algorithm unifies type terms which are in a subtype relationship. For this we define Java 5.0 type terms and their subtyping relation formally. As Java 5.0 allows wildcards as instances of generic types, the subtyping ordering contains infinite chains. We show that the type unification is still finitary. We give a type unification algorithm, which calculates the finite set of general unifiers.
The type unification algorithm presented by REF can be used for Java type inference.
1259075
Java type unification with wildcards
{ "venue": "In Proceedings of 17th International Conference on Applications of Declarative Programming and Knowledge Management and 21st Workshop on (Constraint) Logic Programming", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Abstract-This paper addresses the task of designing a modular neural network architecture that jointly solves different tasks. As an example we use the tasks of depth estimation and semantic segmentation given a single RGB image. The main focus of this work is to analyze the cross-modality influence between depth and semantic prediction maps on their joint refinement. While most previous works solely focus on measuring improvements in accuracy, we propose a way to quantify the cross-modality influence. We show that there is a relationship between final accuracy and cross-modality influence, although not a simple linear one. Hence a larger cross-modality influence does not necessarily translate into an improved accuracy. We find that a beneficial balance between the cross-modality influences can be achieved by network architecture and conjecture that this relationship can be utilized to understand different network design choices. Towards this end we propose a Convolutional Neural Network (CNN) architecture that fuses the state-of-the-art results for depth estimation and semantic labeling. By balancing the cross-modality influences between depth and semantic prediction, we achieve improved results for both tasks using the NYU-Depth v2 benchmark.
In work REF , the authors analyzed the cross-modality influences between semantic segmentation and depth prediction and then designed a network architecture to balance the cross-modality influences and achieved improved performances.
6701642
Analyzing Modular CNN Architectures for Joint Depth Prediction and Semantic Segmentation
{ "venue": "2017 IEEE International Conference on Robotics and Automation (ICRA)", "journal": "2017 IEEE International Conference on Robotics and Automation (ICRA)", "mag_field_of_study": [ "Computer Science" ] }
We describe the design of a system of compact, wireless sensor modules meant to capture expressive motion when worn at the wrists and ankles of a dancer. The sensors form a high-speed RF network geared toward real-time data acquisition from multiple devices simultaneously, enabling a small dance ensemble to become a collective interface for music control. Each sensor node includes a 6-axis inertial measurement unit (IMU) comprised of three orthogonal gyroscopes and accelerometers in order to capture local dynamics, as well as a capacitive sensor to measure close range node-to-node proximity. The nodes may also be augmented with other digital or analog sensors. This paper describes application goals, presents the prototype hardware design, introduces concepts for feature extraction and interpretation, and discusses early test results.
The Sensemble system REF is meant to capture the expressive motion of a dance ensemble.
2839409
Sensemble: A Wireless, Compact, Multi-User Sensor System for Interactive Dance
{ "venue": "NIME", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We explore the problem of budgeted machine learning, in which the learning algorithm has free access to the training examples' labels but has to pay for each attribute that is specified. This learning model is appropriate in many areas, including medical applications. We present new algorithms for choosing which attributes to purchase of which examples in the budgeted learning model based on algorithms for the multi-armed bandit problem. All of our approaches outperformed the current state of the art. Furthermore, we present a new means for selecting an example to purchase after the attribute is selected, instead of selecting an example uniformly at random, which is typically done. Our new example selection method improved performance of all the algorithms we tested, both ours and those in the literature.
REF used techniques from the multi-armed bandit problem to decide which attributes of which examples to purchase in the budgeted learning setting.
11782173
Bandit-Based Algorithms for Budgeted Learning
{ "venue": "Seventh IEEE International Conference on Data Mining (ICDM 2007)", "journal": "Seventh IEEE International Conference on Data Mining (ICDM 2007)", "mag_field_of_study": [ "Computer Science" ] }
The vehicular ad hoc network (VANET) is an essential technology that enables the deployment of the intelligent transportation system (ITS), which improves the traffic safety and efficiency. For the efficient message delivery in VANETs, it is desirable to provide a reliable and stable VANET routing protocol. However, VANET routing is challenging since the VANET is fundamentally different from conventional wireless ad hoc networks; vehicles move fast, and the network topology changes rapidly, causing intermittent and dynamic link connectivity. In this paper, we propose a VANET routing protocol that works based on the real-time road vehicle density information in order to provide fast and reliable message delivery so that it can adapt to the dynamic vehicular urban environment. In the proposed mechanism, each vehicle computes the real-time traffic density of the road to which it belongs from the beacon messages sent by vehicles on the opposite lane and its road information table. Using the road traffic density information as a routing metric, each vehicle establishes a reliable route for packet delivery. We compare our proposed mechanism with the well-known GPSR via NS-2 based simulations and show that our mechanism outperforms GPSR in terms of both delivery success rate and routing overhead.
The stable routing protocol for vehicles in urban environments REF was proposed as a VANET routing protocol that considers the real-time road vehicle density information.
38011753
A Stable Routing Protocol for Vehicles in Urban Environments
{ "venue": null, "journal": "International Journal of Distributed Sensor Networks", "mag_field_of_study": [ "Computer Science" ] }
Localization is a fundamental operation in mobile and self-configuring networks such as sensor networks and mobile ad hoc networks. For example, sensor location is often critical for data interpretation. Existing research focuses on localization mechanisms: algorithms and infrastructure designed to allow the sensors to determine their location. In a mobile environment, the underlying localization mechanism must be invoked repeatedly to maintain accurate location information. We propose and investigate adaptive and predictive protocols that control the frequency of localization based on sensor mobility behavior to reduce the energy requirements for localization while bounding the localization error. In addition, we evaluate the energy-accuracy tradeoffs. Our results indicate that the proposed protocols reduce the localization energy significantly without sacrificing accuracy.
Tilak et al. REF study the time interval for broadcasting of the mobile beacon and propose adaptive and predictive protocols that control the frequency of localization based on sensor mobility behavior.
17355502
Dynamic localization control for mobile sensor networks
{ "venue": "PCCC 2005. 24th IEEE International Performance, Computing, and Communications Conference, 2005.", "journal": "PCCC 2005. 24th IEEE International Performance, Computing, and Communications Conference, 2005.", "mag_field_of_study": [ "Computer Science" ] }
Abstract-With the price of wireless sensor technologies diminishing rapidly we can expect large numbers of autonomous sensor networks being deployed in the near future. These sensor networks will typically not remain isolated but the need of interconnecting them on the network level to enable integrated data processing will arise, thus realizing the vision of a global "Sensor Internet." This requires a flexible middleware layer which abstracts from the underlying, heterogeneous sensor network technologies and supports fast and simple deployment and addition of new platforms, facilitates efficient distributed query processing and combination of sensor data, provides support for sensor mobility, and enables the dynamic adaption of the system configuration during runtime with minimal (zero-programming) effort. This paper describes the Global Sensor Networks (GSN) middleware which addresses these goals. We present GSN's conceptual model, abstractions, and architecture, and demonstrate the efficiency of the implementation through experiments with typical high-load application profiles. The GSN implementation is available from http://gsn.sourceforge.net/.
GSN REF is a platform aiming at providing flexible middleware to address the challenges of sensor data integration and distributed query processing.
5664736
Infrastructure for Data Processing in Large-Scale Interconnected Sensor Networks
{ "venue": "2007 International Conference on Mobile Data Management", "journal": "2007 International Conference on Mobile Data Management", "mag_field_of_study": [ "Computer Science" ] }
A low-rank transformation learning framework for subspace clustering and classification is here proposed. Many high-dimensional data, such as face images and motion sequences, approximately lie in a union of low-dimensional subspaces. The corresponding subspace clustering problem has been extensively studied in the literature to partition such high-dimensional data into clusters corresponding to their underlying low-dimensional subspaces. However, low-dimensional intrinsic structures are often violated for real-world observations, as they can be corrupted by errors or deviate from ideal models. We propose to address this by learning a linear transformation on subspaces using nuclear norm as the modeling and optimization criteria. The learned linear transformation restores a low-rank structure for data from the same subspace, and, at the same time, forces a maximally separated structure for data from different subspaces. In this way, we reduce variations within the subspaces, and increase separation between the subspaces for a more robust subspace clustering. This proposed learned robust subspace clustering framework significantly enhances the performance of existing subspace clustering methods. Basic theoretical results here presented help to further support the underlying framework. To exploit the low-rank structures of the transformed subspaces, we further introduce a fast subspace clustering technique, which efficiently combines robust PCA with sparse modeling. When class labels are present at the training stage, we show this low-rank transformation framework also significantly enhances classification performance. Extensive experiments using public datasets are presented, showing that the proposed approach significantly outperforms state-of-the-art methods for subspace clustering and classification. The learned low-cost transform is also applicable to other classification frameworks.
In REF , a linear transformation on subspaces is learned using the nuclear norm as the optimization criterion, restoring a low-rank structure for data from the same subspace while maximally separating data from different subspaces for more robust clustering and classification.
287318
Learning Transformations for Clustering and Classification
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Abstract-This paper presents a method for the self-calibration of varying internal camera parameters that is based on quasi-affine reconstruction. In a stratified approach to self-calibration, a projective reconstruction is obtained first and this is successively refined, first to an affine and then to a Euclidean reconstruction. It has been observed that the difficult step is to obtain the affine reconstruction, or equivalently to locate the plane at infinity in the projective coordinate frame. So, a quasi-affine reconstruction is obtained first from image sequences; we can then obtain the infinite plane in the quasi-affine space, which is equivalent to an affine reconstruction. The infinite homography matrix can then be calculated from the affine reconstruction, and the infinite homography matrix and constraints on the image of the absolute conic are used to calculate the camera internal parameter matrix, and further to achieve the metric reconstruction. This method does not require special scene constraints (such as parallelism or perpendicularity) or camera movement information (such as pure translation or orthogonal movement) to achieve self-calibration. The theoretical analysis and experiments with real data demonstrate that this self-calibration method is feasible, stable and robust.
The authors of the article REF use some constraints on the image of the absolute conic.
2374284
Self-calibration of Varying Internal Camera Parameters Algorithm Based on Quasi-affine Reconstruction
{ "venue": "JCP", "journal": "JCP", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Community detection algorithms are fundamental tools that allow us to uncover organizational principles in networks. When detecting communities, there are two possible sources of information one can use: the network structure, and the features and attributes of nodes. Even though communities form around nodes that have common edges and common attributes, typically, algorithms have only focused on one of these two data modalities: community detection algorithms traditionally focus only on the network structure, while clustering algorithms mostly consider only node attributes. In this paper, we develop Communities from Edge Structure and Node Attributes (CESNA), an accurate and scalable algorithm for detecting overlapping communities in networks with node attributes. CESNA statistically models the interaction between the network structure and the node attributes, which leads to more accurate community detection as well as improved robustness in the presence of noise in the network structure. CESNA has a linear runtime in the network size and is able to process networks an order of magnitude larger than comparable approaches. Last, CESNA also helps with the interpretation of detected communities by finding relevant node attributes for each community.
REF also combine the graph node attributes with the graph edge structures for community discovery.
2760873
Community Detection in Networks with Node Attributes
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Physics" ] }
Background: Cincinnati Children's Hospital Medical Center (CCHMC) has built the initial Natural Language Processing (NLP) component to extract medications with their corresponding medical conditions (Indications, Contraindications, Overdosage, and Adverse Reactions) as triples of medication-related information ([(1) drug name]-[(2) medical condition]-[(3) LOINC section header]) for an intelligent database system, in order to improve patient safety and the quality of health care. The Food and Drug Administration's (FDA) drug labels are used to demonstrate the feasibility of building the triples as an intelligent database system task. Methods: This paper discusses a hybrid NLP system, called AutoMCExtractor, to collect medical conditions (including disease/disorder and sign/symptom) from drug labels published by the FDA. Altogether, 6,611 medical conditions in a manually-annotated gold standard were used for the system evaluation. The pre-processing step extracted the plain text from the XML file and detected eight related LOINC sections (e.g. Adverse Reactions, Warnings and Precautions) for medical condition extraction. Conditional Random Fields (CRF) classifiers, trained on token, linguistic, and semantic features, were then used for medical condition extraction. Lastly, dictionary-based postprocessing corrected boundary-detection errors of the CRF step. We evaluated the AutoMCExtractor on manually-annotated FDA drug labels and report the results on both token and span levels. Results: Precision, recall, and F-measure were 0.90, 0.81, and 0.85, respectively, for the span level exact match; for the token-level evaluation, precision, recall, and F-measure were 0.92, 0.73, and 0.82, respectively. Conclusions: The results demonstrate that (1) medical conditions can be extracted from FDA drug labels with high performance; and (2) it is feasible to develop a framework for an intelligent database system.
A hybrid NLP system, AutoMCExtractor, uses conditional random fields and post-processing rules to extract medical conditions from SPLs and build triplets in the form of ([drug name]-[medical condition]-[LOINC section header]) REF .
7700808
Mining FDA drug labels for medical conditions
{ "venue": "BMC Medical Informatics and Decision Making", "journal": "BMC Medical Informatics and Decision Making", "mag_field_of_study": [ "Medicine", "Computer Science" ] }
Twitter is a microblogging website, where users can post messages in very short texts called Tweets. Tweets contain user opinion and sentiment towards an object or person. This sentiment information is very useful in various aspects for business and governments. In this paper, we present a method which performs the task of tweet sentiment identification using a corpus of pre-annotated tweets. We present a sentiment scoring function which uses prior information to classify (binary classification) and weight various sentiment bearing words/phrases in tweets. Using this scoring function we achieve classification accuracy of 87% on the Stanford dataset and 88% on the Mejaj dataset. Using a supervised machine learning approach, we achieve classification accuracy of 88% on the Stanford dataset.
REF ) presents a simple sentiment scoring function which uses prior information to classify and weight various sentiment bearing words/phrases in tweets.
17511753
Mining Sentiments from Tweets
{ "venue": "Proceedings of the 3rd Workshop in Computational Approaches to Subjectivity and Sentiment Analysis", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We implement a strategy for aligning two protein-protein interaction networks that combines interaction topology and protein sequence similarity to identify conserved interaction pathways and complexes. Using this approach we show that the protein-protein interaction networks of two distantly related species, Saccharomyces cerevisiae and Helicobacter pylori, harbor a large complement of evolutionarily conserved pathways, and that a large number of pathways appears to have duplicated and specialized within yeast. Analysis of these findings reveals many well characterized interaction pathways as well as many unanticipated pathways, the significance of which is reinforced by their presence in the networks of both species. Evolution is driven by biological variation at many levels. Mutations and rearrangements in genomic DNA lead to changes in protein structures, abundances, and modification states. Variations at the protein level, in turn, impact how proteins interact with one another, with DNA, and with small molecules to form signaling, regulatory, and metabolic networks. Changes in network organization have sweeping implications for cellular function, tissue-level responses, and the behavior and morphology of whole organisms. Gene and protein sequences have long received the most attention as metrics for evolutionary change, both because they represent a fundamental level of biological variation and because they are readily available through automated sequencing technology. However, recent technological advances also enable us to characterize networks of protein interactions. Protein interactions are crucial to cellular function both in assembling protein complexes and in signal transduction cascades. Among the most direct and systematic methods for measuring protein interactions are coimmunoprecipitation (1) and the two-hybrid system (2), which have defined large protein-protein interaction networks for organisms including Saccharomyces cerevisiae (3-5), Helicobacter pylori (6), and Caenorhabditis elegans (7). Although the quality of data from these experiments has been mixed, pooling of multiple studies and integration with other data types such as gene expression have been used to reduce the number of false-positive interactions (8). The rapid growth of protein network information raises a host of new questions in evolutionary and comparative biology. Given that protein sequences and structures are conserved in and among species, are networks of protein interactions conserved as well? Is there some minimal set of interaction pathways required for all species? Can we measure evolutionary distance at the level of network connectivity rather than at the level of DNA or protein sequence? Mounting evidence suggests that conserved protein interaction pathways indeed exist and may be ubiquitous: For example, proteins in the same pathway are typically present or absent in a genome as a group (9), and several hundred protein-protein interactions in the yeast network have also been identified for the corresponding protein orthologs in worms (10). To explore interspecies pathway conservation on a global scale, we performed a series of whole-network comparisons using the protein-protein interaction networks of the budding yeast S. cerevisiae and the bacterial pathogen H. pylori. Comparative network analysis has proven powerful in a number of related domains including metabolic pathway analysis (11-14), motif finding (15), and correlation of biological networks with gene expression (16).
Here we systematically search for and prioritize conserved interaction pathways in yeast vs. bacteria, yeast vs. yeast, and yeast vs. specific "queries" formulated to uncover homologous mitogen-activated protein kinase (MAPK) signaling and ubiquitin ligation machinery. Methods. We developed an efficient computational procedure for aligning two protein interaction networks to identify their conserved interaction pathways.§ This procedure, which we named PATHBLAST because of its conceptual similarity to sequence alignment algorithms such as BLAST (17), searches for high-scoring pathway alignments involving two paths, one from each network, in which proteins of the first path ⟨A, B, C, D, ...⟩ are paired with putative homologs occurring in the same order in the second path ⟨a, b, c, d, ...⟩ (Fig. 1a). Evolutionary variations and experimental errors in pathway structure are accommodated by allowing "gaps" and "mismatches" (see also ref. 14). A gap occurs when a protein interaction in one path skips over a protein in the other, whereas a mismatch occurs when aligned proteins do not share sequence similarity. Because of space limitations, only abbreviated methods are given in the following sections; full descriptions are available in Supporting Materials and Methods and Figs. 5 and 6, which are published as supporting information on the PNAS web site, www.pnas.org.¶ Global Alignment and Scoring. To perform the alignment, the two networks are combined into a global alignment graph (Fig. 1b) in which each vertex represents a pair of proteins (one from each network) having at least weak sequence similarity (BLAST E value ≤ 10) and each edge represents a conserved interaction, gap, or mismatch. A path through this graph represents a pathway alignment between the two networks. We formulate a log probability score S(P) that decomposes over the vertices v and edges e of a path P through the global alignment graph. Abbreviation: MAPK, mitogen-activated protein kinase. § The term "pathway" has been used broadly within various molecular biological contexts to refer to biochemical reaction chains, signal transduction cascades, gene regulatory systems, or other sequences of biomolecular events. Here a pathway refers to a sequence of protein-protein interactions forming a connected path in the network. ¶ We have also explored methods for identifying conserved subnetworks as opposed to linear paths (see Fig. 7, which is published as supporting information on the PNAS web site); choosing which approach is most desirable remains an open problem and depends on issues of computational efficiency and whether protein complexes or sequential pathways such as signal transduction or regulatory cascades are of highest interest.
When PPI data are noisy, the alignment can allow gaps and mismatches to handle variations REF .
14326259
Conserved pathways within bacteria and yeast as revealed by global protein network alignment
{ "venue": "Proceedings of the National Academy of Sciences of the United States of America", "journal": "Proceedings of the National Academy of Sciences of the United States of America", "mag_field_of_study": [ "Biology", "Medicine" ] }
We propose Bilingually-constrained Recursive Auto-encoders (BRAE) to learn semantic phrase embeddings (compact vector representations for phrases), which can distinguish the phrases with different semantic meanings. The BRAE is trained in a way that minimizes the semantic distance of translation equivalents and maximizes the semantic distance of nontranslation pairs simultaneously. After training, the model learns how to embed each phrase semantically in two languages and also learns how to transform semantic embedding space in one language to the other. We evaluate our proposed method on two end-to-end SMT tasks (phrase table pruning and decoding with phrasal semantic similarities) which need to measure semantic similarity between a source phrase and its translation candidates. Extensive experiments show that the BRAE is remarkably effective in these two tasks.
Bilingually-constrained phrase embeddings were developed in REF .
18380505
Bilingually-constrained Phrase Embeddings for Machine Translation
{ "venue": "ACL", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available. Region proposal methods typically rely on inexpensive features and economical inference schemes. Selective Search [4] , one of the most popular methods, greedily merges superpixels based on engineered low-level features. Yet when compared to efficient detection networks [2], Selective Search is an order of magnitude slower, at 2 seconds per image in a CPU implementation. EdgeBoxes [6] currently provides the best tradeoff between proposal quality and speed, at 0.2 seconds per image. Nevertheless, the region proposal step still consumes as much running time as the detection network. One may note that fast region-based CNNs take advantage of GPUs, while the region proposal methods used in research are implemented on the CPU, making such runtime comparisons inequitable. An obvious way to accelerate proposal computation is to re-implement it for the GPU. This may be an effective engineering solution, but re-implementation ignores the down-stream detection network and therefore misses important opportunities for sharing computation. In this paper, we show that an algorithmic change-computing proposals with a deep convolutional neural network-leads to an elegant and effective solution where proposal computation is nearly cost-free given the detection network's computation. To this end, we introduce novel Region Proposal Networks (RPNs) that share convolutional layers with state-of-the-art object detection networks [1], [2] . By sharing convolutions at test-time, the marginal cost for computing proposals is small (e.g., 10 ms per image). Our observation is that the convolutional feature maps used by region-based detectors, like Fast R-CNN, can also be used for generating region proposals. On top of these convolutional features, we construct an RPN by adding a few additional convolutional layers that simultaneously regress region bounds and objectness scores at each location on a regular grid. The RPN is thus a kind of fully convolutional network (FCN) [7] and can be trained end-to-end specifically for the task for generating detection proposals. RPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. 
In contrast to prevalent methods [1], [2], [8], [9] that use pyramids of images (Fig. 1a) or pyramids of filters (Fig. 1b), we introduce novel "anchor" boxes that serve as references at multiple scales and aspect ratios. Our scheme can be thought of as a pyramid of regression references (Fig. 1c), which avoids enumerating images or filters of multiple scales or aspect ratios. This model performs well when trained and tested using single-scale images and thus benefits running speed. To unify RPNs with Fast R-CNN [2] object detection networks, we propose a training scheme that alternates between fine-tuning for the region proposal task and fine-tuning for object detection, while keeping the proposals fixed.
Further improvements led to state-of-the-art CNNs with better performance through the introduction of region proposals, yielding, among others, the Faster Region-based Convolutional Neural Network (Faster R-CNN; REF ).
10328909
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
{ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Performance prediction across platforms is increasingly important as developers can choose from a wide range of execution platforms. The main challenge remains to perform accurate predictions at a low cost across different architectures. In this paper, we derive an affordable method for cross-platform performance translation based on relative performance between two platforms. We argue that relative performance can be observed without running a parallel application in full. We show that it suffices to observe very short partial executions of an application since most parallel codes are iterative and behave in a predictable manner after a minimal startup period. This novel prediction approach is observation-based. It does not require program modeling, code analysis, or architectural simulation. Our performance results using real platforms and production codes demonstrate that prediction derived from partial executions can yield high accuracy at a low cost. We also assess the limitations of our model and identify future research directions on observation-based performance prediction.
Yang et al. REF profiled partial execution of an application on different platforms to infer relative full-application performance.
6316860
Cross-Platform Performance Prediction of Parallel Applications Using Partial Execution
{ "venue": "ACM/IEEE SC 2005 Conference (SC'05)", "journal": "ACM/IEEE SC 2005 Conference (SC'05)", "mag_field_of_study": [ "Computer Science" ] }
We propose a novel monocular visual odometry (VO) system called UnDeepVO in this paper. UnDeepVO is able to estimate the 6-DoF pose of a monocular camera and the depth of its view by using deep neural networks. There are two salient features of the proposed UnDeepVO: one is the unsupervised deep learning scheme, and the other is the absolute scale recovery. Specifically, we train UnDeepVO by using stereo image pairs to recover the scale but test it by using consecutive monocular images. Thus, UnDeepVO is a monocular system. The loss function defined for training the networks is based on spatial and temporal dense information. A system overview is shown in Fig. 1 . The experiments on KITTI dataset show our UnDeepVO achieves good performance in terms of pose accuracy.
In REF , the authors propose a visual odometry (VO) system based on unsupervised deep neural networks, trained on stereo images and tested with monocular images, that is able to estimate the 6-DoF pose of a monocular camera.
206853077
UnDeepVO: Monocular Visual Odometry Through Unsupervised Deep Learning
{ "venue": "2018 IEEE International Conference on Robotics and Automation (ICRA)", "journal": "2018 IEEE International Conference on Robotics and Automation (ICRA)", "mag_field_of_study": [ "Engineering", "Computer Science" ] }
Abstract. Support Vector Machines (SVM) have been extensively studied and have shown remarkable success in many applications. However the success of SVM is very limited when it is applied to the problem of learning from imbalanced datasets in which negative instances heavily outnumber the positive instances (e.g. in gene profiling and detecting credit card fraud). This paper discusses the factors behind this failure and explains why the common strategy of undersampling the training data may not be the best choice for SVM. We then propose an algorithm for overcoming these problems which is based on a variant of the SMOTE algorithm by Chawla et al, combined with Veropoulos et al's different error costs algorithm. We compare the performance of our algorithm against these two algorithms, along with undersampling and regular SVM and show that our algorithm outperforms all of them.
Akbani et al. REF proposed a variant of the SMOTE algorithm combined with Veropoulos et al's different error costs algorithm, using support vector machines as the learning method.
9203634
Applying support vector machines to imbalanced datasets
{ "venue": "In Proceedings of the 15th European Conference on Machine Learning (ECML", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Data centers rely on virtualization to provide different services over a shared infrastructure. The placement of the different services and tasks in the physical machines is crucial for the performance of the whole system. A misplaced service can overload some network links, lead to congestion, or even connection disruptions. On the other hand, virtual machine migration allows reallocating services and changing the traffic matrix, leading to more efficient use of bandwidth. In this paper, we propose a Virtual Machine Placement (VMP) algorithm to (re)allocate virtual machines in the data center servers, based on the current traffic matrix, CPU, and memory usage. Analyzing the formation of community patterns in terms of traffic using graph theory, we are able to find virtual machines that are correlated because they exchange high amount of data. Those virtual machines are aggregated and allocated to servers as close as possible to each other, reducing traffic congestion. Our simulation results show that VMP was able to improve the traffic distribution. In some specific cases we were able to reduce 80% of the core traffic, concentrating it at the edge of the network.
REF proposes an online virtual machine placement scheme based on re-allocation to improve the traffic distribution.
562556
Online traffic-aware virtual machine placement in data center networks
{ "venue": "2012 Global Information Infrastructure and Networking Symposium (GIIS)", "journal": "2012 Global Information Infrastructure and Networking Symposium (GIIS)", "mag_field_of_study": [ "Computer Science" ] }
Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new stateof-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.
RefineNet REF makes use of the Laplacian image pyramid to explicitly capture the information available along the down-sampling process and output predictions from coarse to fine.
5696978
RefineNet: Multi-path Refinement Networks for High-Resolution Semantic Segmentation
{ "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Topology control in a sensor network balances load on sensor nodes and increases network scalability and lifetime. Clustering sensor nodes is an effective topology control approach. In this paper, we propose a novel distributed clustering approach for long-lived ad hoc sensor networks. Our proposed approach does not make any assumptions about the presence of infrastructure or about node capabilities, other than the availability of multiple power levels in sensor nodes. We present a protocol, HEED (Hybrid Energy-Efficient Distributed clustering), that periodically selects cluster heads according to a hybrid of the node residual energy and a secondary parameter, such as node proximity to its neighbors or node degree. HEED terminates in O(1) iterations, incurs low message overhead, and achieves fairly uniform cluster head distribution across the network. We prove that, with appropriate bounds on node density and intracluster and intercluster transmission ranges, HEED can asymptotically almost surely guarantee connectivity of clustered networks. Simulation results demonstrate that our proposed approach is effective in prolonging the network lifetime and supporting scalable data aggregation.
The hybrid energy-efficient distributed clustering (HEED) algorithm REF uses node degree (local density) as a secondary parameter, alongside residual energy, for energy-efficient cluster-head selection.
2012679
HEED: a hybrid, energy-efficient, distributed clustering approach for ad hoc sensor networks
{ "venue": "IEEE Transactions on Mobile Computing", "journal": "IEEE Transactions on Mobile Computing", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Debugging often involves 1) finding the point of failure (the first statement that produces bad output) and 2) finding and fixing the actual bug. Print statements and debugger break points can help with step 1. Slicing the program back from values used at the point of failure can help with step 2. However, neither approach is ideal: Debuggers and print statements can be clumsy and time-consuming and backward slices can be almost as large as the original program. This paper addresses both problems. We present callstack-sensitive slicing, which reduces slice sizes by leveraging the series of calls active when a program fails. We also show how slice intersections may further reduce slice sizes. We then describe a set of tools that identifies points of failure for programs that produce bad output. Finally, we apply our point-of-failure tools to a suite of buggy programs and evaluate callstack-sensitive slicing and slice intersection as applied to debugging. Callstack-sensitive slicing is very effective: On average, a callstack-sensitive slice is about 0.31 times the size of the corresponding full slice, down to just 0.06 times in the best case. Slice intersection is less impressive, on average, but may sometimes prove useful in practice. Index Terms-Static program slicing, callstack-sensitive analysis, points of failure, output tracing and attribution.
A recent approach based on dynamic slicing is proposed in REF , which, through callstack-sensitive slicing and slice intersection, reduces slice sizes by leveraging the series of calls active when a program fails.
14742394
Better Debugging via Output Tracing and Callstack-Sensitive Slicing
{ "venue": "IEEE Transactions on Software Engineering", "journal": "IEEE Transactions on Software Engineering", "mag_field_of_study": [ "Computer Science" ] }
Understanding human mobility is crucial for a broad range of applications from disease prediction to communication networks. Most efforts on studying human mobility have so far used private and low resolution data, such as call data records. Here, we propose Twitter as a proxy for human mobility, as it relies on publicly available data and provides high resolution positioning when users opt to geotag their tweets with their current location. We analyse a Twitter dataset with more than six million geotagged tweets posted in Australia, and we demonstrate that Twitter can be a reliable source for studying human mobility patterns. Our analysis shows that geotagged tweets can capture rich features of human mobility, such as the diversity of movement orbits among individuals and of movements within and between cities. We also find that short-and long-distance movers both spend most of their time in large metropolitan areas, in contrast with intermediate-distance movers' movements, reflecting the impact of different modes of travel. Our study provides solid evidence that Twitter can indeed be a useful proxy for tracking and predicting human movement.
Analogous behavior was later exposed in Twitter REF , enabling the use of this social media platform as a proxy for tracking and predicting human movement.
215186578
Understanding Human Mobility from Twitter
null
Convolutional neural network (CNN) has drawn increasing interest in visual tracking owing to its powerfulness in feature extraction. Most existing CNN-based trackers treat tracking as a classification problem. However, these trackers are sensitive to similar distractors because their CNN models mainly focus on inter-class classification. To address this problem, we use self-structure information of object to distinguish it from distractors. Specifically, we utilize recurrent neural network (RNN) to model object structure, and incorporate it into CNN to improve its robustness to similar distractors. Considering that convolutional layers in different levels characterize the object from different perspectives, we use multiple RNNs to model object structure in different levels respectively. Extensive experiments on three benchmarks, OTB-100, TC-128 and VOT2015, show that the proposed algorithm outperforms other methods. Code is released at
SANet REF incorporates recurrent neural network into CNNs to model the structure information within an object in addition to the traditional semantic information.
6583681
SANet: Structure-Aware Network for Visual Tracking
{ "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)", "journal": "2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Convolutional Neural Networks (CNNs) have made remarkable progress on scene recognition, partially due to these recent large-scale scene datasets, such as the Places and Places2. Scene categories are often defined by multi-level information, including local objects, global layout, and background environment, thus leading to large intra-class variations. In addition, with the increasing number of scene categories, label ambiguity has become another crucial issue in large-scale classification. This paper focuses on large-scale scene recognition and makes two major contributions to tackle these issues. First, we propose a multi-resolution CNN architecture that captures visual content and structure at multiple levels. The multi-resolution CNNs are composed of coarse resolution CNNs and fine resolution CNNs, which are complementary to each other. Second, we design two knowledge guided disambiguation techniques to deal with the problem of label ambiguity. (i) We exploit the knowledge from the confusion matrix computed on validation data to merge ambiguous classes into a super category. (ii) We utilize the knowledge of extra networks to produce a soft label for each image. Then the super categories or soft labels are employed to guide CNN training on the Places2. We conduct extensive experiments on three large-scale image datasets (ImageNet, Places, and Places2), demonstrating the effectiveness of our approach. Furthermore, our method takes part in two major scene recognition challenges, and achieves the second place at the Places2 challenge in ILSVRC 2015, and the first place at the LSUN challenge in CVPR 2016. Finally, we directly test the learned representations on other scene benchmarks, and obtain the new state-of-the-art results on the MIT Indoor67 (86.7%) and SUN397 (72.0%). We release the code and models at https://github.com/wanglimin/MRCNN-Scene-Recognition.
This problem can be alleviated by adopting a multi-resolution CNN architecture REF , which consists of coarse resolution CNNs and fine resolution CNNs.
7459313
Knowledge Guided Disambiguation for Large-Scale Scene Classification with Multi-Resolution CNNs
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
There exists a huge market for application specific processors used within embedded systems. This market is driven by consumer requirements (e.g., new products, more functionality and flexibility, product digitalization, better performance-cost ratio, portability) and processor design capabilities (i.e., what can we offer). Being successful within this market requires a short time-to-market; this necessitates usage of automated design tools. These tools should not only assist in the quick generation of a specific processor, but also enable the designer to investigate quickly, and quantitatively, a large set of alternative solutions. Therefore, these tools should be based on a flexible and programmable processor template. In this paper we propose the usage of Transport Triggered Architectures (TTAs) for such a processor template. TTAs can be compared to VLIWs; both can exploit the compile-time available instruction level parallelism. However, TTAs are programmed differently. TTAs combine a set of interesting features; apart from being fully programmable, they have favorable scaling characteristics, they easily incorporate arbitrary functionality, and their organization is well structured, allowing simple and automatic design. The paper explains these features. Based on this template a set of design tools has been developed; they include a parallelizing C/C++ compiler which exploits the available processor and application concurrency, a processor generator, simulators, profilers, and a tool for architecture exploration; these tools are integrated within a graphical user interface. Within the paper we shortly describe these tools and demonstrate how they can be applied to a particular application. This example application is taken from the image processing area. It will be shown how the tools assist in exploring many solutions, including those which incorporate application specific functionality.
The Transport-Triggered Architectures (TTAs) REF are similar to VLIWs in that there are a large number of parallel computations specified in each instruction.
14198926
Using Transport Triggered Architectures for Embedded Processor Design
{ "venue": "Integrated Computer-Aided Engineering", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Business processes that span organizational borders describe the interaction between multiple parties working towards a common objective. They also express business rules that govern the behavior of the process and account for expressing changes reflecting new business objectives and new market situations. In our previous work we developed a service request language and support framework that allow users to formulate their requests against standard business processes. In this paper we extend this approach by presenting a framework capable of automatically associating business rules with relevant processes involved in a user request. This framework plans and monitors the execution of the request against services underlying these processes. Definitions and classifications of business rules (named assertions in the paper) are given together with an assertion language for expressing them. The framework is able to handle the non-determinism typical for service-oriented computing environments and it is based on the interleaving of planning and execution.
REF introduce an assertion language for expressing business rules and a framework to plan and monitor the execution of these rules.
316380
Associating assertions with business processes and monitoring their execution
{ "venue": "ICSOC '04", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Virtual Machine (VM) environments (e.g., VMware and Xen) are experiencing a resurgence of interest for diverse uses including server consolidation and shared hosting. An application's performance in a virtual machine environment can differ markedly from its performance in a nonvirtualized environment because of interactions with the underlying virtual machine monitor and other virtual machines. However, few tools are currently available to help debug performance problems in virtual machine environments. In this paper, we present Xenoprof, a system-wide statistical profiling toolkit implemented for the Xen virtual machine environment. The toolkit enables coordinated profiling of multiple VMs in a system to obtain the distribution of hardware events such as clock cycles and cache and TLB misses. We use our toolkit to analyze performance overheads incurred by networking applications running in Xen VMs. We focus on networking applications since virtualizing network I/O devices is relatively expensive. Our experimental results quantify Xen's performance overheads for network I/O device virtualization in uni-and multi-processor systems. Our results identify the main sources of this overhead which should be the focus of Xen optimization efforts. We also show how our profiling toolkit was used to uncover and resolve performance bugs that we encountered in our experiments which caused unexpected application behavior.
Along similar lines to OProfile for Linux, Menon et al. REF used Xenoprof, a system-wide statistical profiling toolkit for Xen, to evaluate the performance overhead of network I/O devices.
15691128
Diagnosing performance overheads in the xen virtual machine environment
{ "venue": "VEE '05", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Intel Software Guard Extension (SGX) is a hardware-based trusted execution environment (TEE) that enables secure computation without trusting any underlying software, such as operating system or even hardware firmware. It provides strong security guarantees, namely, confidentiality and integrity, to an enclave (i.e., a program running on Intel SGX) through solid hardware-based isolation. However, a new controlled-channel attack (Xu et al., Oakland 2015), although it is an out-of-scope attack according to Intel SGX's threat model, demonstrated that a malicious OS can infer coarse-grained control flows of an enclave via a series of page faults, and such a side-channel can be severe for security-sensitive applications. In this paper, we explore a new, yet critical, side-channel attack against Intel SGX, called a branch shadowing attack, which can reveal fine-grained control flows (i.e., each branch) of an enclave program running on real SGX hardware. The root cause of this attack is that Intel SGX does not clear the branch history when switching from enclave mode to non-enclave mode, leaving the fine-grained traces to the outside world through a branch-prediction side channel. However, exploiting the channel is not so straightforward in practice because 1) measuring branch prediction/misprediction penalties based on timing is too inaccurate to distinguish fine-grained control-flow changes and 2) it requires sophisticated control over the enclave execution to force its execution to the interesting code blocks. To overcome these challenges, we developed two novel exploitation techniques: 1) Intel PT- and LBR-based history-inferring techniques and 2) APIC-based technique to control the execution of enclave programs in a fine-grained manner. As a result, we could demonstrate our attack by breaking recent security constructs, including ORAM schemes, Sanctum, SGX-Shield, and T-SGX. Not limiting our work to the attack itself, we thoroughly studied the feasibility of hardware-based solutions (e.g., branch history clearing) and also proposed a software-based countermeasure, called Zigzagger, to mitigate the branch shadowing attack in practice. Figure 8 (C listing of mpi_montmul() from mbed TLS bignum.c omitted): Montgomery multiplication of mbed TLS. The branch shadowing attack can infer whether the dummy subtraction has been performed or not.
Lee et al. demonstrate an attack using the branch history REF .
310483
Inferring Fine-grained Control Flow Inside SGX Enclaves with Branch Shadowing
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Abstract-This paper discusses a routing protocol that uses multi-agents to reduce network congestion for a Mobile Ad hoc NETwork (MANET). MANET is a multihop wireless network in which the network components such as PC, PDA and mobile phones are mobile. The components can communicate with each other without going through a server. Two kinds of agents are engaged in routing. One is a Routing Agent that collects information about network congestion as well as link failure. The other is a Message Agent that uses this information to get to their destination nodes. MAs correspond to data packets and determine their direction autonomously using an evaluation function. We developed both a simulation environment and protocols, and performed simulations under different conditions of mobility and traffic patterns to demonstrate the effectiveness of our approach.
Nishimura et al. REF proposed a routing protocol that uses multi-agents to reduce network congestion in MANETs.
11944837
A Multi-Agent Routing Protocol With Congestion Control For MANET
{ "venue": "ECMS 2007 Proceedings edited by: I. Zelinka, Z. Oplatkova, A. Orsoni", "journal": "ECMS 2007 Proceedings edited by: I. Zelinka, Z. Oplatkova, A. Orsoni", "mag_field_of_study": [ "Computer Science" ] }
This paper proposes an Agile Aggregating Multi-Level feaTure framework (Agile Amulet) for salient object detection. The Agile Amulet builds on previous works to predict saliency maps using multi-level convolutional features. Compared to previous works, Agile Amulet employs some key innovations to improve training and testing speed while also increasing prediction accuracy. More specifically, we first introduce a contextual attention module that can rapidly highlight most salient objects or regions with contextual pyramids. Thus, it effectively guides the low-layer convolutional feature learning and tells the backbone network where to look. The contextual attention module is a fully convolutional mechanism that simultaneously learns complementary features and predicts saliency scores at each pixel. In addition, we propose a novel method to aggregate multi-level deep convolutional features. As a result, we are able to use the integrated side-output features of pre-trained convolutional networks alone, which significantly reduces the model parameters leading to a model size of 67 MB, about half of Amulet. Compared to other deep learning based saliency methods, Agile Amulet is much lighter-weight, runs faster (30 fps in real-time) and achieves higher performance on seven public benchmarks in terms of both quantitative and qualitative evaluation.
Zhang et al. REF proposed combining multi-level convolutional features and contextual attention module to generate saliency map.
3444165
Agile Amulet: Real-Time Salient Object Detection with Contextual Attention
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Abstract. In this paper we introduce a new simple strategy into edge-searching of a graph, which is useful for various subgraph listing problems. Applying the strategy, we obtain the following four algorithms. The first one lists all the triangles in a graph G in O(a(G)m) time. All the algorithms require linear space. We also establish an upper bound on a(G) for a graph G. We will show in the succeeding section that the procedure above requires O(a(G)m) time. Throughout this paper m is the number of edges of a graph G, n is the number of vertices of G, and a(G) is the arboricity of G, that is, the minimum number of edge-disjoint spanning forests into which G can be decomposed [5]. We use the rather unfamiliar graph invariant a(G) as a parameter in bounding the running time of algorithms. The strategy yields simple algorithms for the problems of listing certain kinds of subgraphs of a graph. The kinds of these subgraphs include "triangle, quadrangle," "complete subgraph of a fixed order," and "clique." Our algorithms are as fast as the known ones if any, and a factor n is often reduced to a(G) in the time complexity. In Section 2 we give an upper bound on a(G) for a general graph G: a(G) ≤ ⌈(2m + n)^{1/2}/2⌉.
Chiba and Nishizeki REF use a ranking algorithm that counts the total number of 4-cycles in a graph in O(αm) time, where α is the arboricity of the graph.
207051803
Arboricity and subgraph listing algorithms
{ "venue": "SIAM J. Comput.", "journal": "SIAM J. Comput.", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Abstract-Wireless sensor networks are evolving from dedicated application-specific platforms to integrated infrastructure shared by multiple applications. Shared sensor networks offer inherent advantages in terms of flexibility and cost since they allow dynamic resource sharing and allocation among multiple applications. Such shared systems face the critical need for allocation of nodes to contending applications to enhance the overall Quality of Monitoring (QoM) under resource constraints. To address this need, this paper presents Utility-based Multiapplication Allocation and Deployment Environment (UMADE), an integrated application deployment system for shared sensor networks. In sharp contrast to traditional approaches that allocate applications based on cyber metrics (e.g., computing resource utilization), UMADE adopts a cyber-physical system approach that dynamically allocates nodes to applications based on their QoM of the physical phenomena. The key novelty of UMADE is that it is designed to deal with the inter-node QoM dependencies typical in cyber-physical applications. Furthermore, UMADE provides an integrated system solution that supports the end-to-end process of (1) QoM specification for applications, (2) QoM-aware application allocation, (3) application deployment over multi-hop wireless networks, and (4) adaptive reallocation of applications in response to network dynamics. UMADE has been implemented on TinyOS and Agilla virtual machine for Telos motes. The feasibility and efficacy of UMADE have been demonstrated on a 28-node wireless sensor network testbed in the context of building automation applications.
UMADE REF is an integrated system for allocating and deploying applications in shared sensor networks based on the concept of Quality of Monitoring (QoM).
1220396
Multi-Application Deployment in Shared Sensor Networks Based on Quality of Monitoring
{ "venue": "2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium", "journal": "2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium", "mag_field_of_study": [ "Computer Science" ] }
Abstract-We present a method for discovering object models from 3D meshes of indoor environments. Our algorithm first decomposes the scene into a set of candidate mesh segments and then ranks each segment according to its "objectness" -a quality that distinguishes objects from clutter. To do so, we propose five intrinsic shape measures: compactness, symmetry, smoothness, and local and global convexity. We additionally propose a recurrence measure, codifying the intuition that frequently occurring geometries are more likely to correspond to complete objects. We evaluate our method in both supervised and unsupervised regimes on a dataset of 58 indoor scenes collected using an Open Source implementation of Kinect Fusion [1] . We show that our approach can reliably and efficiently distinguish objects from clutter, with Average Precision score of .92. We make our dataset available to the public.
Another segmentation-based approach based on shape analysis using compactness, symmetry, smoothness, and local and global convexity of segments and their recurrence is proposed in REF .
1758445
Object discovery in 3D scenes via shape analysis
{ "venue": "2013 IEEE International Conference on Robotics and Automation", "journal": "2013 IEEE International Conference on Robotics and Automation", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Abstract-The sequential pattern in the human movement is one of the most important aspects for location recommendations in geosocial networks. Existing location recommenders have to access users' raw check-in data to mine their sequential patterns that raises serious location privacy breaches. In this paper, we propose a new Privacy-preserving LOcation REcommendation framework (PLORE) to address this privacy challenge. First, we employ the nth-order additive Markov chain to exploit users' sequential patterns for location recommendations. Further, we contrive the probabilistic differential privacy mechanism to reach a good trade-off between high recommendation accuracy and strict location privacy protection. Finally, we conduct extensive experiments to evaluate the performance of PLORE using three large-scale real-world data sets. Extensive experimental results show that PLORE provides efficient and highly accurate location recommendations, and guarantees strict privacy protection for user check-in data in geosocial networks.
Zhang et al. REF proposed a new privacy-preserving location recommendation framework based on probabilistic differential privacy.
53548394
Enabling Probabilistic Differential Privacy Protection for Location Recommendations
{ "venue": null, "journal": "IEEE Transactions on Services Computing", "mag_field_of_study": [ "Computer Science" ] }
Current statistical parsers tend to perform well only on their training domain and nearby genres. While strong performance on a few related domains is sufficient for many situations, it is advantageous for parsers to be able to generalize to a wide variety of domains. When parsing document collections involving heterogeneous domains (e.g. the web), the optimal parsing model for each document is typically not obvious. We study this problem as a new task -multiple source parser adaptation. Our system trains on corpora from many different domains. It learns not only statistics of those domains but quantitative measures of domain differences and how those differences affect parsing accuracy. Given a specific target text, the resulting system proposes linear combinations of parsing models trained on the source corpora. Tested across six domains, our system outperforms all non-oracle baselines including the best domain-independent parsing model. Thus, we are able to demonstrate the value of customizing parsing models to specific domains.
An alternative formulation of domain adaptation trains on different corpora from many different domains, then uses linear combinations of models trained on the different corpora REF .
10585087
Automatic Domain Adaptation for Parsing
{ "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.
Mnih REF trained task-specific policies using reinforcement learning methods.
17195923
Recurrent Models of Visual Attention
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
It is well known that utterances convey a great deal of information about the speaker in addition to their semantic content. One such type of information consists of cues to the speaker's personality traits, the most fundamental dimension of variation between humans. Recent work explores the automatic detection of other types of pragmatic variation in text and conversation, such as emotion, deception, speaker charisma, dominance, point of view, subjectivity, opinion and sentiment. Personality affects these other aspects of linguistic production, and thus personality recognition may be useful for these tasks, in addition to many other potential applications. However, to date, there is little work on the automatic recognition of personality traits. This article reports experimental results for recognition of all Big Five personality traits, in both conversation and text, utilising both self and observer ratings of personality. While other work reports classification results, we experiment with classification, regression and ranking models. For each model, we analyse the effect of different feature sets on accuracy. Results show that for some traits, any type of statistical model performs significantly better than the baseline, but ranking models perform best overall. We also present an experiment suggesting that ranking models are more accurate than multi-class classifiers for modelling personality. In addition, recognition models trained on observed personality perform better than models trained using selfreports, and the optimal feature set depends on the personality trait. A qualitative analysis of the learned models confirms previous findings linking language and personality, while revealing many new linguistic markers.
Mairesse et al. REF proposed classification, regression and ranking models to learn the Big Five personality traits of a speaker.
6030672
Using linguistic cues for the automatic recognition of personality in conversation and text
{ "venue": "Journal of Artificial Intelligence Research (JAIR", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g., "subtle nuances") and a negative semantic orientation when it has bad associations (e.g., "very cavalier"). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word "excellent" minus the mutual information between the given phrase and the word "poor". A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews.
In this line, REF proposed an unsupervised learning algorithm to calculate the semantic orientation (SO) of a word.
484335
Thumbs Up Or Thumbs Down? Semantic Orientation Applied To Unsupervised Classification Of Reviews
{ "venue": "Annual Meeting Of The Association For Computational Linguistics", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Abstract. A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is proposed for fast multi-scale object detection. The MS-CNN consists of a proposal sub-network and a detection sub-network. In the proposal sub-network, detection is performed at multiple output layers, so that receptive fields match objects of different scales. These complementary scale-specific detectors are combined to produce a strong multi-scale object detector. The unified network is learned end-to-end, by optimizing a multi-task loss. Feature upsampling by deconvolution is also explored, as an alternative to input upsampling, to reduce the memory and computation costs. State-of-the-art object detection performance, at up to 15 fps, is reported on datasets, such as KITTI and Caltech, containing a substantial number of small objects.
Cai et al. REF proposed a Multi-Scale CNN (MS-CNN) which is a unified model for detection at different intermediate network layers.
9232270
A Unified Multi-scale Deep Convolutional Neural Network for Fast Object Detection
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Background: Due to the nature of scientific methodology, research articles are rich in speculative and tentative statements, also known as hedges. We explore a linguistically motivated approach to the problem of recognizing such language in biomedical research articles. Our approach draws on prior linguistic work as well as existing lexical resources to create a dictionary of hedging cues and extends it by introducing syntactic patterns. Furthermore, recognizing that hedging cues differ in speculative strength, we assign them weights in two ways: automatically using the information gain (IG) measure and semi-automatically based on their types and centrality to hedging. Weights of hedging cues are used to determine the speculative strength of sentences. Results: We test our system on two publicly available hedging datasets. On the fruit-fly dataset, we achieve a precision-recall breakeven point (BEP) of 0.85 using the semi-automatic weighting scheme and a lower BEP of 0.80 with the information gain weighting scheme. These results are competitive with the previously reported best results (BEP of 0.85). On the BMC dataset, using semi-automatic weighting yields a BEP of 0.82, a statistically significant improvement (p <0.01) over the previously reported best result (BEP of 0.76), while information gain weighting yields a BEP of 0.70. Our results demonstrate that speculative language can be recognized successfully with a linguistically motivated approach and confirms that selection of hedging devices affects the speculative strength of the sentence, which can be captured reasonably by weighting the hedging cues. The improvement obtained on the BMC dataset with a semi-automatic weighting scheme indicates that our linguistically oriented approach is more portable than the machine-learning based approaches. Lower performance obtained with the information gain weighting scheme suggests that this method may benefit from a larger, manually annotated corpus for automatically inducing the weights.
Kilicoglu and Bergler REF apply a linguistically motivated approach to the same classification task by using knowledge from existing lexical resources and incorporating syntactic patterns.
8898609
Recognizing speculative language in biomedical research articles: a linguistically motivated perspective
{ "venue": "BMC Bioinformatics", "journal": "BMC Bioinformatics", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
In this paper we address a seemingly simple question: Is there a universal packet scheduling algorithm? More precisely, we analyze (both theoretically and empirically) whether there is a single packet scheduling algorithm that, at a network-wide level, can match the results of any given scheduling algorithm. We find that in general the answer is "no". However, we show theoretically that the classical Least Slack Time First (LSTF) scheduling algorithm comes closest to being universal and demonstrate empirically that LSTF can closely, though not perfectly, replay a wide range of scheduling algorithms in realistic network settings. We then evaluate whether LSTF can be used in practice to meet various network-wide objectives by looking at three popular performance metrics (mean FCT, tail packet delays, and fairness); we find that LSTF performs comparable to the state-of-the-art for each of them.
UPS REF shares our goal of flexible packet scheduling by seeking a single scheduling algorithm that is universal and can emulate any scheduling algorithm.
1307979
Universal Packet Scheduling
{ "venue": "HotNets-XIV", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Communication overhead is one of the dominant factors affecting performance in high-end computing systems. To reduce the negative impact of communication, programmers overlap communication and computation by using asynchronous communication primitives. This increases code complexity, requiring more development effort and making less readable programs. This paper presents the hybrid use of MPI and SMPSs (SMP superscalar, a task-based shared-memory programming model), allowing the programmer to easily introduce the asynchrony necessary to overlap communication and computation. We also describe implementation issues in the SMPSs run time that support its efficient interoperation with MPI. We demonstrate the hybrid use of MPI/SMPSs with four application kernels (matrix multiply, Jacobi, conjugate gradient and NAS BT) and with the high-performance LINPACK benchmark. For the application kernels, the hybrid MPI/SMPSs versions significantly improve the performance of the pure MPI counterparts. For LINPACK we get close to the asymptotic performance at relatively small problem sizes and still get significant benefits at large problem sizes. In addition, the hybrid MPI/SMPSs approach substantially reduces code complexity and is less sensitive to network bandwidth and operating system noise than the pure MPI versions.
It is possible to use the hybrid MPI/SMPSs approach to support clusters with multicore CPUs REF .
18831354
Overlapping communication and computation by using a hybrid MPI/SMPSs approach
{ "venue": "ICS '10", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Existing methods for visual reasoning attempt to directly map inputs to outputs using black-box architectures without explicitly modeling the underlying reasoning processes. As a result, these black-box models often learn to exploit biases in the data rather than learning to perform visual reasoning. Inspired by module networks, this paper proposes a model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer. Both the program generator and the execution engine are implemented by neural networks, and are trained using a combination of backpropagation and REINFORCE. Using the CLEVR benchmark for visual reasoning, we show that our model significantly outperforms strong baselines and generalizes better in a variety of settings.
Their subsequent model achieved good results on CLEVR by combining a recurrent program generator and an attentive execution engine REF .
31319559
Inferring and Executing Programs for Visual Reasoning
{ "venue": "2017 IEEE International Conference on Computer Vision (ICCV)", "journal": "2017 IEEE International Conference on Computer Vision (ICCV)", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise (AWGN) at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks such as Gaussian denoising, single image super-resolution and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.
Denoising Convolutional Neural Network (DnCNN) REF is currently one of the most used and well performing supervised denoisers.
996788
Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Existing person re-identification (re-id) methods rely mostly on either localised or global feature representation alone. This ignores their joint benefit and mutual complementary effects. In this work, we show the advantages of jointly learning local and global features in a Convolutional Neural Network (CNN) by aiming to discover correlated local and global features in different context. Specifically, we formulate a method for joint learning of local and global feature selection losses designed to optimise person re-id when using only generic matching metrics such as the L2 distance. We design a novel CNN architecture for Jointly Learning Multi-Loss (JLML) of local and global discriminative feature optimisation subject concurrently to the same re-id labelled information. Extensive comparative evaluations demonstrate the advantages of this new JLML model for person re-id over a wide range of state-of-the-art re-id methods on five benchmarks (VIPeR, GRID, CUHK01, CUHK03, Market-1501).
Li et al. REF proposed a multi-loss model combining metric learning and global classification to discover both local and global features.
3489845
Person Re-Identification by Deep Joint Learning of Multi-Loss Classification
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Software architecture documentation helps people in understanding the software architecture of a system. In practice, software architectures are often documented after the fact, i.e. they are maintained or created after most of the design decisions have been made and implemented. To keep the architecture documentation up-to-date an architect needs to recover and describe these decisions. This paper presents ADDRA, an approach an architect can use for recovering architectural design decisions after the fact. ADDRA uses architectural deltas to provide the architect with clues about these design decisions. This allows the architect to systematically recover and document relevant architectural design decisions. The recovered architectural design decisions improve the documentation of the architecture, which increases traceability, communication, and general understanding of a system.
ADDRA REF was designed to recover architectural design decisions in an after the fact documentation effort.
14658433
Documenting after the fact: Recovering architectural design decisions
{ "venue": "J. Syst. Softw.", "journal": "J. Syst. Softw.", "mag_field_of_study": [ "Computer Science" ] }
This paper presents a simple yet general framework for employing deep architectures to solve the inverse reinforcement learning (IRL) problem. In particular, we propose to exploit the representational capacity and favourable computational complexity of deep networks to approximate complex, nonlinear reward functions in scenarios with large state spaces. This leads to a framework with the ability to make reward predictions in constant time rather than scaling cubically in the number of state rewards observed. Furthermore, we show that the maximum entropy paradigm for IRL lends itself naturally to the efficient training of deep architectures. The approach presented outperforms the state-of-the-art on a new benchmark with a complex underlying reward structure representing strong interactions between features while exhibiting performance commensurate with state-of-the-art methods on a number of established benchmarks of comparatively low complexity.
REF uses a multi-layer neural network to represent nonlinear reward functions.
16727822
Deep Inverse Reinforcement Learning.
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Recursively-constructed couplings have been used in the past for mixing on trees. We show how to extend this technique to non-tree-like graphs such as lattices. Using this method, we obtain the following general result. Suppose that G is a triangle-free graph and that for some Δ ≥ 3, the maximum degree of G is at most Δ. We show that the spin system consisting of q-colourings of G has strong spatial mixing, provided q > αΔ − γ, where α ≈ 1.76322 is the solution to α^α = e, and γ ≈ 0.47031. Note that we have no additional lower bound on q or Δ. This is important for us because our main objective is to have results which are applicable to the lattices studied in statistical physics such as the integer lattice Z^d and the triangular lattice. For these graphs (in fact, for any graph in which the distance-k neighbourhood of a vertex grows sub-exponentially in k), strong spatial mixing implies that there is a unique infinite-volume Gibbs measure. That is, there is one macroscopic equilibrium rather than many. Our general result gives, for example, a "hand proof" of strong spatial mixing for 7-colourings of triangle-free 4-regular graphs. (Computer-assisted proofs of this result were provided by Salas and Sokal (for the rectangular lattice) and by Bubley, Dyer, Greenhill and Jerrum.) It also gives a hand proof of strong spatial mixing for 5-colourings of triangle-free 3-regular graphs. (A computer-assisted proof for the special case of the hexagonal lattice was provided earlier by Salas and Sokal.) Towards the end of the paper we show how to improve our general technique by considering the geometry of the lattice. The idea is to construct the recursive coupling from a system of recurrences rather than from a single recurrence. We use the geometry of the lattice to derive the system of recurrences. This gives us an analysis with a horizon of more than one level of induction, which leads to improved results. We illustrate this idea by proving strong spatial mixing for q = 10 on the lattice Z^3. Finally, we apply the idea to the triangular lattice, adding computational assistance. This gives us a (machine-assisted) proof of strong spatial mixing for 10-colourings of the triangular lattice. (Such a proof for 11 colours was given by Salas and Sokal.) For completeness, we also show that our strong spatial mixing proof implies rapid mixing of Glauber dynamics for sampling proper colourings of neighbourhood-amenable graphs. (It is known that strong spatial mixing often implies rapid mixing, but existing proofs seem to be written for Z^d.) Thus our strong spatial mixing results give rapid mixing corollaries for neighbourhood-amenable graphs such as lattices.
In this paper we will refine the technique Goldberg, Martin and Paterson introduced in REF to show mixing on the kagome lattice for q = 5 colours.
12157996
Strong spatial mixing with fewer colours for lattice graphs
{ "venue": "Proc. 45th IEEE Symp. on Foundations of Computer Science", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Current graphics processing units (GPUs) utilize the single instruction multiple thread (SIMT) execution model. With SIMT, a group of logical threads executes such that all threads in the group execute a single common instruction on a particular cycle. To enable control flow to diverge within the group of threads, GPUs partially serialize execution and follow a single control flow path at a time. The execution of the threads in the group that are not on the current path is masked. Most current GPUs rely on a hardware reconvergence stack to track the multiple concurrent paths and to choose a single path for execution. Control flow paths are pushed onto the stack when they diverge and are popped off of the stack to enable threads to reconverge and keep lane utilization high. The stack algorithm guarantees optimal reconvergence for applications with structured control flow as it traverses the structured control-flow tree depth first. The downside of using the reconvergence stack is that only a single path is followed, which does not maximize available parallelism, degrading performance in some cases. We propose a change to the stack hardware in which the execution of two different paths can be interleaved. While this is a fundamental change to the stack concept, we show how dual-path execution can be implemented with only modest changes to current hardware and that parallelism is increased without sacrificing optimal (structured) control-flow reconvergence. We perform a detailed evaluation of a set of benchmarks with divergent control flow and demonstrate that the dual-path stack architecture is much more robust compared to previous approaches for increasing path parallelism. Dual-path execution either matches the performance of the baseline single-path stack architecture or outperforms single-path execution by 14.9% on average and by over 30% in some cases.
Rhu et al. REF suggested a dual-path stack to keep the two divergent paths of a branch in parallel.
6892687
The dual-path execution model for efficient GPU control flow
{ "venue": "2013 IEEE 19th International Symposium on High Performance Computer Architecture (HPCA)", "journal": "2013 IEEE 19th International Symposium on High Performance Computer Architecture (HPCA)", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Recently there has been an increasing deployment of content distribution networks (CDNs) that offer hosting services to Web content providers. CDNs deploy a set of servers distributed throughout the Internet and replicate provider content across these servers for better performance and availability than centralized provider servers. Existing work on CDNs has primarily focused on techniques for efficiently redirecting user requests to appropriate CDN servers to reduce request latency and balance load. However, little attention has been given to the development of placement strategies for Web server replicas to further improve CDN performance. In this paper, we explore the problem of Web server replica placement in detail. We develop several placement algorithms that use workload information, such as client latency and request rates, to make informed placement decisions. We then evaluate the placement algorithms using both synthetic and real network topologies, as well as Web server traces, and show that the placement of Web replicas is crucial to CDN performance. We also address a number of practical issues when using these algorithms, such as their sensitivity to imperfect knowledge about client workload and network topology, the stability of the input data, and methods for obtaining the input.
Data placement is also similar to some of the ideas used in the placement of web server replicas REF .
6176605
On the placement of Web server replicas
{ "venue": "Proceedings IEEE INFOCOM 2001. Conference on Computer Communications. Twentieth Annual Joint Conference of the IEEE Computer and Communications Society (Cat. No.01CH37213)", "journal": "Proceedings IEEE INFOCOM 2001. Conference on Computer Communications. Twentieth Annual Joint Conference of the IEEE Computer and Communications Society (Cat. No.01CH37213)", "mag_field_of_study": [ "Computer Science" ] }
Humans easily recognize object parts and their hierarchical structure by watching how they move; they can then predict how each part moves in the future. In this paper, we propose a novel formulation that simultaneously learns a hierarchical, disentangled object representation and a dynamics model for object parts from unlabeled videos. Our Parts, Structure, and Dynamics (PSD) model learns to, first, recognize the object parts via a layered image representation; second, predict hierarchy via a structural descriptor that composes low-level concepts into a hierarchical structure; and third, model the system dynamics by predicting the future. Experiments on multiple real and synthetic datasets demonstrate that our PSD model works well on all three tasks: segmenting object parts, building their hierarchical structure, and capturing their motion distributions.
Xu et al. REF proposed a deep model to discover object parts and the associated hierarchical structure and dynamical model from unlabeled videos.
76667896
Unsupervised Discovery of Parts, Structure, and Dynamics
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
We put forward a zero-knowledge based definition of privacy. Our notion is strictly stronger than the notion of differential privacy and is particularly attractive when modeling privacy in social networks. We furthermore demonstrate that it can be meaningfully achieved for tasks such as computing averages, fractions, histograms, and a variety of graph parameters and properties, such as average degree and distance to connectivity. Our results are obtained by establishing a connection between zero-knowledge privacy and sample complexity, and by leveraging recent sublinear time algorithms.
Zero-knowledge privacy REF is a cryptographically influenced privacy definition that is strictly stronger than differential privacy.
1585853
Towards Privacy for Social Networks: A Zero-Knowledge Based Definition of Privacy ∗
{ "venue": "TCC", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Uncovering thematic structures of SNS and blog posts is a crucial yet challenging task, because of the severe data sparsity induced by the short length of texts and diverse use of vocabulary. This hinders effective topic inference of traditional LDA because it infers topics based on document-level co-occurrence of words. To robustly infer topics in such contexts, we propose a latent concept topic model (LCTM). Unlike LDA, LCTM reveals topics via co-occurrence of latent concepts, which we introduce as latent variables to capture conceptual similarity of words. More specifically, LCTM models each topic as a distribution over the latent concepts, where each latent concept is a localized Gaussian distribution over the word embedding space. Since the number of unique concepts in a corpus is often much smaller than the number of unique words, LCTM is less susceptible to the data sparsity. Experiments on the 20Newsgroups show the effectiveness of LCTM in dealing with short texts as well as the capability of the model in handling held-out documents with a high degree of OOV words.
Therefore, REF proposed latent concept topic model (LCTM), which modeled a topic as a distribution of concepts, where each concept defined another distribution of word vectors.
15778456
A Latent Concept Topic Model for Robust Topic Inference Using Word Embeddings
{ "venue": "ACL", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Automated software testing aims to detect errors by producing test inputs that cover as much of the application source code as possible. Applications for mobile devices are typically event-driven, which raises the challenge of automatically producing event sequences that result in high coverage. Some existing approaches use random or model-based testing that largely treats the application as a black box. Other approaches use symbolic execution, either starting from the entry points of the applications or on specific event sequences. A common limitation of the existing approaches is that they often fail to reach the parts of the application code that require more complex event sequences. We propose a two-phase technique for automatically finding event sequences that reach a given target line in the application code. The first phase performs concolic execution to build summaries of the individual event handlers of the application. The second phase builds event sequences backward from the target, using the summaries together with a UI model of the application. Our experiments on a collection of open source Android applications show that this technique can successfully produce event sequences that reach challenging targets.
Jensen et al. REF propose a test generation approach to find event sequences that reach a given target line in smartphone apps.
11741816
Automated testing with targeted event sequence generation
{ "venue": "ISSTA 2013", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Simulators have played a critical role in robotics research as tools for quick and efficient testing of new concepts, strategies, and algorithms. To date, most simulators have been restricted to 2D worlds, and few have matured to the point where they are both highly capable and easily adaptable. Gazebo is designed to fill this niche by creating a 3D dynamic multi-robot environment capable of recreating the complex worlds that will be encountered by the next generation of mobile robots. Its open source status, fine grained control, and high fidelity place Gazebo in a unique position to become more than just a stepping stone between the drawing board and real hardware: data visualization, simulation of remote environments, and even reverse engineering of blackbox systems are all possible applications. The Player and Stage projects have been in development since 2001, during which time they have experienced widespread usage in both academia and industry. Player is a networked device server, and Stage is a simulator for large populations of mobile robots in complex 2D domains. A natural complement for these two projects is a high fidelity outdoor dynamics simulator; this has taken form in the Gazebo project. The development of Gazebo has been driven by the increasing use of robotic vehicles for outdoor applications. While Stage is quite capable of simulating the interactions between robots in indoor environments, the need for a simulator capable of modeling outdoor environments and providing realistic sensor feedback has become apparent. Gazebo, therefore, is designed to accurately reproduce the dynamic environments a robot may encounter. All simulated objects have mass, velocity, friction, and numerous other attributes that allow them to behave realistically when pushed, pulled, knocked over, or carried. These actions can be used as integral parts of an experiment, such as construction or foraging. The robots themselves are dynamic structures composed of rigid bodies connected via joints. Forces, both angular and linear, can be applied to surfaces and joints to generate locomotion and interaction with an environment. The world itself is described by landscapes, extruded buildings, and other user created objects. Almost every aspect of the simulation is controllable, from lighting conditions to friction coefficients. Following the principles established by Player and Stage, Gazebo is completely open source and freely available (a major advantage over most commercially available packages). As a result, Gazebo has an active base of contributors who are rapidly evolving the package to meet their ever-changing needs. Gazebo offers a rich environment to quickly develop and test multi-robot systems in new and interesting ways. It is an effective, scalable, and simple tool that also has potential for opening the field of robotics research to a wider community; thus, for example, Gazebo is being considered for use in undergraduate teaching. This paper describes the basic architecture of the Gazebo package, and illustrates its use and extensibility through a number of user case-studies. We also give some attention to future directions for this package. Gazebo has been developed from the ground up to be fully compatible with the Player device server. The hardware simulated in Gazebo is designed to accurately reflect the behavior of its physical counterpart. As a result, a client program sees an identical interface to a real and simulated robot.
This feature allows Gazebo to be seamlessly inserted into the development process of a robotic system. Even though it is compatible with Player, Gazebo is not meant as a replacement for the Stage simulator. The complexity of simulating rigid body dynamics coupled with a 3D environment can severely tax even a high performance computer. This has the effect of limiting Gazebo to the domain of a few robots, currently on the order of ten. On the other hand, Stage provides a robust and efficient simulator for projects that require large robot populations or do not require the full capabilities of Gazebo. Gazebo is far from being the only choice for a 3D dynamics simulator. It is however one of the few that attempts to create realistic worlds for the robots rather than just human users. As more advanced sensors are developed and incorporated into Gazebo the line between simulation and reality will continue to blur, but accuracy in terms of robot sensors and actuators will remain an overriding goal. A few notable systems include COSIMIR [4], developed at Festo. This is a commercial package primarily designed for industrial simulation of work flows with robotic systems, but is also applicable to robotic research. COSIMIR has advanced modeling and physical simulation capabilities that go well beyond the capabilities of Gazebo. It incorporates many types of grippers, the ability to program movement in non-robotic models such as assembly lines, and has tools for analysis of the simulated systems. Another commercial package is Webots [5], created by Cyberbotics. Webots allows for the creation of robots using a library of predefined actuators and sensors. When system testing in the simulator is complete, a user can transfer their code to real robots. The principal purpose of Webots is research and development. Cyberbotics is also developing a Player interface for compatibility with a wider range of devices. Darwin2K [6] and OpenSim [7] represent two open source robot simulators developed along similar lines as Gazebo. Darwin2K was created by Chris Leger at Carnegie Mellon University as a tool for his work on evolutionary robotics. This simulator accurately models motors and gear heads in fine detail while providing stress estimates on structural bodies. Darwin2K has a strong focus on evolutionary synthesis, design, and optimization and still remains a capable general purpose simulator for dynamic systems. OpenSim, under development by David Jung, is a generic open source robot simulator similar in design and purpose to Gazebo. This simulator makes use of the same third party software packages as Gazebo, and has some attractive features for constructing and debugging articulated joint chains. IV. ARCHITECTURE Gazebo's architecture has progressed through a couple of iterations during which we learned how to best create a simple tool for both developers and end users. We realized from the start that a major feature of Gazebo should be the ability to easily create new robots, actuators, sensors, and arbitrary objects. As a result, Gazebo maintains a simple API for addition of these objects, which we term models, and the necessary hooks for interaction with client programs. A layer below this API resides the third party libraries that handle both the physics simulation and visualization. The particular libraries used were chosen based on their open source status, active user base, and maturity. This architecture is graphically depicted in Figure 1.
The World represents the set of all models and environmental factors such as gravity and lighting. Each model is composed of at least one body and any number of joints and sensors. The third party libraries interface with Gazebo at the lowest level. This prevents models from becoming dependent on specific tools that may change in the future. Finally, client commands are received and data returned through a shared memory interface. A model can have many interfaces for functions involving, for example, control of joints and transmission of camera images. The Open Dynamics Engine [8], created by Russell Smith, is a widely used physics engine in the open source community. It is designed to simulate the dynamics and kinematics associated with articulated rigid bodies. This engine includes many features such as numerous joints, collision detection, mass and rotational functions, and many geometries including arbitrary triangle meshes (Figure 6). Gazebo utilizes these features by providing a layer of abstraction situated between ODE and Gazebo models. This layer allows easy creation of both normal and abstract objects such as laser rays and ground planes while retaining all the functionality provided by ODE. With this internal abstraction, it is possible to replace the underlying physics engine, should a better alternative become available. A well designed simulator usually provides some form of user interface, and Gazebo requires one that is both sophisticated and fast. The heart of Gazebo lies in its ability to simulate dynamics, and this requires significant work on behalf of the user's computer. A slow and cumbersome user interface would only detract from the simulator's primary purpose. To account for this, OpenGL and GLUT (OpenGL Utility Toolkit) [9] were chosen as the default visualization tools. OpenGL is a standard library for the creation of 2D and 3D interactive applications. It is platform independent, highly scalable, stable, and continually evolving. More importantly, many features in OpenGL have been
The Gazebo simulator REF has been used extensively in robotics research.
206941306
Design and use paradigms for Gazebo, an open-source multi-robot simulator
{ "venue": "2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566)", "journal": "2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566)", "mag_field_of_study": [ "Computer Science" ] }
Topic models, such as latent Dirichlet allocation (LDA), can be useful tools for the statistical analysis of document collections and other discrete data. The LDA model assumes that the words of each document arise from a mixture of topics, each of which is a distribution over the vocabulary. A limitation of LDA is the inability to model topic correlation even though, for example, a document about genetics is more likely to also be about disease than X-ray astronomy. This limitation stems from the use of the Dirichlet distribution to model the variability among the topic proportions. In this paper we develop the correlated topic model (CTM), where the topic proportions exhibit correlation via the logistic normal distribution [J. Roy. Statist. Soc. Ser. B 44 (1982) 139-177]. We derive a fast variational inference algorithm for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial. We apply the CTM to the articles from Science published from 1990-1999, a data set that comprises 57M words. The CTM gives a better fit of the data than LDA, and we demonstrate its use as an exploratory tool of large document collections.
The Correlated Topic Model REF replaces the Dirichlet prior with a logistic normal prior over the topic distribution of each document in order to capture correlations between topics.
8872108
A correlated topic model of Science
{ "venue": "Annals of Applied Statistics 2007, Vol. 1, No. 1, 17-35", "journal": null, "mag_field_of_study": [ "Mathematics" ] }
An improved algorithm is put forward to improve the poor locating performance of the DV-Hop algorithm, which is one of the range-free algorithms in wireless sensor networks. Firstly, we place some anchor nodes on the border of the monitoring region. Secondly, the average one-hop distance between anchor nodes is modified, and the average one-hop distance used by each unknown node for estimating its location is modified through weighting the received average one-hop distances from anchor nodes. Finally, we use particle swarm optimization to correct the position estimated by the 2D hyperbolic localization algorithm, which makes the result closer to the actual position. The simulation results show that the proposed algorithm has better localization performance in terms of localization precision and stability than the basic DV-Hop algorithm and some existing improved algorithms.
In REF, the locating performance is improved by modifying and weighting the average hop distance between anchor nodes.
11604163
Improved DV-Hop Node Localization Algorithm in Wireless Sensor Networks
{ "venue": null, "journal": "International Journal of Distributed Sensor Networks", "mag_field_of_study": [ "Computer Science" ] }
The success of the Semantic Web depends on the availability of ontologies as well as on the proliferation of web pages annotated with metadata conforming to these ontologies. Thus, a crucial question is where to acquire these metadata. In this paper we propose PANKOW (Pattern-based Annotation through Knowledge on the Web), a method which employs an unsupervised, pattern-based approach to categorize instances with regard to an ontology. The approach is evaluated against the manual annotations of two human subjects. The approach is implemented in OntoMat, an annotation tool for the Semantic Web and shows very promising results.
PANKOW REF proposed a method which employs an unsupervised, pattern-based approach to categorize instances with regard to an ontology.
6755749
Towards the self-annotating web
{ "venue": "WWW '04", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown to be fast while achieving state-of-the-art detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood score. We show that our model yields significant improvements over state-of-the-art object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to unseen categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation.
DeepMask REF proposes a network to classify whether the patch contains an object and then generates a mask.
140529
Learning to Segment Object Candidates
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Many named entities contain other named entities inside them. Despite this fact, the field of named entity recognition has almost entirely ignored nested named entity recognition, but due to technological, rather than ideological reasons. In this paper, we present a new technique for recognizing nested named entities, by using a discriminative constituency parser. To train the model, we transform each sentence into a tree, with constituents for each named entity (and no other syntactic structure). We present results on both newspaper and biomedical corpora which contain nested named entities. In three out of four sets of experiments, our model outperforms a standard semi-CRF on the more traditional top-level entities. At the same time, we improve the overall F-score by up to 30% over the flat model, which is unable to recover any nested entities.
REF proposed a CRF-based constituency parser for nested named entities such that each named entity is a constituent in the parse tree.
10573012
Nested Named Entity Recognition
{ "venue": "EMNLP", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
More recently, Exponential Linear Units (ELUs) REF showed significant improvements over ReLUs.
5273326
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
{ "venue": "ICLR 2016", "journal": "arXiv: Learning", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Discourse relations bind smaller linguistic units into coherent texts. Automatically identifying discourse relations is difficult, because it requires understanding the semantics of the linked arguments. A more subtle challenge is that it is not enough to represent the meaning of each argument of a discourse relation, because the relation may depend on links between lower-level components, such as entity mentions. Our solution computes distributed meaning representations for each discourse argument by composition up the syntactic parse tree. We also perform a downward compositional pass to capture the meaning of coreferent entity mentions. Implicit discourse relations are then predicted from these two representations, obtaining substantial improvements on the Penn Discourse Treebank.
REF computes distributed meaning representations for each discourse argument by composition up the syntactic parse tree.
15065468
One Vector is Not Enough: Entity-Augmented Distributed Semantics for Discourse Relations
{ "venue": "Transactions of the Association for Computational Linguistics", "journal": "Transactions of the Association for Computational Linguistics", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Point set registration is a key component in many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown non-rigid spatial transformation, large dimensionality of point set, noise and outliers, make the point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and non-rigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the GMM centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by re-parametrization of GMM centroid locations with rigid parameters and derive a closed form solution of the maximization step of the EM algorithm in arbitrary dimensions. In the non-rigid case, we impose the coherence constraint by regularizing the displacement field and using the variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method computation complexity to linear. We test the CPD algorithm for both rigid and non-rigid transformations in the presence of noise, outliers and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods.
Considering the fitting of two point clouds as a probability density estimation problem, Myronenko et al. REF proposed the Coherent Point Drift algorithm which encourages displacement vectors to point into similar directions to improve the coherence of the transformation.
10809031
Point-Set Registration: Coherent Point Drift
{ "venue": "IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 32, issue 12, pp. 2262-2275", "journal": null, "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Abstract: With the increasing demand on the usage of smart and networked cameras in intelligent and ambient technology environments, the development of algorithms for such resource-distributed networks is of great interest. Multi-view action recognition addresses many challenges dealing with view-invariance and occlusion, and due to the huge amount of data to process and communicate in real life applications, it is not easy to adapt these methods for use in smart camera networks. In this paper, we propose a distributed activity classification framework, in which we assume that several camera sensors are observing the scene. Each camera processes its own observations, and while communicating with other cameras, they come to an agreement about the activity class. Our method is based on recovering a low-rank matrix over consensus to perform a distributed matrix completion via convex optimization. Then, it is applied to the problem of human activity classification. We test our approach on IXMAS and MuHAVi datasets to show the performance and the feasibility of the method.
Mosabbeb et al. REF proposed a distributed system where each camera processes its own observations, and while communicating with other cameras, they come to an agreement about the activity class.
1961064
Multi-View Human Activity Recognition in Distributed Camera Sensor Networks
{ "venue": "Sensors (Basel, Switzerland)", "journal": "Sensors (Basel, Switzerland)", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
We consider AC electrical systems where each electrical device has a power demand expressed as a complex number, and there is a limit on the magnitude of total power supply. Motivated by this scenario, we introduce the complex-demand knapsack problem (C-KP), a new variation of the traditional knapsack problem, where each item is associated with a demand as a complex number, rather than a real number often interpreted as weight or size of the item. While keeping the same goal as to maximize the sum of values of the selected items, we put the capacity limit on the magnitude of the sum of satisfied demands. For C-KP, we prove its inapproximability by FPTAS (unless P = NP), as well as presenting a (1/2 − ε)-approximation algorithm. Furthermore, we investigate the selfish multi-agent setting where each agent is in charge of one item, and an agent may misreport the demand and value of his item for his own interest. We show a simple way to adapt our approximation algorithm to be monotone, which is sufficient for the existence of incentive compatible payments such that no agent has an incentive to misreport. Our results shed insight on the design of multi-agent systems for smart grid.
REF obtained a 1/2-approximation for the case where 0 ≤ φ ≤ π/2.
15287938
Complex-Demand Knapsack Problems and Incentives in AC Power Systems
{ "venue": null, "journal": "arXiv: Data Structures and Algorithms", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
The rich set of interactions between individuals in society [1] [2] [3] [4] [5] [6] [7] results in complex community structure, capturing highly connected circles of friends, families or professional cliques in a social network 3, [7] [8] [9] [10] . Thanks to frequent changes in the activity and communication patterns of individuals, the associated social and communication network is subject to constant evolution 7, [11] [12] [13] [14] [15] [16] . Our knowledge of the mechanisms governing the underlying community dynamics is limited, but is essential for a deeper understanding of the development and self-optimization of society as a whole [17] [18] [19] [20] [21] [22] . We have developed an algorithm based on clique percolation 23, 24 that allows us to investigate the time dependence of overlapping communities on a large scale, and thus uncover basic relationships characterizing community evolution. Our focus is on networks capturing the collaboration between scientists and the calls between mobile phone users. We find that large groups persist for longer if they are capable of dynamically altering their membership, suggesting that an ability to change the group composition results in better adaptability. The behaviour of small groups displays the opposite tendency-the condition for stability is that their composition remains unchanged. We also show that knowledge of the time commitment of members to a given community can be used for estimating the community's lifetime. These findings offer insight into the fundamental differences between the dynamics of small groups and large institutions. The data sets we consider are (1) the monthly list of articles in the Cornell University Library e-print condensed matter (cond-mat) archive spanning 142 months, with over 30,000 authors 25 , and (2) the record of phone calls between the customers of a mobile phone company spanning 52 weeks (accumulated over two-week-long periods), and containing the communication patterns of over 4 million users. Both types of collaboration events (a new article or a phone call) document the presence of social interaction between the involved individuals (nodes), and can be represented as (timedependent) links. The extraction of the changing link weights from the primary data is described in Supplementary Information. In Fig. 1a , b we show the local structure at a given time step in the two networks in the vicinity of a randomly chosen individual (marked by a red frame). The communities (social groups represented by more densely interconnected parts within a network of social links) are colour coded, so that black nodes/edges do not belong to any community, and those that simultaneously belong to two or more communities are shown in red. The two networks have rather different local structure: the collaboration network of scientists emerges as a one-mode projection of the bipartite graph between authors and papers, so it is quite dense and the overlap between communities is very significant. In contrast, in the phone-call network the communities are less interconnected and are often separated by one or more inter-community nodes/edges. Indeed, whereas the phone record captures the communication between two people, the publication record assigns to all individuals that contribute to a paper a fully connected clique. As a result, the phone data are dominated by single links, whereas the co-authorship data have many dense, highly connected neighbourhoods. 
Furthermore, the links in the phone network correspond to instant communication events, capturing a relationship as it happens. [Figure 1 caption (panel residue): a, local community structure of the co-authorship network around a randomly selected node; b, the same for the phone-call network; c, d, homogeneity of communities in zip-code and age, measured as ⟨n_real⟩/⟨n_rand⟩ and ⟨n_real⟩/s versus community size s; e, possible events in community evolution; f, identification of evolving communities: the links at t (blue) and at t + 1 (yellow) are merged into a joint graph (green), and since any CPM community at t or t + 1 is part of a CPM community in the joined graph, these can be used to match the two sets of communities.]
Finally, the most closely related work to ours is by Palla et al. REF , where the authors investigate the time dependence of overlapping communities on a large scale, uncovering basic relationships characterizing community evolution.
4420074
Quantifying social group evolution
{ "venue": "Nature", "journal": "Nature", "mag_field_of_study": [ "Medicine", "Biology", "Mathematics", "Physics" ] }
We present an effective immunization strategy for computer networks and populations with broad and, in particular, scale-free degree distributions. The proposed strategy, acquaintance immunization, calls for the immunization of random acquaintances of random nodes (individuals). The strategy requires no knowledge of the node degrees or any other global knowledge, as do targeted immunization strategies. We study analytically the critical threshold for complete immunization. We also study the strategy with respect to the SIR (susceptible-infected-removed) epidemiological model. We show that the immunization threshold is dramatically reduced with the suggested strategy, for all studied cases. PACS numbers: 02.50.Cw, 02.10.Ox, 89.20.Hh, 64.60.Ak It is well established that random immunization requires immunizing a very large fraction of a computer network, or population, in order to arrest epidemics that spread upon contact between infected nodes (or individuals) [1, 2, 3, 4, 5, 6, 7]. Many diseases require 80%-100% immunization (for example, measles requires 95% of the population to be immunized [1]). The same is true for the Internet, where stopping computer viruses requires almost 100% immunization [5, 6, 7]. On the other hand, targeted immunization of the most highly connected individuals [1, 5, 8, 9, 10, 11], while effective, requires global information about the network in question, rendering it impractical in many cases. Here, we develop a mathematical model and propose an effective strategy, based on the immunization of a small fraction of random acquaintances of randomly selected nodes. In this way, the most highly connected nodes are immunized, and the process prevents epidemics with a small finite immunization threshold and without requiring specific knowledge of the network. Social networks are known to possess a broad distribution of the number of links (contacts), k, emanating from a node (an individual) [12, 13, 14]. Examples are the web of sexual contacts [15], movie-actor networks, science citations and cooperation networks [16, 17], etc. Computer networks, both physical (such as the Internet [18]) and logical (such as the WWW [19], and e-mail [20] and trust networks [21]) are also known to possess wide, scale-free, distributions. Studies of percolation on broad-scale networks show that a large fraction f_c of the nodes need to be removed (immunized) before the integrity of the network is compromised. This is particularly true for scale-free networks, P(k) = ck^{−λ} (k ≥ m), where 2 < λ < 3 - the case of most known networks [12, 13, 14] - where the percolation threshold f_c → 1, and the network remains connected (contagious) even after removal of most of its nodes [6]. In other words, with a random immunization strategy almost all of the nodes need to be immunized
Cohen et al. REF proposed a mathematical model and an immunization policy based on a small fraction of random acquaintances, and analytically studied the critical threshold for complete immunization.
919625
Efficient Immunization Strategies for Computer Networks and Populations
{ "venue": "Phys. Rev. Lett. 91, 247901 (2003)", "journal": null, "mag_field_of_study": [ "Physics", "Biology", "Medicine" ] }
This study examines the relationship between use of Facebook, a popular online social network site, and the formation and maintenance of social capital. In addition to assessing bonding and bridging social capital, we explore a dimension of social capital that assesses one's ability to stay connected with members of a previously inhabited community, which we call maintained social capital. Regression analyses conducted on results from a survey of undergraduate students (N = 286) suggest a strong association between use of Facebook and the three types of social capital, with the strongest relationship being to bridging social capital. In addition, Facebook usage was found to interact with measures of psychological well-being, suggesting that it might provide greater benefits for users experiencing low self-esteem and low life satisfaction.
For online social networks, Ellison et al. REF defined friends as social capital in terms of an individual's ability to stay connected with members of a previously inhabited community.
11940919
The Benefits of Facebook “Friends:” Social Capital and College Students’ Use of Online Social Network Sites
{ "venue": "J. Computer-Mediated Communication", "journal": "J. Computer-Mediated Communication", "mag_field_of_study": [ "Psychology", "Computer Science" ] }
In this paper, we propose a methodology to make Binary Decision Diagrams (BDDs) and Boolean Satisfiability (SAT) Solvers cooperate. The underlying idea is simple: We start a verification task with BDDs, we go on with them as long as the problem remains of manageable size, then we switch to SAT, without losing the work done on the BDD domain. We propose target enlargement as an attempt to bring some of the advantages of state set manipulation from BDDs to SAT-based verification. We first "enlarge" the initial and target state sets of a given verification problem by affordable BDD manipulations. This step is carried out with a few breadth-first steps of traversal, or with what we call high-density dynamic abstraction, i.e., a new technique to collect under-approximate reachable state sets. Then, we perform SAT-based verification with the newly computed "enlarged" sets. We experimentally test our methodology within an industrial environment, the Intel BOolean VErifier BOVE. Preliminary results on standard benchmarks (the ISCAS'89, ISCAS'89-addendum, and VIS suites), and industrial ones (the IBM Formal Verification Benchmark Library) are provided. Results show interesting improvements over state-of-the-art techniques: We could decrease CPU time up to a 5x factor, when performing verification with the same depth, or we could increase the verification depth up to 30%, when performing verification within the same time limit.
Bischoff et al. REF propose a methodology to use BDDs and SAT solvers for the verification of programs in a bidirectional form similar to our bkind algorithm.
1027567
Exploiting Target Enlargement and Dynamic Abstraction within Mixed BDD and SAT Invariant Checking
{ "venue": null, "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Cloud computing is quickly becoming the platform of choice for many web services. Virtualization is the key underlying technology enabling cloud providers to host services for a large number of customers. Unfortunately, virtualization software is large, complex, and has a considerable attack surface. As such, it is prone to bugs and vulnerabilities that a malicious virtual machine (VM) can exploit to attack or obstruct other VMs -a major concern for organizations wishing to move "to the cloud." In contrast to previous work on hardening or minimizing the virtualization software, we eliminate the hypervisor attack surface by enabling the guest VMs to run natively on the underlying hardware while maintaining the ability to run multiple VMs concurrently. Our NoHype system embodies four key ideas: (i) pre-allocation of processor cores and memory resources, (ii) use of virtualized I/O devices, (iii) minor modifications to the guest OS to perform all system discovery during bootup, and (iv) avoiding indirection by bringing the guest virtual machine in more direct contact with the underlying hardware. Hence, no hypervisor is needed to allocate resources dynamically, emulate I/O devices, support system discovery after bootup, or map interrupts and other identifiers. NoHype capitalizes on the unique use model in cloud computing, where customers specify resource requirements ahead of time and providers offer a suite of guest OS kernels. Our system supports multiple tenants and capabilities commonly found in hosted cloud infrastructures. Our prototype utilizes Xen 4.0 to prepare the environment for guest VMs, and a slightly modified version of Linux 2.6 for the guest OS. Our evaluation with both SPEC and Apache benchmarks shows a roughly 1% performance gain when running applications on NoHype compared to running them on top of Xen 4.0. Our security analysis shows that, while there are some minor limitations with current commodity hardware, NoHype is a significant advance in the security of cloud computing.
In REF the authors explored the attack surface of modern hypervisors to evaluate the security of cloud-based applications.
15869079
Eliminating the hypervisor attack surface for a more secure cloud
{ "venue": "CCS '11", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Feature trajectories have shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We, also, investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e. KTH, YouTube, Hollywood2 and UCF sports.
Wang et al. REF sampled dense points from each frame and tracked them based on the displacement information from a dense optical flow field.
13537104
Action recognition by dense trajectories
{ "venue": "CVPR 2011", "journal": "CVPR 2011", "mag_field_of_study": [ "Computer Science" ] }
We consider neural networks with a single hidden layer and non-decreasing homogeneous activation functions like the rectified linear units. By letting the number of hidden units grow unbounded and using classical non-Euclidean regularization tools on the output weights, we provide a detailed theoretical analysis of their generalization performance, with a study of both the approximation and the estimation errors. We show in particular that they are adaptive to unknown underlying linear structures, such as the dependence on the projection of the input variables onto a low-dimensional subspace. Moreover, when using sparsity-inducing norms on the input weights, we show that high-dimensional non-linear variable selection may be achieved, without any strong assumption regarding the data and with a total number of variables potentially exponential in the number of observations. In addition, we provide a simple geometric interpretation to the non-convex problem of addition of a new unit, which is the core potentially hard computational element in the framework of learning from continuously many basis functions. We provide simple conditions for convex relaxations to achieve the same generalization error bounds, even when constant-factor approximations cannot be found (e.g., because it is NP-hard such as for the zero-homogeneous activation function). We were not able to find strong enough convex relaxations and leave open the existence or non-existence of polynomial-time algorithms.
In fact, adding even a single neuron to the model requires the solution of a non-convex problem where no efficient algorithm is known REF .
1474026
Breaking the Curse of Dimensionality with Convex Neural Networks
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Dataset augmentation, the practice of applying a wide array of domain-specific transformations to synthetically expand a training set, is a standard tool in supervised learning. While effective in tasks such as visual recognition, the set of transformations must be carefully designed, implemented, and tested for every new domain, limiting its re-use and generality. In this paper, we adopt a simpler, domain-agnostic approach to dataset augmentation. We start with existing data points and apply simple transformations such as adding noise, interpolating, or extrapolating between them. Our main insight is to perform the transformation not in input space, but in a learned feature space. A re-kindling of interest in unsupervised representation learning makes this technique timely and more effective. It is a simple proposal, but to-date one that has not been tested empirically. Working in the space of context vectors generated by sequence-to-sequence models, we demonstrate a technique that is effective for both static and sequential data.
DeVries and Taylor used simple transformations in the learned feature space to augment data REF .
15530352
Dataset Augmentation in Feature Space
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Effective visualization of vector fields relies on the ability to control the size and density of the underlying mapping to visual cues used to represent the field. In this paper we introduce the use of a reaction-diffusion model, already well known for its ability to form irregular spatio-temporal patterns, to control the size, density, and placement of the vector field representation. We demonstrate that it is possible to encode vector field information (orientation and magnitude) into the parameters governing a reaction-diffusion model to form a spot pattern with the correct orientation, size, and density, creating an effective visualization. To encode direction we texture the spots using a light to dark fading texture. We also show that it is possible to use the reaction-diffusion model to visualize an additional scalar value, such as the uncertainty in the orientation of the vector field. An additional benefit of the reaction-diffusion visualization technique arises from its automatic density distribution. This benefit suggests using the technique to augment other vector visualization techniques. We demonstrate this utility by augmenting a LIC visualization with a reaction-diffusion visualization. Finally, the reaction-diffusion visualization method provides a technique that can be used for streamline and glyph placement.
Sanderson et al. REF created a method for visualizing vector fields while potentially presenting uncertainty by using a reaction-diffusion model to generate texture patterns with variable shapes, sizes, and orientations.
15670
Display of Vector Fields Using a Reaction-Diffusion Model
{ "venue": "IEEE Visualization 2004", "journal": "IEEE Visualization 2004", "mag_field_of_study": [ "Computer Science" ] }
Wireless 802.11 hotspots have grown in an uncoordinated fashion with highly variable deployment densities. Such uncoordinated deployments, coupled with the difficulty of implementing coordination protocols, have often led to conflicting configurations (e.g., in choice of transmission power and channel of operation) among the corresponding Access Points (APs). Overall, such conflicts cause both unpredictable network performance and unfairness among clients of neighboring hotspots. In this paper, we focus on the fairness problem for uncoordinated deployments. We study this problem from the channel assignment perspective. Our solution is based on the notion of channel-hopping, and meets all the important design considerations for control methods in uncoordinated deployments: distributed in nature, minimal to zero coordination among APs belonging to different hotspots, simple to implement, and interoperable with existing standards. In particular, we propose a specific algorithm called MAXchop, which works efficiently when using only non-overlapping wireless channels, but is particularly effective in exploiting partially-overlapped channels that have been proposed in recent literature. We also evaluate how our channel assignment approach complements previously proposed carrier sensing techniques in providing further performance improvements. Through extensive simulations on real hotspot topologies and evaluation of a full implementation of this technique, we demonstrate the efficacy of these techniques for not only fairness but also aggregate throughput metrics. We believe that this is the first work that brings into focus the fairness properties of channel hopping techniques and we hope that the insights from this research will be applied to other domains where a fair division of a system's resources is an important consideration.
In REF , a channel hopping algorithm called MAXchop is proposed for uncoordinated networks to improve the fairness of resource distribution among neighboring cells.
2759775
Distributed channel management in uncoordinated wireless environments
{ "venue": "MobiCom '06", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Background: Thyroid cancer is the most common endocrine tumor with a steady increase in incidence. It is classified into multiple histopathological subtypes with potentially distinct molecular mechanisms. Identifying the most relevant genes and biological pathways reported in the thyroid cancer literature is vital for understanding the disease and developing targeted therapeutics. Results: We developed a large-scale text mining system to generate a molecular profiling of thyroid cancer subtypes. The system first uses a subtype classification method for the thyroid cancer literature, which employs a scoring scheme to assign different subtypes to articles. We evaluated the classification method on a gold standard derived from the PubMed Supplementary Concept annotations, achieving a micro-average F1-score of 85.9% for primary subtypes. We then used the subtype classification results to extract genes and pathways associated with different thyroid cancer subtypes and successfully unveiled important genes and pathways, including some instances that are missing from current manually annotated databases or most recent review articles. Conclusions: Identification of key genes and pathways plays a central role in understanding the molecular biology of thyroid cancer. An integration of subtype context can allow prioritized screening for diagnostic biomarkers and novel molecular targeted therapeutics. Source code used for this study is made freely available online at https://github.com/chengkun-wu/GenesThyCan.
We have used text mining to construct a molecular profiling (related genes and pathways) of thyroid cancer, classified by commonly seen subtypes REF .
2622219
Molecular profiling of thyroid cancer subtypes using large-scale text mining
{ "venue": "BMC Medical Genomics", "journal": "BMC Medical Genomics", "mag_field_of_study": [ "Medicine", "Biology" ] }
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three-stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35× to 49× without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine-tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9× to 13×; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35×, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49×, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has 3× to 4× layerwise speedup and 3× to 7× better energy efficiency.
A three-stage compression pipeline of pruning, trained quantization, and Huffman coding is proposed in REF .
2134321
Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
{ "venue": "ICLR 2016", "journal": "arXiv: Computer Vision and Pattern Recognition", "mag_field_of_study": [ "Computer Science" ] }
The challenge posed by the many-body problem in quantum physics originates from the difficulty of describing the non-trivial correlations encoded in the exponential complexity of the many-body wave function. Here we demonstrate that systematic machine learning of the wave function can reduce this complexity to a tractable computational form, for some notable cases of physical interest. We introduce a variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons. A reinforcement-learning scheme is then demonstrated, capable of either finding the ground state or describing the unitary time evolution of complex interacting quantum systems. We show that this approach achieves very high accuracy in the description of equilibrium and dynamical properties of prototypical interacting spin models in both one and two dimensions, thus offering a new powerful tool to solve the quantum many-body problem. The wave function Ψ is the fundamental object in quantum physics and possibly the hardest to grasp in a classical world. Ψ is a monolithic mathematical quantity that contains all the information on a quantum state, be it a single particle or a complex molecule. In principle, an exponential amount of information is needed to fully encode a generic many-body quantum state. However, Nature often proves herself benevolent, and a wave function representing a physical many-body system can be typically characterized by an amount of information much smaller than the maximum capacity of the corresponding Hilbert space. A limited amount of quantum entanglement, as well as the typicality of a small number of physical states, are then the blocks on which modern approaches build to solve the many-body Schrödinger's equation with a limited amount of classical resources. Numerical approaches directly relying on the wave function can either sample a finite number of physically relevant configurations or perform an efficient compression of the quantum state. Stochastic approaches, like quantum Monte Carlo (QMC) methods, belong to the first category and rely on probabilistic frameworks typically demanding a positive-semidefinite wave function [1] [2] [3]. Compression approaches instead rely on efficient representations of the wave function, most notably in terms of matrix product states (MPS) [4] [5] [6] or more general tensor networks [7, 8]. Examples of systems where existing approaches fail are however numerous, mostly due to the sign problem in QMC [9], and to the inefficiency of current compression approaches in high-dimensional systems. As a result, despite the striking success of these methods, a large number of unexplored regimes exist, including many interesting open problems. These encompass fundamental questions ranging from the dynamical properties of high-dimensional systems [10, 11] to the exact ground-state properties of strongly interacting fermions [12, 13]. At the heart of this lack of understanding lies the difficulty in finding a general strategy to reduce the exponential complexity of the full many-body wave function down to its most essential features [14]. In a much broader context, the problem resides in the realm of dimensional reduction and feature extraction. Among the most successful techniques to attack these problems, artificial neural networks play a prominent role [15]. They can perform exceedingly well in a variety of contexts ranging from image and speech recognition [16] to game playing [17].
Very recently, applications of neural networks to the study of physical phenomena have been introduced [18] [19] [20]. These have so far focused on the classification of complex phases of matter, when exact sampling of configurations from these phases is possible. The challenging goal of solving a many-body problem without prior knowledge of exact samples is nonetheless still unexplored, and the potential benefits of artificial intelligence in this task are at present substantially unknown. It appears therefore of fundamental and practical interest to understand whether an artificial neural network can modify and adapt itself to describe and analyze a quantum system. This ability could then be used to solve the quantum many-body problem in those regimes so far inaccessible by existing exact numerical approaches. Here we introduce a representation of the wave function in terms of artificial neural networks specified by a set of internal parameters W.
Machine learning approximation has also been used to speed up quantum computing kernels REF , in which Carleo and Troyer apply machine learning approximation to one of the greatest challenges in quantum physics: the many-body problem, which describes the complex correlations within the many-body wave function.
206651104
Solving the Quantum Many-Body Problem with Artificial Neural Networks
{ "venue": "Science 355, 602 (2017)", "journal": null, "mag_field_of_study": [ "Medicine", "Physics" ] }