Dataset schema: id (string, length 1-5), document_id (string, length 1-5), text_1 (string, length 78-2.56k), text_2 (string, length 95-23.3k), text_1_name (string, 1 class), text_2_name (string, 1 class).
901
900
Are neural networks biased toward simple functions? Does depth always help learn more complex features? Is training the last layer of a network as good as training all layers? These questions seem unrelated at face value, but in this work we give all of them a common treatment from the spectral perspective. We will study the spectra of the *Conjugate Kernel*, CK (also called the *Neural Network-Gaussian Process Kernel*), and the *Neural Tangent Kernel*, NTK. Roughly, the CK and the NTK tell us respectively "what a network looks like at initialization" and "what a network looks like during and after training." Their spectra then encode valuable information about the initial distribution and the training and generalization properties of neural networks. By analyzing the eigenvalues, we lend novel insights into the questions put forth at the beginning, and we verify these insights through extensive experiments on neural networks. We believe the computational tools we develop here for analyzing the spectra of CK and NTK serve as a solid foundation for future studies of deep neural networks. We have open-sourced the code for it and for generating the plots in this paper at this http URL.
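The paper's own spectral tooling sits behind the elided URL above; purely as a hedged illustration of what computing a CK (NNGP) spectrum involves, the following minimal numpy sketch evaluates the standard infinite-width ReLU CK recursion (the arc-cosine form) on a batch of unit-norm inputs and eigendecomposes the resulting Gram matrix. The depth, variances, and input distribution are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def relu_ck(X, depth=3, sigma_w2=2.0, sigma_b2=0.0):
    """Conjugate (NNGP) kernel of a deep ReLU MLP via the arc-cosine recursion."""
    K = sigma_w2 * X @ X.T / X.shape[1] + sigma_b2          # input-layer covariance
    for _ in range(depth):
        d = np.sqrt(np.diag(K))
        cos = np.clip(K / np.outer(d, d), -1.0, 1.0)
        theta = np.arccos(cos)
        # E[relu(u) relu(v)] for (u, v) jointly Gaussian with covariance K
        K = sigma_w2 / (2 * np.pi) * np.outer(d, d) * (np.sin(theta) + (np.pi - theta) * cos) + sigma_b2
    return K

X = np.random.default_rng(0).normal(size=(64, 10))
X /= np.linalg.norm(X, axis=1, keepdims=True)               # put inputs on the unit sphere
eigvals = np.linalg.eigvalsh(relu_ck(X, depth=4))[::-1]     # descending CK eigenvalues
print(eigvals[:5])                                          # fast decay hints at a simplicity bias
```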
Several recent trends in machine learning theory and practice, from the design of state-of-the-art Gaussian Processes to the convergence analysis of deep neural nets (DNNs) under stochastic gradient descent (SGD), have found it fruitful to study wide random neural networks. Central to these approaches are certain scaling limits of such networks. We unify these results by introducing a notion of a straightline tensor program that can express most neural network computations, and we characterize its scaling limit when its tensors are large and randomized. From our framework follows (1) the convergence of random neural networks to Gaussian processes for architectures such as recurrent neural networks, convolutional neural networks, residual networks, attention, and any combination thereof, with or without batch normalization; (2) conditions under which the gradient independence assumption -- that weights in backpropagation can be assumed to be independent from weights in the forward pass -- leads to correct computation of gradient dynamics, and corrections when it does not; (3) the convergence of the Neural Tangent Kernel, a recently proposed kernel used to predict training dynamics of neural networks under gradient descent, at initialization for all architectures in (1) without batch normalization. Mathematically, our framework is general enough to rederive classical random matrix results such as the semicircle and the Marchenko-Pastur laws, as well as recent results in neural network Jacobian singular values. We hope our work opens a way toward design of even stronger Gaussian Processes, initialization schemes to avoid gradient explosion and vanishing, and deeper understanding of SGD dynamics in modern architectures.
Abstract of query paper
Cite abstracts
902
901
The health effects of air pollution have been subject to intense study in recent decades. Exposure to pollutants such as airborne particulate matter and ozone has been associated with increases in morbidity and mortality, especially with regard to respiratory and cardiovascular diseases. Unfortunately, individuals do not have readily accessible methods by which to track their exposure to pollution. This paper proposes how pollution parameters like CO, NO2, O3, PM2.5, PM10 and SO2 can be monitored for respiratory and cardiovascular personalized health during outdoor exercise events. Using location-tracked activities, we synchronize them with public data sets from pollution sensors. For improved accuracy in estimation, we use heart rate data to understand breathing volume mapped with the local air quality sensors via constant GPS tracking.
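One plausible way to turn a GPS-tracked workout, a heart-rate stream, and nearby public sensors into a personal exposure estimate is sketched below. The function name, field names, and the linear heart-rate-to-minute-ventilation mapping are illustrative assumptions, not the paper's exact pipeline, and only PM2.5 is handled for brevity.

```python
import numpy as np

def inhaled_dose(track, sensors, hr_rest=60.0, hr_max=190.0,
                 ve_rest=8.0, ve_max=100.0):
    """Rough inhaled PM2.5 dose (micrograms) for one GPS-tracked workout.

    track:   list of dicts {t, lat, lon, hr} sampled along the activity (t in seconds)
    sensors: list of dicts {lat, lon, pm25} from a public monitoring network
    The linear heart-rate -> minute-ventilation mapping (L/min) is a stand-in;
    the paper's actual breathing-volume model may differ.
    """
    dose_ug = 0.0
    for prev, cur in zip(track, track[1:]):
        dt_min = (cur["t"] - prev["t"]) / 60.0
        # nearest public sensor to the current position (naive Euclidean match)
        s = min(sensors, key=lambda s: (s["lat"] - cur["lat"])**2 + (s["lon"] - cur["lon"])**2)
        frac = np.clip((cur["hr"] - hr_rest) / (hr_max - hr_rest), 0.0, 1.0)
        ve = ve_rest + frac * (ve_max - ve_rest)        # minute ventilation, L/min
        dose_ug += s["pm25"] * ve * dt_min / 1000.0     # ug/m^3 * L -> ug (1 m^3 = 1000 L)
    return dose_ug

track = [{"t": 0, "lat": 52.36, "lon": 4.90, "hr": 120},
         {"t": 300, "lat": 52.37, "lon": 4.91, "hr": 150}]
sensors = [{"lat": 52.36, "lon": 4.91, "pm25": 18.0}]
print(round(inhaled_dose(track, sensors), 3), "ug of PM2.5 inhaled")
```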
Background: The power of breathing (PoB) is used to estimate the mechanical workload of the respiratory system. The aim of this study was to investigate the effect of different tidal volume-respiratory rate combinations on the PoB when the elastic load is constant. In order to assure strict control of the experimental conditions, the PoB was calculated on an airway pressure-volume curve in mechanically ventilated patients. Methods: Ten patients received three different tidal volume-respiratory rate combinations while minute ventilation was constant. Respiratory mechanics, PoB and its elastic and resistive components were calculated. Alternative methods to estimate the elastic workload were assessed: elastic work of breathing per litre per minute, elastic workload index (the square root of elastic work of breathing multiplied by respiratory rate) and elastic double product of the respiratory system (the elastic pressure multiplied by respiratory rate). Results: Despite constant elastance and minute ventilation, the elastic PoB showed an increment greater than 200% from the lower to the greater tidal volume, accounting for approximately 80% of the whole PoB increment. On the contrary, elastic work of breathing per litre per minute, elastic workload index and elastic double product did not change. Conclusion: Changes in breathing pattern markedly affect the PoB despite constant mechanical load. Other indexes could assess the elastic workload without tidal volume dependence. Power of breathing use should be avoided to compare different mechanical loads or efficiencies of the respiratory muscles when tidal volume is variable. Abstract Traditional approaches to mechanical ventilation use tidal volumes of 10 to 15 ml per kilogram of body weight and may cause stretch-induced lung injury in patients with acute lung injury and the acute respiratory distress syndrome. We therefore conducted a trial to determine whether ventilation with lower tidal volumes would improve the clinical outcomes in these patients. Patients with acute lung injury and the acute respiratory distress syndrome were enrolled in a multicenter, randomized trial. The trial compared traditional ventilation treatment, which involved an initial tidal volume of 12 ml per kilogram of predicted body weight and an airway pressure measured after a 0.5-second pause at the end of inspiration (plateau pressure) of 50 cm of water or less, with ventilation with a lower tidal volume, which involved an initial tidal volume of 6 ml per kilogram of predicted body weight and a plateau pressure of 30 cm of water or less. The primary outcomes were death before a patient was discharged home and was breathing without assistance and the number of days without ventilator use from day 1 to day 28. The trial was stopped after the enrollment of 861 patients because mortality was lower in the group treated with lower tidal volumes than in the group treated with traditional tidal volumes (31.0 percent vs. 39.8 percent, P=0.007), and the number of days without ventilator use during the first 28 days after randomization was greater in this group (mean ±SD, 12±11 vs. 10±11; P=0.007). The mean tidal volumes on days 1 to 3 were 6.2±0.8 and 11.8±0.8 ml per kilogram of predicted body weight (P
Abstract of query paper
Cite abstracts
903
902
The health effects of air pollution have been subject to intense study in recent decades. Exposure to pollutants such as airborne particulate matter and ozone has been associated with increases in morbidity and mortality, especially with regard to respiratory and cardiovascular diseases. Unfortunately, individuals do not have readily accessible methods by which to track their exposure to pollution. This paper proposes how pollution parameters like CO, NO2, O3, PM2.5, PM10 and SO2 can be monitored for respiratory and cardiovascular personalized health during outdoor exercise events. Using location-tracked activities, we synchronize them with public data sets from pollution sensors. For improved accuracy in estimation, we use heart rate data to understand breathing volume mapped with the local air quality sensors via constant GPS tracking.
Abstract Background Although the health effects of long-term exposure to air pollution are well established, it is difficult to effectively communicate the health risks of this (largely invisible) risk factor to the public and policy makers. The purpose of this study is to develop a method that expresses the health effects of air pollution in an equivalent number of daily passively smoked cigarettes. Methods Defined changes in PM2.5, nitrogen dioxide (NO2) and Black Carbon (BC) concentration were expressed into a number of passively smoked cigarettes, based on equivalent health risks for four outcome measures: Low Birth Weight ( 1 ), cardiovascular mortality and lung cancer. To describe the strength of the relationship with ETS and air pollutants, we summarized the epidemiological literature using published or new meta-analyses. Results Realistic increments of 10 µg/m3 in PM2.5 and NO2 concentration and a 1 µg/m3 increment in BC concentration correspond to on average (standard error in parentheses) 5.5 (1.6), 2.5 (0.6) and 4.0 (1.2) passively smoked cigarettes per day across the four health endpoints, respectively. The uncertainty reflects differences in equivalence between the health endpoints and uncertainty in the concentration response functions. The health risk of living along a major freeway in Amsterdam is, compared to a counterfactual situation with ‘clean’ air, equivalent to 10 daily passively smoked cigarettes. Conclusions We developed a method that expresses the health risks of air pollution and the health benefits of better air quality in a simple, appealing manner. The method can be used at the national, regional and local level. Evaluation of the usefulness of the method as a communication tool is needed. Combustion emissions adversely impact air quality and human health. A multiscale air quality model is applied to assess the health impacts of major emissions sectors in the United States. Emissions are classified according to six different sources: electric power generation, industry, commercial and residential sources, road transportation, marine transportation and rail transportation. Epidemiological evidence is used to relate long-term population exposure to sector-induced changes in the concentrations of PM2.5 and ozone to incidences of premature death. Total combustion emissions in the U.S. account for about 200,000 (90% CI: 90,000-362,000) premature deaths per year in the U.S. due to changes in PM2.5 concentrations, and about 10,000 (90% CI: -1,000 to 21,000) deaths due to changes in ozone concentrations. The largest contributors for both pollutant-related mortalities are road transportation, causing ~53,000 (90% CI: 24,000-95,000) PM2.5-related deaths and ~5,000 (90% CI: -900 to 11,000) ozone-related early deaths per year, and power generation, causing ~52,000 (90% CI: 23,000-94,000) PM2.5-related and ~2,000 (90% CI: -300 to 4,000) ozone-related premature mortalities per year. Industrial emissions contribute to ~41,000 (90% CI: 18,000-74,000) early deaths from PM2.5 and ~2,000 (90% CI: 0-4,000) early deaths from ozone. The results are indicative of the extent to which policy measures could be undertaken in order to mitigate the impact of specific emissions from different sectors, in particular black carbon emissions from road transportation and sulfur dioxide emissions from power generation.
Abstract of query paper
Cite abstracts
904
903
This paper proposes a method to guide tensor factorization using class labels. Furthermore, it shows the advantages of using the proposed method in identifying nodes that play a special role in multi-relational networks, e.g. spammers. Most complex systems involve multiple types of relationships and interactions among entities. Combining information from different relationships may be crucial for various prediction tasks. Instead of creating distinct prediction models for each type of relationship, in this paper we present a tensor factorization approach based on RESCAL, which collectively exploits all existing relations. We extend RESCAL to produce a semi-supervised factorization method that combines a classification error term with the standard factor optimization process. The coupled optimization approach models the tensorial data, assimilating observed information from all the relations while also taking into account classification performance. Our evaluation on real-world social network data shows that incorporating supervision, when available, leads to models that are more accurate.
The constant advances in sequencing technology have redefined the way genome sequencing is performed. They are able to produce tens of millions of short sequences (reads), during a single experiment, and with a much lower cost than previously possible. Due to this massive amount of data, efficient algorithms for mapping these reads to reference sequences are in great demand, and recently, there has been ample work for publishing such algorithms. In this paper, we study a different version of this problem: mapping these reads to a dynamically changing genomic sequence. We propose a new practical algorithm, which employs a suitable data structure that takes into account potential dynamic effects (replacements, insertions, deletions) on the genomic sequence. The presented experimental results demonstrate that the proposed approach can be applied to address the problem of mapping millions of reads to multiple genomic sequences.
Abstract of query paper
Cite abstracts
905
904
This paper proposes a method to guide tensor factorization using class labels. Furthermore, it shows the advantages of using the proposed method in identifying nodes that play a special role in multi-relational networks, e.g. spammers. Most complex systems involve multiple types of relationships and interactions among entities. Combining information from different relationships may be crucial for various prediction tasks. Instead of creating distinct prediction models for each type of relationship, in this paper we present a tensor factorization approach based on RESCAL, which collectively exploits all existing relations. We extend RESCAL to produce a semi-supervised factorization method that combines a classification error term with the standard factor optimization process. The coupled optimization approach models the tensorial data, assimilating observed information from all the relations while also taking into account classification performance. Our evaluation on real-world social network data shows that incorporating supervision, when available, leads to models that are more accurate.
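A hedged sketch of the coupled objective described above: a RESCAL-style reconstruction term over all relation slices plus a logistic classification term on the shared entity factors, optimized jointly by plain gradient descent. The paper's actual optimizer, regularization, and initialization may differ.

```python
import numpy as np

def semi_supervised_rescal(X, y, labeled, rank=5, lam=1.0, lr=1e-3, iters=300, seed=0):
    """Jointly minimize sum_k ||A R_k A^T - X_k||_F^2 + lam * logistic loss on labeled rows of A.

    X: list of K (n x n) adjacency slices, one per relation type
    y: 0/1 labels (e.g. spammer / not) for the entities indexed by `labeled`
    """
    n = X[0].shape[0]
    rng = np.random.default_rng(seed)
    A = 0.1 * rng.standard_normal((n, rank))                 # shared entity factors
    R = [0.1 * rng.standard_normal((rank, rank)) for _ in X]
    w = np.zeros(rank)                                       # linear classifier on A
    y = np.asarray(y, dtype=float)
    for _ in range(iters):
        gA = np.zeros_like(A)
        for k, Xk in enumerate(X):
            E = A @ R[k] @ A.T - Xk                          # residual of slice k
            gA += 2 * (E @ A @ R[k].T + E.T @ A @ R[k])
            R[k] -= lr * 2 * (A.T @ E @ A)
        s = 1.0 / (1.0 + np.exp(-(A[labeled] @ w)))          # classifier predictions
        gA[labeled] += lam * (s - y)[:, None] * w            # classification term pulls on A
        w -= lr * lam * (A[labeled].T @ (s - y))
        A -= lr * gA
    return A, R, w

rng = np.random.default_rng(1)
X = [rng.integers(0, 2, size=(20, 20)).astype(float) for _ in range(3)]
A, R, w = semi_supervised_rescal(X, y=[1, 1, 0, 0], labeled=[0, 1, 2, 3])
```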
We review the method of Parallel Factor Analysis, which simultaneously fits multiple two-way arrays or ‘slices’ of a three-way array in terms of a common set of factors with differing relative weights in each ‘slice’. Mathematically, it is a straightforward generalization of the bilinear model of factor (or component) analysis ($x_{ij} = \sum_{r=1}^{R} a_{ir} b_{jr}$) to a trilinear model ($x_{ijk} = \sum_{r=1}^{R} a_{ir} b_{jr} c_{kr}$). Despite this simplicity, it has an important property not possessed by the two-way model: if the latent factors show adequately distinct patterns of three-way variation, the model is fully identified; the orientation of factors is uniquely determined by minimizing residual error, eliminating the need for a separate ‘rotation’ phase of analysis. The model can be used several ways. It can be directly fit to a three-way array of observations with (possibly incomplete) factorial structure, or it can be indirectly fit to the original observations by fitting a set of covariance matrices computed from the observations, with each matrix corresponding to a two-way subset of the data. Even more generally, one can simultaneously analyze covariance matrices computed from different samples, perhaps corresponding to different treatment groups, different kinds of cases, data from different studies, etc. To demonstrate the method we analyze data from an experiment on right vs. left cerebral hemispheric control of the hands during various tasks. The factors found appear to correspond to the causal influences manipulated in the experiment, revealing their patterns of influence in all three ways of the data. Several generalizations of the parallel factor analysis model are currently under development, including ones that combine parallel factors with Tucker-like factor ‘interactions’. Of key importance is the need to increase the method's robustness against nonstationary factor structures and qualitative (nonproportional) factor change. Tag recommendation is the task of predicting a personalized list of tags for a user given an item. This is important for many websites with tagging capabilities like last.fm or delicious. In this paper, we propose a method for tag recommendation based on tensor factorization (TF). In contrast to other TF methods like higher order singular value decomposition (HOSVD), our method RTF ('ranking with tensor factorization') directly optimizes the factorization model for the best personalized ranking. RTF handles missing values and learns from pairwise ranking constraints. Our optimization criterion for TF is motivated by a detailed analysis of the problem and of interpretation schemes for the observed data in tagging systems. In all, RTF directly optimizes for the actual problem using a correct interpretation of the data. We provide a gradient descent algorithm to solve our optimization problem. We also provide an improved learning and prediction method with runtime complexity analysis for RTF. The prediction runtime of RTF is independent of the number of observations and only depends on the factorization dimensions. Besides the theoretical analysis, we empirically show that our method outperforms other state-of-the-art tag recommendation methods like FolkRank, PageRank and HOSVD both in quality and prediction runtime. The Semantic Web fosters novel applications targeting a more efficient and satisfying exploitation of the data available on the web, e.g. faceted browsing of linked open data.
Large amounts and high diversity of knowledge in the Semantic Web pose the challenging question of appropriate relevance ranking for producing fine-grained and rich descriptions of the available data, e.g. to guide the user along the most promising knowledge aspects. Existing methods for graph-based authority ranking lack support for fine-grained latent coherence between resources and predicates (i.e. support for link semantics in the linked data model). In this paper, we present TripleRank, a novel approach for faceted authority ranking in the context of RDF knowledge bases. TripleRank captures the additional latent semantics of Semantic Web data by means of statistical methods in order to produce richer descriptions of the available data. We model the Semantic Web by a 3-dimensional tensor that enables the seamless representation of arbitrary semantic links. For the analysis of that model, we apply the PARAFAC decomposition, which can be seen as a multi-modal counterpart to Web authority ranking with HITS. The results are groupings of resources and predicates that characterize their authority and navigational (hub) properties with respect to identified topics. We have applied TripleRank to multiple data sets from the linked open data community and gathered encouraging feedback in a user evaluation where TripleRank results have been exploited in a faceted browsing scenario.
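For reference, the trilinear PARAFAC/CP model quoted above is just a sum of rank-one outer products and can be reconstructed in one einsum call (fitting the factors, e.g. by ALS, is not shown here).

```python
import numpy as np

# PARAFAC/CP model: x_ijk = sum_r a_ir * b_jr * c_kr
rng = np.random.default_rng(0)
I, J, K, R = 6, 5, 4, 3
A, B, C = (rng.standard_normal((I, R)),
           rng.standard_normal((J, R)),
           rng.standard_normal((K, R)))
X = np.einsum("ir,jr,kr->ijk", A, B, C)   # assemble the 3-way array from its factors
print(X.shape)                             # (6, 5, 4)
```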
Abstract of query paper
Cite abstracts
906
905
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
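TCBS couples allocation with collision-aware path planning through a conflict tree; the hypothetical sketch below only illustrates the much simpler decoupled baseline of enumerating task-to-robot assignments and scoring each one with single-agent grid BFS distances, i.e. it ignores inter-robot collisions and is not the authors' algorithm.

```python
import itertools
from collections import deque

def bfs_dist(grid, start, goal):
    """Shortest path length on a 4-connected grid; obstacles are marked '#'."""
    q, seen = deque([(start, 0)]), {start}
    while q:
        (r, c), d = q.popleft()
        if (r, c) == goal:
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] != '#' and (nr, nc) not in seen:
                seen.add((nr, nc))
                q.append(((nr, nc), d + 1))
    return float('inf')

def best_assignment(grid, robots, tasks):
    """Brute-force one-task-per-robot allocation minimizing total travel cost."""
    best = (float('inf'), None)
    for perm in itertools.permutations(tasks, len(robots)):
        cost = sum(bfs_dist(grid, r, t[0]) + bfs_dist(grid, *t)
                   for r, t in zip(robots, perm))
        best = min(best, (cost, perm))
    return best

grid = ["....", ".#..", "...."]
robots = [(0, 0), (2, 3)]
tasks = [((0, 3), (2, 0)), ((2, 1), (0, 2))]     # (pickup, delivery) pairs
print(best_assignment(grid, robots, tasks))
```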
In the pickup and delivery problem with time windows (PDPTW), vehicles have to transport loads from origins to destinations respecting capacity and time constraints. In this paper, we present a two-phase method to solve the PDPTW. In the first phase, we apply a novel construction heuristics to generate an initial solution. In the second phase, a tabu search method is proposed to improve the solution. Another contribution of this paper is a strategy to generate good problem instances and benchmarking solutions for PDPTW, based on Solomon's benchmark test cases for VRPTW. Experimental results show that our approach yields very good solutions when compared with the benchmarking solutions. The concept of public logistics terminals (multi-company distribution centers) has been proposed in Japan to help alleviate traffic congestion, environment, energy and labor costs. These facilities allow more efficient logistics systems to be established and they facilitate the implementation of advanced information systems and cooperative freight systems. This paper describes a mathematical model developed for determining the optimal size and location of public logistics terminals. Queuing theory and nonlinear programming techniques are used to determine the best solution. The model explicitly takes into account traffic conditions in the network and was successfully applied to an actual road network in the Kyoto-Osaka area in Japan. This paper is the second part of a comprehensive survey on routing problems involving pickups and deliveries. Basically, two problem classes can be distinguished. The first part dealt with the transportation of goods from the depot to linehaul customers and from backhaul customers to the depot. The second part now considers all those problems where goods are transported between pickup and delivery locations, denoted as Vehicle Routing Problems with Pickups and Deliveries (VRPPD). These are the Pickup and Delivery Vehicle Routing Problem (PDVRP – unpaired pickup and delivery points), the classical Pickup and Delivery Problem (PDP – paired pickup and delivery points), and the Dial-A-Ride Problem (DARP – passenger transportation between paired pickup and delivery points and user inconvenience taken into consideration). Single as well as multi vehicle mathematical problem formulations for all three VRPPD types are given, and the respective exact, heuristic, and metaheuristic solution methods are discussed. In pickup and delivery problems vehicles have to transport loads from origins to destinations without transshipment at intermediate locations. In this paper, we discuss several characteristics that distinguish them from standard vehicle routing problems and present a survey of the problem types and solution methods found in the literature. In this paper, we introduce the Intersection Graph Method for solving the AGV Flow Path Optimization Model developed by Kaspi and Tanchoco (1990). A branch-and-bound procedure is described wherein only a reduced subset of all nodes in the flow path network is considered. Only intersection nodes are used to obtain optimal solutions. Two examples are given to illustrate the proposed method.
Abstract of query paper
Cite abstracts
907
906
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
We consider a graph with n vertices, and p<n pebbles of m colors. A pebble move consists of transferring a pebble from its current host vertex to an adjacent unoccupied vertex. The problem is to move the pebbles to a given new color arrangement. An algorithm for path planning to a goal with a mobile robot in an unknown environment is presented. The robot maps the environment only to the extent necessary to achieve the goal. Mapping is achieved using tactile sensing while the robot is executing a path to the specified goal. Paths are generated by treating unknown regions in the environment as free space. As obstacles are encountered en route to a goal, the model of the environment is updated and a new path to the goal is planned and executed. Initially the paths to the goal generated by this algorithm will be negotiable paths. However, as the robot acquires more knowledge about the environment, the length of the planned paths will be optimized. The optimization criteria can be modified to favor or avoid unexplored regions in the environment. The algorithm makes use of the quadtree data structure to model the environment and uses the distance transform methodology to generate paths for the robot to execute. In this paper we address the demand for flexibility and economic efficiency in industrial autonomous guided vehicle (AGV) systems by the use of cloud computing. We propose a cloud-based architecture that moves parts of mapping, localization and path planning tasks to a cloud server. We use a cooperative long-term Simultaneous Localization and Mapping (SLAM) approach which merges environment perception of stationary sensors and mobile robots into a central Holistic Environment Model (HEM). Further, we deploy a hierarchical cooperative path planning approach using Conflict-Based Search (CBS) to find optimal sets of paths which are then provided to the mobile robots. For communication we utilize the Manufacturing Service Bus (MSB) which is a component of the manufacturing cloud platform Virtual Fort Knox (VFK). We demonstrate the feasibility of this approach in a real-life industrial scenario. Additionally, we evaluate the system's communication and the planner for various numbers of agents. One of the standing challenges in multi-robot systems is the ability to reliably coordinate motions of multiple robots in environments where the robots are subject to disturbances. We consider disturbances that force the robot to temporarily stop and delay its advancement along its planned trajectory which can be used to model, e.g., passing-by humans for whom the robots have to yield. Although reactive collision-avoidance methods are often used in this context, they may lead to deadlocks between robots. We design a multi-robot control strategy for executing coordinated trajectories computed by a multi-robot trajectory planner and give a proof that the strategy is safe and deadlock-free even when robots are subject to delaying disturbances. Our simulations show that the proposed strategy scales significantly better with the intensity of disturbances than the naive liveness-preserving approach. The empirical results further confirm that the proposed approach is more reliable and also more efficient than state-of-the-art reactive techniques. Cooperative pathfinding is a problem of finding a set of non-conflicting trajectories for a number of mobile agents. Its applications include planning for teams of mobile robots, such as autonomous aircraft, cars, or underwater vehicles.
The state-of-the-art algorithms for cooperative pathfinding typically rely on some heuristic forward-search pathfinding technique, where A* is often the algorithm of choice. Here, we propose MA-RRT*, a novel algorithm for multi-agent path planning that builds upon a recently proposed asymptotically-optimal sampling-based algorithm for finding single-agent shortest path called RRT*. We experimentally evaluate the performance of the algorithm and show that the sampling-based approach offers better scalability than the classical forward-search approach in relatively large, but sparse environments, which are typical in real-world applications such as multi-aircraft collision avoidance. In this paper, we study the structure and computational complexity of optimal multi-robot path planning problems on graphs. Our results encompass three formulations of the discrete multi-robot path planning problem, including a variant that allows synchronous rotations of robots along fully occupied, disjoint cycles on the graph. Allowing rotation of robots provides a more natural model for multi-robot path planning because robots can communicate. Our optimality objectives are to minimize the total arrival time, the makespan (last arrival time), and the total distance. On the structure side, we show that, in general, these objectives demonstrate a pairwise Pareto optimal structure and cannot be simultaneously optimized. On the computational complexity side, we extend previous work and show that, regardless of the underlying multi-robot path planning problem, these objectives are all intractable to compute. In particular, our NP-hardness proof for the time optimal versions, based on a minimal and direct reduction from the 3-satisfiability problem, shows that these problems remain NP-hard even when there are only two groups of robots (i.e. robots within each group are interchangeable). We address the problem of optimal pathfinding for multiple agents. Given a start state and a goal state for each of the agents, the task is to find minimal paths for the different agents while avoiding collisions. Previous work on solving this problem optimally, used traditional single-agent search variants of the A* algorithm. We present a novel formalization for this problem which includes a search tree called the increasing cost tree (ICT) and a corresponding search algorithm, called the increasing cost tree search (ICTS) that finds optimal solutions. ICTS is a two-level search algorithm. The high-level phase of ICTS searches the increasing cost tree for a set of costs (cost per agent). The low-level phase of ICTS searches for a valid path for every agent that is constrained to have the same cost as given by the high-level phase. We analyze this new formalization, compare it to the A* search formalization and provide the pros and cons of each. Following, we show how the unique formalization of ICTS allows even further pruning of the state space by grouping small sets of agents and identifying unsolvable combinations of costs. Experimental results on various domains show the benefits and limitations of our new approach. A speedup of up to 3 orders of magnitude was obtained in some cases. Autonomous robot teams that simultaneously dispatch transportation tasks are playing more and more an important role in present logistic centers and manufacturing plants. In this paper we consider the problem of robot motion planning for large robot teams in the industrial domain. 
We present adaptive road map optimization (ARMO) that is capable of adapting the road map whenever the environment has changed. Based on linear programming, ARMO computes an optimal road map configuration according to environmental constraints (including human whereabouts) and the demand for transportation tasks from loading stations in the plant. For detecting dynamic changes, the environment is described by a grid map augmented with a hidden Markov model (HMM). We show experimentally that ARMO outperforms decoupled planning in terms of computation time and time needed for task completion.
Abstract of query paper
Cite abstracts
908
907
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
We study the TAPF (combined target-assignment and path-finding) problem for teams of agents in known terrain, which generalizes both the anonymous and non-anonymous multi-agent path-finding problems. Each of the teams is given the same number of targets as there are agents in the team. Each agent has to move to exactly one target given to its team such that all targets are visited. The TAPF problem is to first assign agents to targets and then plan collision-free paths for the agents to their targets in a way such that the makespan is minimized. We present the CBM (Conflict-Based Min-Cost-Flow) algorithm, a hierarchical algorithm that solves TAPF instances optimally by combining ideas from anonymous and non-anonymous multi-agent path-finding algorithms. On the low level, CBM uses a min-cost max-flow algorithm on a time-expanded network to assign all agents in a single team to targets and plan their paths. On the high level, CBM uses conflict-based search to resolve collisions among agents in different teams. Theoretically, we prove that CBM is correct, complete and optimal. Experimentally, we show the scalability of CBM to TAPF instances with dozens of teams and hundreds of agents and adapt it to a simulated warehouse system.
Abstract of query paper
Cite abstracts
909
908
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
Genetic Algorithms (GAs) can efficiently produce high quality results for hard combinatorial real world problems such as the Vehicle Routing Problem (VRP). Genetic Vehicle Representation (GVR), a recent approach to solving instances of the VRP with a GA, produces competitive or superior results to the standard benchmark problems. This work extends GVR research by presenting a more precise mathematical model of GVR than in previous works and a thorough comparison of GVR to Path Based Representation approaches. A suite of metrics that measures GVR's efficiency and effectiveness provides an adequate characterization of the jagged search landscape. A new variation of a crossover operator is introduced. A previously unmentioned insight about the convergence rate of the search is also noted that is especially important to the application of a priori and dynamic routing for swarms of Unmanned Aerial Vehicles (UAVs). Results indicate that the search is robust, and it exponentially drives toward high quality solutions in relatively short time. Consequently, a GA with GVR encoding is capable of providing a state-of-the-art engine for a UAV routing system or related application.
Abstract of query paper
Cite abstracts
910
909
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
Existing approaches to multirobot coordination separate scheduling and task allocation, but finding the optimal schedule with joint tasks and spatial constraints requires robots to simultaneously solve the scheduling, task allocation, and path planning problems. We present a formal description of the multirobot joint task allocation problem with heterogeneous capabilities and spatial constraints and an instantiation of the problem for the search and rescue domain. We introduce a novel declarative framework for modeling the problem as a mixed integer linear programming (MILP) problem and present a centralized anytime algorithm with error bounds. We demonstrate that our algorithm can outperform standard MILP solving techniques, greedy heuristics, and a market based approach which separates scheduling and task allocation.
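The cited work formulates joint scheduling and allocation as a MILP; as a much smaller taste of that modeling style, here is a hedged sketch of a plain one-task-per-robot assignment MILP, assuming SciPy >= 1.9 provides scipy.optimize.milp. The cost matrix is made up, and the paper's formulation with joint tasks, heterogeneous capabilities and spatial constraints is far richer.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint

# Hypothetical travel costs: rows = robots, cols = tasks
cost = np.array([[4., 1., 3.],
                 [2., 0., 5.],
                 [3., 2., 2.]])
n_r, n_t = cost.shape
c = cost.ravel()                                  # decision variables x_ij, row-major

# Each task is assigned to exactly one robot.
A_task = np.zeros((n_t, n_r * n_t))
for j in range(n_t):
    A_task[j, j::n_t] = 1
# Each robot takes at most one task.
A_robot = np.zeros((n_r, n_r * n_t))
for i in range(n_r):
    A_robot[i, i * n_t:(i + 1) * n_t] = 1

res = milp(c=c,
           constraints=[LinearConstraint(A_task, 1, 1),
                        LinearConstraint(A_robot, 0, 1)],
           integrality=np.ones_like(c))
print(res.x.reshape(n_r, n_t), res.fun)           # 0/1 assignment matrix and its total cost
```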
Abstract of query paper
Cite abstracts
911
910
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
We consider the multi-robot task allocation (MRTA) problem in an initially unknown environment. The objective of the MRTA problem is to find a schedule or sequence of tasks that should be performed by a set of robots so that the cost or energy expended by the robots is minimized. Existing solutions for the MRTA problem mainly concentrate on finding an efficient task allocation among robots, without directly incorporating changes to tasks’ costs originating from changes in robots’ paths due to dynamically detected obstacles while moving between tasks. Dynamically updating path costs is an important aspect as changing path costs can alter the task sequence for robots that corresponds to the minimum cost. In this paper, we attempt to address this problem by developing an algorithm called MRTA-RTPP (MRTA with Real-time Path Planning) by integrating a greedy MRTA algorithm for task planning with a Field D*-based path planning algorithm. Our technique is capable of handling dynamic changes in a robot’s path costs due to static as well as mobile obstacles and computes a new task schedule if the original schedule is no longer optimal due to the robots’ replanned paths. We have verified our proposed technique on physical Corobot robots that perform surveillance-like tasks by visiting a set of locations. Our experimental results show that our MRTA technique is able to handle dynamic path changes while reducing the cost of the schedule to the robots.
Abstract of query paper
Cite abstracts
912
911
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
The multi-agent path-finding (MAPF) problem has recently received a lot of attention. However, it does not capture important characteristics of many real-world domains, such as automated warehouses, where agents are constantly engaged with new tasks. In this paper, we therefore study a lifelong version of the MAPF problem, called the multi-agent pickup and delivery (MAPD) problem. In the MAPD problem, agents have to attend to a stream of delivery tasks in an online setting. One agent has to be assigned to each delivery task. This agent has to first move to a given pickup location and then to a given delivery location while avoiding collisions with other agents. We present two decoupled MAPD algorithms, Token Passing (TP) and Token Passing with Task Swaps (TPTS). Theoretically, we show that they solve all well-formed MAPD instances, a realistic subclass of MAPD instances. Experimentally, we compare them against a centralized strawman MAPD algorithm without this guarantee in a simulated warehouse system. TP can easily be extended to a fully distributed MAPD algorithm and is the best choice when real-time computation is of primary concern since it remains efficient for MAPD instances with hundreds of agents and tasks. TPTS requires limited communication among agents and balances well between TP and the centralized MAPD algorithm. Robotics technology has recently matured sufficiently to deploy autonomous robotic systems for daily use in several applications: from disaster response to environmental monitoring and logistics. In such applications, robots must establish collaborative interactions so as to achieve their individual and collective goals and a key problem is for robots to make individual decisions so as to optimize a system-wide objective function. This problem is typically referred to as coordination. In this paper, we first describe modern optimization techniques for coordination in multi-robot systems. Specifically, we focus on approaches that are based on algorithms widely used to solve graphical models and constraint optimization problems, such as the max-sum algorithm. We then analyze the coordination problem faced by a set of robots operating in a warehouse logistic application. In this context robots must transport items from loading to unloading bays so as to complete packages to be delivered to customers. Robots must cooperate to maximize the number of packages completed in the unit of time. To this end a crucial component is to avoid interference when moving in the environment. We show how such a problem can be formalized as a Distributed Constrained Optimization problem and we provide a solution based on the binary max-sum algorithm. Finally, we provide a quantitative evaluation of our approach in a simulated scenario using standard robotics tools (ROS and Gazebo).
Abstract of query paper
Cite abstracts
913
912
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
We present a hierarchical planning system and its application to robotic manipulation. The novel features of the system are: 1) it finds high-quality kinematic solutions to task-level problems; 2) it takes advantage of subtask-specific irrelevance information, reusing optimal solutions to state-abstracted sub-problems across the search space. We briefly describe how the system handles uncertainty during plan execution, and present results on discrete problems as well as pick-and-place tasks for a mobile robot.
Abstract of query paper
Cite abstracts
914
913
Comments on social media are very diverse, in terms of content, style and vocabulary, which makes generating comments much more challenging than other existing natural language generation (NLG) tasks. Besides, since different users have different expression habits, it is necessary to take the user's profile into consideration when generating comments. In this paper, we introduce the task of automatic generation of personalized comment (AGPC) for social media. Based on tens of thousands of users' real comments and corresponding user profiles on Weibo, we propose Personalized Comment Generation Network (PCGN) for AGPC. The model utilizes user feature embedding with a gated memory and attends to user description to model the personality of users. In addition, external user representation is taken into consideration during the decoding to enhance comment generation. Experimental results show that our model can generate natural, human-like and personalized comments.
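As a loose, hypothetical reading of the 'gated memory' idea (not the authors' exact PCGN equations), the numpy fragment below blends a decoder state with a user-profile embedding through a learned sigmoid gate.

```python
import numpy as np

def gated_user_fusion(h, u, W, b):
    """Blend decoder state h with a user-profile embedding u through a learned gate.

    A loose sketch of the gated-memory idea; the paper's exact gating may differ.
    """
    z = np.concatenate([h, u])
    g = 1.0 / (1.0 + np.exp(-(W @ z + b)))    # sigmoid gate, same size as h
    return g * h + (1.0 - g) * u               # personality-aware state for the next decoding step

d = 8
rng = np.random.default_rng(0)
h, u = rng.normal(size=d), rng.normal(size=d)
W, b = 0.1 * rng.normal(size=(d, 2 * d)), np.zeros(d)
print(gated_user_fusion(h, u, W, b).shape)     # (8,)
```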
Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art. We introduce the task of automatic live commenting. Live commenting, which is also called 'video barrage', is an emerging feature on online video sites that allows real-time comments from viewers to fly across the screen like bullets or roll at the right side of the screen. The live comments are a mixture of opinions for the video and the chit chats with other comments. Automatic live commenting requires AI agents to comprehend the videos and interact with human viewers who also make the comments, so it is a good testbed of an AI agent's ability of dealing with both dynamic vision and language. In this work, we construct a large-scale live comment dataset with 2,361 videos and 895,929 live comments. Then, we introduce two neural models to generate live comments based on the visual and textual contexts, which achieve better performance than previous neural baselines such as the sequence-to-sequence model. Finally, we provide a retrieval-based evaluation protocol for automatic live commenting where the model is asked to sort a set of candidate comments based on the log-likelihood score, and evaluated on metrics such as mean-reciprocal-rank. Putting it all together, we demonstrate the first 'LiveBot'. We propose an end-to-end, domain-independent neural encoder-aligner-decoder model for selective generation, i.e., the joint task of content selection and surface realization. Our model first encodes a full set of over-determined database event records via an LSTM-based recurrent neural network, then utilizes a novel coarse-to-fine aligner to identify the small subset of salient records to talk about, and finally employs a decoder to generate free-form descriptions of the aligned, selected records. Our model achieves the best selection and generation results reported to-date (with 59% relative improvement in generation) on the benchmark WeatherGov dataset, despite using no specialized features or linguistic resources. Using an improved k-nearest neighbor beam filter helps further. We also perform a series of ablations and visualizations to elucidate the contributions of our key model components. Lastly, we evaluate the generalizability of our model on the RoboCup dataset, and get results that are competitive with or better than the state-of-the-art, despite being severely data-starved.
Abstract of query paper
Cite abstracts
915
914
How to perform effective information fusion of different modalities is a core factor in boosting the performance of RGBT tracking. This paper presents a novel deep fusion algorithm based on the representations from an end-to-end trained convolutional neural network. To exploit the complementarity of features from all layers, we propose a recursive strategy to densely aggregate these features that yield robust representations of target objects in each modality. Across the different modalities, we propose to prune the densely aggregated features in a collaborative way. Specifically, we employ the operations of global average pooling and weighted random selection to perform channel scoring and selection, which could remove redundant and noisy features to achieve more robust feature representation. Experimental results on two RGBT tracking benchmark datasets suggest that our tracker achieves clear state-of-the-art performance against other RGB and RGBT tracking methods.
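To make the channel scoring and selection step concrete, here is a minimal numpy sketch of global average pooling scores followed by weighted random selection of channels to keep; the function name, the epsilon smoothing, and the hard zeroing of unselected channels are assumptions rather than the paper's released implementation.

```python
import numpy as np

def prune_channels(feat, keep=64, rng=None):
    """Channel scoring by global average pooling + weighted random selection.

    feat: (C, H, W) feature map from one modality (RGB or thermal).
    Returns the map with all but `keep` channels zeroed out.
    """
    rng = rng or np.random.default_rng()
    scores = feat.mean(axis=(1, 2))                    # global average pooling per channel
    p = np.maximum(scores, 0) + 1e-8                   # non-negative scores, smoothed
    p /= p.sum()
    kept = rng.choice(len(scores), size=min(keep, len(scores)), replace=False, p=p)
    mask = np.zeros(len(scores), dtype=bool)
    mask[kept] = True
    out = feat.copy()
    out[~mask] = 0.0                                   # drop redundant / noisy channels
    return out

feat = np.random.default_rng(0).normal(size=(256, 7, 7))
print(int((prune_channels(feat, keep=64) != 0).any(axis=(1, 2)).sum()))   # 64 channels survive
```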
Information from multiple heterogeneous data sources (e.g. visible and infrared) or representations (e.g. intensity and edge) has become increasingly important in many video-based applications. Fusion of information from these sources is critical to improve the robustness of related visual information processing systems. In this paper we propose a data fusion approach via sparse representation with applications to robust visual tracking. Specifically, the image patches from different sources of each target candidate are concatenated into a one-dimensional vector that is then sparsely represented in the target template space. The template space representation, which naturally fuses information from different sources, brings several benefits to visual tracking. First, it inherits robustness to appearance contaminations from the previously proposed sparse trackers. Second, it provides a flexible framework that can easily integrate information from different data sources. Third, it can be used for handling various number of data sources, which is very useful for situations where the data inputs arrive at different frequencies. The sparsity in the representation is achieved by solving an l1-regularized least squares problem. The tracking result is then determined by finding the candidate with the smallest approximation error. To propagate the results over time, the sparse solution is combined with the Bayesian state inference framework using the particle filter algorithm. We conducted experiments on several real videos with heterogeneous information sources. The results show that the proposed approach can track the target more robustly than several state-of-the-art tracking algorithms. Due to the complementary benefits of visible (RGB) and thermal infrared (T) data, RGB-T object tracking attracts more and more attention recently for boosting the performance under adverse illumination conditions. Existing RGB-T tracking methods usually localize a target object with a bounding box, in which the trackers or detectors are often affected by the inclusion of background clutter. To address this problem, this paper presents a novel approach to suppress background effects for RGB-T tracking. Our approach relies on a novel cross-modal manifold ranking algorithm. First, we integrate the soft cross-modality consistency into the ranking model which allows the sparse inconsistency to account for the different properties between these two modalities. Second, we propose an optimal query learning method to handle label noises of queries. In particular, we introduce an intermediate variable to represent the optimal labels, and formulate it as an l1-optimization based sparse learning problem. Moreover, we propose a single unified optimization algorithm to solve the proposed model with stable and efficient convergence behavior. Finally, the ranking results are incorporated into the patch-based object features to address the background effects, and the structured SVM is then adopted to perform RGB-T tracking. Extensive experiments suggest that the proposed approach performs well against the state-of-the-art methods on large-scale benchmark datasets.
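A hedged sketch of the sparse-representation fusion idea described above: multi-source features of each candidate are stacked into one vector and coded over the stacked target templates with an l1-regularized fit (scikit-learn's Lasso here). The cited tracker's template update and particle-filter propagation are omitted.

```python
import numpy as np
from sklearn.linear_model import Lasso

def score_candidates(candidates, templates, alpha=0.01):
    """Fuse sources by stacking them into one vector and sparse-coding over templates.

    candidates: (n_cand, d) stacked multi-source features of each target candidate
    templates:  (n_templ, d) stacked multi-source features of the target templates
    Returns reconstruction errors; the tracker picks the candidate with the smallest one.
    """
    errs = []
    for y in candidates:
        coder = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=5000)
        coder.fit(templates.T, y)                       # y ~= templates.T @ coef, coef sparse
        errs.append(np.linalg.norm(templates.T @ coder.coef_ - y))
    return np.array(errs)

rng = np.random.default_rng(0)
templates = rng.normal(size=(10, 40))                   # 10 templates, 40-dim stacked features
candidates = templates[:3] + 0.05 * rng.normal(size=(3, 40))
print(score_candidates(candidates, templates).round(3))
```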
Abstract of query paper
Cite abstracts
916
915
How to perform effective information fusion of different modalities is a core factor in boosting the performance of RGBT tracking. This paper presents a novel deep fusion algorithm based on the representations from an end-to-end trained convolutional neural network. To exploit the complementarity of features from all layers, we propose a recursive strategy to densely aggregate these features that yield robust representations of target objects in each modality. Across the different modalities, we propose to prune the densely aggregated features in a collaborative way. Specifically, we employ the operations of global average pooling and weighted random selection to perform channel scoring and selection, which could remove redundant and noisy features to achieve more robust feature representation. Experimental results on two RGBT tracking benchmark datasets suggest that our tracker achieves clear state-of-the-art performance against other RGB and RGBT tracking methods.
Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes. In recent years, several methods have been developed to utilize hierarchical features learned from a deep convolutional neural network (CNN) for visual tracking. However, as features from a certain CNN layer characterize an object of interest from only one aspect or one level, the performance of such trackers trained with features from one layer (usually the second to last layer) can be further improved. In this paper, we propose a novel CNN based tracking framework, which takes full advantage of features from different CNN layers and uses an adaptive Hedge method to hedge several CNN based trackers into a single stronger one. Extensive experiments on a benchmark dataset of 100 challenging image sequences demonstrate the effectiveness of the proposed algorithm compared to several state-of-the-art trackers. In recent years, Discriminative Correlation Filter (DCF) based methods have significantly advanced the state-of-the-art in tracking. However, in the pursuit of ever increasing tracking performance, their characteristic speed and real-time capability have gradually faded. Further, the increasingly complex models, with massive number of trainable parameters, have introduced the risk of severe over-fitting. In this work, we tackle the key causes behind the problems of computational complexity and over-fitting, with the aim of simultaneously improving both speed and performance. We revisit the core DCF formulation and introduce: (i) a factorized convolution operator, which drastically reduces the number of parameters in the model, (ii) a compact generative model of the training sample distribution, that significantly reduces memory and time complexity, while providing better diversity of samples, (iii) a conservative model update strategy with improved robustness and reduced complexity. We perform comprehensive experiments on four benchmarks: VOT2016, UAV123, OTB-2015, and TempleColor. When using expensive deep features, our tracker provides a 20-fold speedup and achieves a 13.0% relative gain in Expected Average Overlap compared to the top ranked method [12] in the VOT2016 challenge. Moreover, our fast variant, using hand-crafted features, operates at 60 Hz on a single CPU, while obtaining 65.0% AUC on OTB-2015. Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual object tracking. 
The key to their success is the ability to efficiently exploit available negative data b ... This paper investigates how to integrate the complementary information from RGB and thermal (RGB-T) sources for object tracking. We propose a novel Convolutional Neural Network (ConvNet) architecture, including a two-stream ConvNet and a FusionNet, to achieve adaptive fusion of different source data for robust RGB-T tracking. Both RGB and thermal streams extract generic semantic information of the target object. In particular, the thermal stream is pre-trained on the ImageNet dataset to encode rich semantic information, and then fine-tuned using thermal images to capture the specific properties of thermal information. For adaptive fusion of different modalities while avoiding redundant noises, the FusionNet is employed to select most discriminative feature maps from the outputs of the two-stream ConvNet, and updated online to adapt to appearance variations of the target object. Finally, the object locations are efficiently predicted by applying the multi-channel correlation filter on the fused feature maps. Extensive experiments on the recently public benchmark GTOT verify the effectiveness of the proposed approach against other state-of-the-art RGB-T trackers.
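Several of the trackers above apply a multi-channel correlation filter to (fused) feature maps. Below is a minimal MOSSE-style sketch in the Fourier domain; the Gaussian-label assumption, regularization value, and plain NumPy FFTs are illustrative choices, not the exact formulation of any cited tracker:

import numpy as np

def train_filter(feats, label, lam=1e-2):
    """Closed-form multi-channel correlation filter in the frequency domain.
    feats: (C, H, W) feature maps of the target region; label: (H, W) desired response."""
    G = np.fft.fft2(label)
    F = np.fft.fft2(feats, axes=(-2, -1))
    num = np.conj(F) * G                        # per-channel numerator
    den = np.sum(F * np.conj(F), axis=0) + lam  # shared denominator with regularization
    return num / den                            # (C, H, W) filter (conjugate form)

def detect(H_filt, feats):
    """Correlate the filter with new features; the response peak gives the target shift."""
    F = np.fft.fft2(feats, axes=(-2, -1))
    resp = np.real(np.fft.ifft2(np.sum(H_filt * F, axis=0)))
    return np.unravel_index(np.argmax(resp), resp.shape)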
Abstract of query paper
Cite abstracts
917
916
Abstract Audit logs serve as a critical component in enterprise business systems and are used for auditing, storing, and tracking changes made to the data. However, audit logs are vulnerable to a series of attacks enabling adversaries to tamper data and corresponding audit logs without getting detected. Among them, two well-known attacks are “the physical access attack,” which exploits root privileges, and “the remote vulnerability attack,” which compromises known vulnerabilities in database systems. In this paper, we present BlockAudit: a scalable and tamper-proof system that leverages the design properties of audit logs and security guarantees of blockchain to enable secure and trustworthy audit logs. Towards that, we construct the design schema of BlockAudit and outline its functional and operational procedures. We implement our design on a custom-built Practical Byzantine Fault Tolerance (PBFT) blockchain system and evaluate the performance in terms of latency, network size, payload size, and transaction rate. Our results show that conventional audit logs can seamlessly transition into BlockAudit to achieve higher security and defend against the known attacks on audit logs.
Privacy audit logs are used to capture the actions of participants in a data sharing environment in order for auditors to check compliance with privacy policies. However, collusion may occur between the auditors and participants to obfuscate actions that should be recorded in the audit logs. In this paper, we propose a Linked Data based method of utilizing blockchain technology to create tamper-proof audit logs that provide proof of log manipulation and non-repudiation. We also provide experimental validation of the scalability of our solution using an existing Linked Data privacy audit log model. Several applications require robust and tamper-proof logging systems, e.g. electronic voting or bank information systems. At Scytl we use a technology, called immutable logs, that we deploy in our electronic voting solutions. This technology ensures the integrity, authenticity and non-repudiation of the generated logs, thus in case of any event the auditors can use them to investigate the issue. As a security recommendation it is advisable to store and or replicate the information logged in a location where the logger has no writing or modification permissions. Otherwise, if the logger gets compromised, the data previously generated could be truncated or altered using the same private keys. This approach is costly and does not protect against collusion between the logger and the entities that hold the replicated data. In order to tackle these issues, in this article we present a proposal and implementation to immutabilize integrity proofs of the secure logs within the Bitcoin’s blockchain. Due to the properties of the proposal, the integrity of the immutabilized logs is guaranteed without performing log data replication and even in case the logger gets latterly compromised. On an EU level, the topic of electronic health data is a high priority. Many projects have been developed to realise a standard health data format to share information on a regional, national or EU level. All the projects favour and contribute to the development and improvement of the prerequisites for intra- and cross-border patient mobility. This work presents a new approach for the implementation of disruptive logging: an audit mechanism for cross-border exchange of eHealth data on OpenNCP, providing traceability and liability support within the OpenNCP infrastructure. Relevant parties could be legally obliged to keep a log of all privacy-critical operations performed by OpenNCP users.
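A common building block behind such tamper-evident logs is a hash chain, where each entry commits to the previous one. The sketch below is a minimal illustration; the entry fields and the use of SHA-256 are assumptions, not the schema of any of the systems above:

import hashlib, json, time

def append_entry(log, action, actor):
    """Append an audit entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "actor": actor, "action": action, "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Recompute every hash; tampering with any earlier entry breaks the chain."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("ts", "actor", "action", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != digest:
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "read record 42", "auditor-1")
append_entry(log, "update record 42", "clerk-7")
assert verify(log)

Anchoring periodic digests of such a chain in a blockchain, as the systems above do, removes the need to trust the machine that writes the log.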
Abstract of query paper
Cite abstracts
918
917
Crowdsourcing platforms enable companies to propose tasks to a large crowd of users. The workers receive compensation for their work according to how seriously they accomplish the tasks. The evaluation of the quality of responses obtained from the crowd remains one of the most important problems in this context. Several methods have been proposed to estimate the expertise level of crowd workers. We propose an innovative measure of expertise, assuming that we possess a dataset with an objective comparison of the items concerned. Our method is based on the definition of four factors within the theory of belief functions. We compare our method to the Fagin distance on a dataset from a real experiment, where users have to assess the quality of some audio recordings. Then, we propose to fuse both the Fagin distance and our expertise measure.
Crowdsourcing platforms make it possible to propose simple human intelligence tasks to a large number of participants who carry out these tasks. The workers often receive a small amount of money, or the platforms include some other incentive mechanisms, for example increasing a worker's reputation score if the tasks are completed correctly. We address the problem of identifying experts among participants, that is, workers who tend to answer the questions correctly. Knowing who the reliable workers are could improve the quality of knowledge one can extract from responses. As opposed to other works in the literature, we assume that participants can give partial or incomplete responses, in case they are not sure that their answers are correct. We model such partial or incomplete responses with the help of belief functions, and we derive a measure that characterizes the expertise level of each participant. This measure is based on precision and exactitude degrees that represent two parts of the expertise level. The precision degree reflects the reliability level of the participants and the exactitude degree reflects the knowledge level of the participants. We also analyze our model through simulation and demonstrate that our richer model can lead to more reliable identification of experts. We present a measure of performance (MOP) for identification algorithms based on the evidential theory of Dempster–Shafer. As an MOP, we introduce a principled distance between two basic probability assignments (BPAs) (or two bodies of evidence) based on a quantification of the similarity between sets. We give a geometrical interpretation of BPA and show that the proposed distance satisfies all the requirements for a metric. We also show the link with the quantification of Dempster's weight of conflict proposed by George and Pal. We compare this MOP to that described by Fixsen and Mahler and illustrate the behaviors of the two MOPs with numerical examples.
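The set-similarity-based distance between basic probability assignments described in the last abstract can be sketched as follows. The frame of discernment and the mass values are made up for illustration, and the Jaccard-index weighting shown here is one common choice rather than a claim about the exact matrix used in the cited work:

import numpy as np
from itertools import combinations

def powerset(frame):
    return [frozenset(s) for r in range(1, len(frame) + 1)
            for s in combinations(frame, r)]

def bpa_distance(m1, m2, frame):
    """Distance between two BPAs: d = sqrt(0.5 * (m1 - m2)^T D (m1 - m2)),
    where D[A, B] = |A & B| / |A | B| quantifies similarity between focal sets."""
    sets = powerset(frame)
    v1 = np.array([m1.get(s, 0.0) for s in sets])
    v2 = np.array([m2.get(s, 0.0) for s in sets])
    D = np.array([[len(a & b) / len(a | b) for b in sets] for a in sets])
    diff = v1 - v2
    return np.sqrt(0.5 * diff @ D @ diff)

frame = ("a", "b", "c")
m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.7, frozenset(frame): 0.3}
print(round(bpa_distance(m1, m2, frame), 3))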
Abstract of query paper
Cite abstracts
919
918
Crowdsourcing platforms enable companies to propose tasks to a large crowd of users. The workers receive a compensation for their work according to the serious of the tasks they managed to accomplish. The evaluation of the quality of responses obtained from the crowd remains one of the most important problems in this context. Several methods have been proposed to estimate the expertise level of crowd workers. We propose an innovative measure of expertise assuming that we possess a dataset with an objective comparison of the items concerned. Our method is based on the definition of four factors with the theory of belief functions. We compare our method to the Fagin distance on a dataset from a real experiment, where users have to assess the quality of some audio recordings. Then, we propose to fuse both the Fagin distance and our expertise measure.
With the advent of crowdsourcing services it has become quite cheap and reasonably effective to get a data set labeled by multiple annotators in a short amount of time. Various methods have been proposed to estimate the consensus labels by correcting for the bias of annotators with different kinds of expertise. Since we do not have control over the quality of the annotators, very often the annotations can be dominated by spammers, defined as annotators who assign labels randomly without actually looking at the instance. Spammers can make the cost of acquiring labels very expensive and can potentially degrade the quality of the final consensus labels. In this paper we propose an empirical Bayesian algorithm called SpEMthat iteratively eliminates the spammers and estimates the consensus labels based only on the good annotators. The algorithm is motivated by defining a spammer score that can be used to rank the annotators. Experiments on simulated and real data show that the proposed approach is better than (or as good as) the earlier approaches in terms of the accuracy and uses a significantly smaller number of annotators. In remote sensing applications "ground-truth" data is often used as the basis for training pattern recognition algorithms to generate thematic maps or to detect objects of interest. In practical situations, experts may visually examine the images and provide a subjective noisy estimate of the truth. Calibrating the reliability and bias of expert labellers is a non-trivial problem. In this paper we discuss some of our recent work on this topic in the context of detecting small volcanoes in Magellan SAR images of Venus. Empirical results (using the Expectation-Maximization procedure) suggest that accounting for subjective noise can be quite significant in terms of quantifying both human and algorithm detection performance. For many supervised learning tasks it may be infeasible (or very expensive) to obtain objective and reliable labels. Instead, we can collect subjective (possibly noisy) labels from multiple experts or annotators. In practice, there is a substantial amount of disagreement among the annotators, and hence it is of great practical interest to address conventional supervised learning problems in this scenario. In this paper we describe a probabilistic approach for supervised learning when we have multiple annotators providing (possibly noisy) labels but no absolute gold standard. The proposed algorithm evaluates the different experts and also gives an estimate of the actual hidden labels. Experimental results indicate that the proposed method is superior to the commonly used majority voting baseline. The use of crowdsourcing platforms like Amazon Mechanical Turk for evaluating the relevance of search results has become an effective strategy that yields results quickly and inexpensively. One approach to ensure quality of worker judgments is to include an initial training period and subsequent sporadic insertion of predefined gold standard data (training data). Workers are notified or rejected when they err on the training data, and trust and quality ratings are adjusted accordingly. In this paper, we assess how this type of dynamic learning environment can affect the workers’ results in a search relevance evaluation task completed on Amazon Mechanical Turk. Specifically, we show how the distribution of training set answers impacts training of workers and aggregate quality of worker results. 
We conclude that in a relevance categorization task, a uniform distribution of labels across training data labels produces optimal peaks in 1) individual worker precision and 2) majority voting aggregate result accuracy.
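A minimal illustration of the aggregation-and-scoring idea running through these abstracts: estimate consensus labels by majority vote, then score each worker by agreement with the consensus. This is a much simplified stand-in for the EM-style and Bayesian estimators above, and the task data are invented:

from collections import Counter

def aggregate(answers):
    """answers: {task_id: {worker_id: label}} -> (consensus labels, worker scores)."""
    consensus = {t: Counter(labels.values()).most_common(1)[0][0]
                 for t, labels in answers.items()}
    agree, total = Counter(), Counter()
    for t, labels in answers.items():
        for w, lab in labels.items():
            total[w] += 1
            agree[w] += (lab == consensus[t])
    scores = {w: agree[w] / total[w] for w in total}   # near-chance agreement suggests a spammer
    return consensus, scores

answers = {
    "q1": {"w1": "A", "w2": "A", "w3": "B"},
    "q2": {"w1": "B", "w2": "B", "w3": "B"},
    "q3": {"w1": "A", "w2": "B", "w3": "B"},
}
consensus, scores = aggregate(answers)
print(consensus, scores)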
Abstract of query paper
Cite abstracts
920
919
We revisit the complexity of deciding, given a bimatrix game, whether it has a Nash equilibrium with certain natural properties; such decision problems were known early on to be @math -hard [GZ89]. We show that @math -hardness still holds under two significant restrictions simultaneously: the game is win-lose (that is, all utilities are @math or @math) and symmetric. To address the former restriction, we design win-lose gadgets and a win-lose reduction; to accommodate the latter restriction, we employ and analyze the classical @math -symmetrization [GHR63] in the win-lose setting. Thus, symmetric win-lose bimatrix games are as complex as general bimatrix games with respect to such decision problems. As a byproduct of our techniques, we derive hardness results for search, counting and parity problems about Nash equilibria in symmetric win-lose bimatrix games.
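One classical way to turn a bimatrix game (A, B) into a symmetric game, in the spirit of the symmetrization referenced above, is the block construction sketched below; the payoff matrices are an invented toy example, and the exact symmetrization analyzed in the paper may differ in details:

import numpy as np

def symmetrize(A, B):
    """Build the symmetric game whose shared payoff matrix is [[0, A], [B^T, 0]].
    Under suitable positivity assumptions, its symmetric equilibria are closely
    related to the equilibria of the original game (A, B)."""
    m, n = A.shape
    return np.block([[np.zeros((m, m)), A],
                     [B.T, np.zeros((n, n))]])

# a tiny win-lose bimatrix game (all payoffs 0 or 1)
A = np.array([[1, 0], [0, 1]], dtype=float)
B = np.array([[0, 1], [1, 0]], dtype=float)
C = symmetrize(A, B)
print(C.shape)   # (4, 4): both players now share the same strategy set and payoff matrix C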
We investigate the complexity of finding Nash equilibria in which the strategy of each player is uniform on its support set. We show that, even for a restricted class of win-lose bimatrix games, deciding the existence of such uniform equilibria is an NP-complete problem. Our proof is graph-theoretical. Motivated by this result, we also give NP-completeness results for the problems of finding regular induced subgraphs of large size or regularity, which can be of independent interest. We give simple proofs of refinements of the complexity results of Gilboa and Zemel (1989), and we derive additional results of this sort. Our constructions employ imitation games, which are two person games in which both players have the same sets of pure strategies and the second player wishes to play the same pure strategy as the first player. We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. Our result, building upon the work of [2006a] on the complexity of four-player Nash equilibria, settles a long standing open problem in algorithmic game theory. It also serves as a starting point for a series of results concerning the complexity of two-player Nash equilibria. In particular, we prove the following theorems: —Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time. —The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time. Our results also have a complexity implication in mathematical economics: —Arrow-Debreu market equilibria are PPAD-hard to compute. An imitation game is a finite two person normal form game in which the two players have the same set of pure strategies and the goal of the second player is to choose the same pure strategy as the first player. We explain how, in two different settings, observations obtained from imitation games complete a circle of ideas, showing that phenomena that had for many years seemed to be distinct are actually superficially different manifestations of a single structure. First, one can pass from a given two person finite game to an imitation game whose Nash equilibria are in one-to-one correspondence with the Nash equilibria of the given game. Second, each of the paths of the procedure described in Lemke (1965) for solving a linear complementarity problem is the projection of the path of the Lemke-Howson algorithm applied to an imitation game. We introduce two new natural decision problems, denoted as ? RATIONAL NASH and ? IRRATIONAL NASH, pertinent to the rationality and irrationality, respectively, of Nash equilibria for (finite) strategic games. These problems ask, given a strategic game, whether or not it admits (i) a rational Nash equilibrium where all probabilities are rational numbers, and (ii) an irrational Nash equilibrium where at least one probability is irrational, respectively. We are interested here in the complexities of ? RATIONAL NASH and ? IRRATIONAL NASH. Towards this end, we study two other decision problems, denoted as NASH-EQUIVALENCE and NASH-REDUCTION, pertinent to some mutual properties of the sets of Nash equilibria of two given strategic games with the same number of players. 
The problem NASH-EQUIVALENCE asks whether or not the two sets of Nash equilibria coincide; we identify a restriction of its complementary problem that witnesses ? RATIONAL NASH. The problem NASH-REDUCTION asks whether or not there is a so called Nash reduction: a suitable map between corresponding strategy sets of players that yields a Nash equilibrium of the former game from a Nash equilibrium of the latter game; we identify a restriction of NASH-REDUCTION that witnesses ? IRRATIONAL NASH. As our main result, we provide two distinct reductions to simultaneously show that (i) NASH-EQUIVALENCE is co- @math -hard and ? RATIONAL NASH is @math -hard, and (ii) NASH-REDUCTION and ? IRRATIONAL NASH are both @math -hard, respectively. The reductions significantly extend techniques previously employed by Conitzer and Sandholm (Proceedings of the 18th Joint Conference on Artificial Intelligence, pp. 765---771, 2003; Games Econ. Behav. 63(2), 621---641, 2008). This paper deals with the complexity of computing Nash and correlated equilibria for a finite game in normal form. We examine the problems of checking the existence of equilibria satisfying a certain condition, such as “Given a game G and a number r, is there a Nash (correlated) equilibrium of G in which all players obtain an expected payoff of at least r?” or “Is there a unique Nash (correlated) equilibrium in G?” etc. We show that such problems are typically “hard” (NP-hard) for Nash equilibria but “easy” (polynomial) for correlated equilibria. The efficient computation of Nash equilibria is one of the most formidable challenges in computational complexity today. The problem remains open for two-player games. We show that the complexity of two-player Nash equilibria is unchanged when all outcomes are restricted to be 0 or 1. That is, win-or-lose games are as complex as the general case for two-player games. Over the years, researchers have studied the complexity of several decision versions of Nash equilibrium in (symmetric) two-player games (bimatrix games). To the best of our knowledge, the last remaining open problem of this sort is the following; it was stated by Papadimitriou in 2007: find a non-symmetric Nash equilibrium (NE) in a symmetric game. We show that this problem is NP-complete and the problem of counting the number of non-symmetric NE in a symmetric game is #P-complete. We further our algorithmic and structural understanding of Nash equilibria. Specifically: We distill the hard core of the complexity of Nash equilibria, showing that even correctly computing a logarithmic number of bits of the equilibrium strategies of a two-player win-lose game is as hard as the general problem. We prove the following structural result about Nash equilibria: “the set of approximate equilibria of a zero-sum game is convex.” In 1951, John F. Nash proved that every game has a Nash equilibrium [Ann. of Math. (2), 54 (1951), pp. 286-295]. His proof is nonconstructive, relying on Brouwer's fixed point theorem, thus leaving open the questions, Is there a polynomial-time algorithm for computing Nash equilibria? And is this reliance on Brouwer inherent? Many algorithms have since been proposed for finding Nash equilibria, but none known to run in polynomial time. In 1991 the complexity class PPAD (polynomial parity arguments on directed graphs), for which Brouwer's problem is complete, was introduced [C. Papadimitriou, J. Comput. System Sci., 48 (1994), pp. 
489-532], motivated largely by the classification problem for Nash equilibria; but whether the Nash problem is complete for this class remained open. In this paper we resolve these questions: We show that finding a Nash equilibrium in three-player games is indeed PPAD-complete; and we do so by a reduction from Brouwer's problem, thus establishing that the two problems are computationally equivalent. Our reduction simulates a (stylized) Brouwer function by a graphical game [M. Kearns, M. Littman, and S. Singh, Graphical model for game theory, in 17th Conference in Uncertainty in Artificial Intelligence (UAI), 2001], relying on “gadgets,” graphical games performing various arithmetic and logical operations. We then show how to simulate this graphical game by a three-player game, where each of the three players is essentially a color class in a coloring of the underlying graph. Subsequent work [X. Chen and X. Deng, Setting the complexity of 2-player Nash-equilibrium, in 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2006] established, by improving our construction, that even two-player games are PPAD-complete; here we show that this result follows easily from our proof. We provide a single reduction that demonstrates that in normal-form games: (1) it is -complete to determine whether Nash equilibria with certain natural properties exist (these results are similar to those obtained by Gilboa and Zemel [Gilboa, I., Zemel, E., 1989. Nash and correlated equilibria: Some complexity considerations. Games Econ. Behav. 1, 80-93]), (2) more significantly, the problems of maximizing certain properties of a Nash equilibrium are inapproximable (unless ), and (3) it is -hard to count the Nash equilibria. We also show that determining whether a pure-strategy Bayes-Nash equilibrium exists in a Bayesian game is -complete, and that determining whether a pure-strategy Nash equilibrium exists in a Markov (stochastic) game is -hard even if the game is unobserved (and that this remains -hard if the game has finite length). All of our hardness results hold even if there are only two players and the game is symmetric. The computational complexity of finding a Nash equilibrium in a nonzero sum bimatrix game is an important open question. We put forward the notion of (0,1)-bimatrix games, and show that some associated computational problems are as hard as in the general case.
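As a concrete reference point for the decision problems discussed above, the sketch below checks whether a given pair of mixed strategies is a Nash equilibrium of a bimatrix game; the example game is made up, and the check is the textbook best-response condition rather than anything specific to these papers:

import numpy as np

def is_nash(A, B, x, y, tol=1e-9):
    """(x, y) is a Nash equilibrium iff each mixed strategy attains the maximal
    expected payoff against the opponent's strategy."""
    row_payoffs = A @ y          # expected payoff of each pure row strategy against y
    col_payoffs = B.T @ x        # expected payoff of each pure column strategy against x
    return (x @ row_payoffs >= row_payoffs.max() - tol and
            y @ col_payoffs >= col_payoffs.max() - tol)

# matching pennies: the unique equilibrium is uniform for both players
A = np.array([[1, 0], [0, 1]], dtype=float)
B = 1 - A
x = y = np.array([0.5, 0.5])
print(is_nash(A, B, x, y))                       # True
print(is_nash(A, B, np.array([1.0, 0.0]), y))    # False: the column player would deviate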
Abstract of query paper
Cite abstracts
921
920
The median of a graph @math is the set of all vertices @math of @math minimizing the sum of distances from @math to all other vertices of @math . It is known that computing the median of dense graphs in subcubic time refutes the APSP conjecture and computing the median of sparse graphs in subquadratic time refutes the HS conjecture. In this paper, we present a linear time algorithm for computing medians of median graphs, improving over the existing quadratic time algorithm. Median graphs constitute the principal class of graphs investigated in metric graph theory, due to their bijections with other discrete and geometric structures (CAT(0) cube complexes, domains of event structures, and solution sets of 2-SAT formulas). Our algorithm is based on the known majority rule characterization of medians in a median graph @math and on a fast computation of parallelism classes of edges ( @math -classes) of @math . The main technical contribution of the paper is a linear time algorithm for computing the @math -classes of a median graph @math using Lexicographic Breadth First Search (LexBFS). Namely, we show that any LexBFS ordering of the vertices of a median graph @math has the following property: the fathers of any two adjacent vertices of @math are also adjacent. Using the fast computation of the @math -classes of a median graph @math , we also compute the Wiener index (total distance) of @math in linear time.
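For reference, the straightforward way to compute the median of an arbitrary unweighted graph is to run a BFS from every vertex and sum the distances; this is the quadratic-time baseline that the paper improves on for median graphs. The small example graph below (a 3-cube, which is a median graph) is chosen only for illustration:

from collections import deque

def median(adj):
    """adj: {vertex: iterable of neighbours}. Returns (median vertices, Wiener index)."""
    totals = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:                          # plain BFS from s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        totals[s] = sum(dist.values())
    best = min(totals.values())
    return [v for v, t in totals.items() if t == best], sum(totals.values()) // 2

# vertices of the 3-cube are 3-bit strings; edges flip one bit
verts = [format(i, "03b") for i in range(8)]
adj = {v: [v[:i] + str(1 - int(v[i])) + v[i + 1:] for i in range(3)] for v in verts}
print(median(adj))   # every vertex is a median here; the Wiener index is 48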
Motivated by a dynamic location problem for graphs, Chung, Graham and Saks introduced a graph parameter called windex. Graphs of windex 2 turned out to be, in graph-theoretic language, retracts of hypercubes. These graphs are also known as median graphs and can be characterized as partial binary Hamming graphs satisfying a convexity condition. In this paper an O(n^{3/2} log n) algorithm is presented to recognize these graphs. As a by-product we are also able to isometrically embed median graphs in hypercubes in O(m log n) time. This study in combinatorial group theory introduces the concept of automatic groups. It contains a succinct introduction to the theory of regular languages, a discussion of related topics in combinatorial group theory, and the connections between automatic groups and geometry which motivated the development of this new theory. It is of interest to mathematicians and computer scientists and includes open problems that will dominate the research for years to come. We show how to test whether a graph with n vertices and m edges is a partial cube, and if so how to find a distance-preserving embedding of the graph into a hypercube, in the near-optimal time bound O(n^2), improving previous O(nm)-time solutions. In this note, we characterize the graphs (1-skeletons) of some piecewise Euclidean simplicial and cubical complexes having nonpositive curvature in the sense of Gromov's CAT(0) inequality. Each such cell complex K is simply connected and obeys a certain flag condition. It turns out that if, in addition, all maximal cells are either regular Euclidean cubes or right Euclidean triangles glued in a special way, then the underlying graph G(K) is either a median graph or a hereditary modular graph without two forbidden induced subgraphs. We also characterize the simplicial complexes arising from bridged graphs, a class of graphs whose metric enjoys one of the basic properties of CAT(0) spaces. Additionally, we show that the graphs of all these complexes and some more general classes of graphs have geodesic combings and bicombings verifying the 1- or 2-fellow traveler property.
Abstract of query paper
Cite abstracts
922
921
Deep neural networks are becoming popular and important assets of many AI companies. However, recent studies indicate that they are also vulnerable to adversarial attacks. Adversarial attacks can be either white-box or black-box. The white-box attacks assume full knowledge of the models while the black-box ones assume none. In general, revealing more internal information can enable much more powerful and efficient attacks. However, in most real-world applications, the internal information of embedded AI devices is unavailable, i.e., they are black-box. Therefore, in this work, we propose a side-channel information based technique to reveal the internal information of black-box models. Specifically, we have made the following contributions: (1) we are the first to use side-channel information to reveal internal network architecture in embedded devices; (2) we are the first to construct models for internal parameter estimation; and (3) we validate our methods on real-world devices and applications. The experimental results show that our method can achieve 96.50% accuracy on average. Such results suggest that we should pay close attention to the security problems of many AI applications, and further propose corresponding defensive strategies in the future.
In this paper we describe a new mobile architecture, MobileNetV2, that improves the state-of-the-art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers, opposite to traditional residual models which use expanded representations in the input. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet classification, COCO object detection, and VOC image segmentation. We evaluate the trade-offs between accuracy and the number of operations measured by multiply-adds (MAdd), as well as the number of parameters. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements. We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build lightweight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, fine-grained classification, face attributes and large-scale geo-localization. Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions.
We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection. Can machine learning deliver AI? Theoretical results, inspiration from the brain and cognition, as well as machine learning experiments suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one would need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers, graphical models with many levels of latent variables, or in complicated propositional formulae re-using many sub-formulae. Each level of the architecture represents features at a different level of abstraction, defined as a composition of lower-level features. Searching the parameter space of deep architectures is a difficult task, but new algorithms have been discovered and a new sub-area has emerged in the machine learning community since 2006, following these discoveries. Learning algorithms such as those for Deep Belief Networks and other related unsupervised learning algorithms have recently been proposed to train deep architectures, yielding exciting results and beating the state-of-the-art in certain areas. Learning Deep Architectures for AI discusses the motivations for and principles of learning algorithms for deep architectures. By analyzing and comparing recent results with different learning algorithms for deep architectures, explanations for their success are proposed and discussed, highlighting challenges and suggesting avenues for future explorations in this area. We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. 
On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry. In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.
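To make the efficiency argument in the MobileNets abstracts above concrete, the small calculation below compares parameter counts of a standard convolution and a depthwise-separable one; the layer sizes are arbitrary illustrative choices, and biases and batch-norm parameters are ignored:

def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out                    # standard k x k convolution

def depthwise_separable_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out             # k x k depthwise + 1x1 pointwise

k, c_in, c_out = 3, 256, 256
std = conv_params(k, c_in, c_out)                  # 589,824
sep = depthwise_separable_params(k, c_in, c_out)   # 67,840
print(std, sep, round(std / sep, 1))               # roughly an 8.7x reduction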
Abstract of query paper
Cite abstracts
923
922
Deep neural networks are becoming popular and important assets of many AI companies. However, recent studies indicate that they are also vulnerable to adversarial attacks. Adversarial attacks can be either white-box or black-box. The white-box attacks assume full knowledge of the models while the black-box ones assume none. In general, revealing more internal information can enable much more powerful and efficient attacks. However, in most real-world applications, the internal information of embedded AI devices is unavailable, i.e., they are black-box. Therefore, in this work, we propose a side-channel information based technique to reveal the internal information of black-box models. Specifically, we have made the following contributions: (1) we are the first to use side-channel information to reveal internal network architecture in embedded devices; (2) we are the first to construct models for internal parameter estimation; and (3) we validate our methods on real-world devices and applications. The experimental results show that our method can achieve 96.50% accuracy on average. Such results suggest that we should pay close attention to the security problems of many AI applications, and further propose corresponding defensive strategies in the future.
Side-channel attacks are easy-to-implement whilst powerful attacks against cryptographic implementations, and their targets range from primitives, protocols, modules, and devices to even systems. These attacks pose a serious threat to the security of cryptographic modules. In consequence, cryptographic implementations have to be evaluated for their resistivity against such attacks and the incorporation of different countermeasures has to be considered. This paper surveys the methods and techniques employed in these attacks, the destructive effects of such attacks, the countermeasures against such attacks and evaluation of their feasibility and applicability. Finally, the necessity and feasibility of adopting this kind of physical security testing and evaluation in the development of FIPS 140-3 standard are explored. This paper is not only a survey paper, but also more a position paper. Cryptosystem designers frequently assume that secrets will be manipulated in closed, reliable computing environments. Unfortunately, actual computers and microchips leak information about the operations they process. This paper examines specific methods for analyzing power consumption measurements to find secret keys from tamper resistant devices. We also discuss approaches for building cryptosystems that can operate securely in existing hardware that leaks information. Sharing memory pages between non-trusting processes is a common method of reducing the memory footprint of multi-tenanted systems. In this paper we demonstrate that, due to a weakness in the Intel X86 processors, page sharing exposes processes to information leaks. We present FLUSH+RELOAD, a cache side-channel attack technique that exploits this weakness to monitor access to memory lines in shared pages. Unlike previous cache side-channel attacks, FLUSH+RELOAD targets the Last-Level Cache (i.e. L3 on processors with three cache levels). Consequently, the attack program and the victim do not need to share the execution core. We demonstrate the efficacy of the FLUSH+RELOAD attack by using it to extract the private encryption keys from a victim program running GnuPG 1.4.13. We tested the attack both between two unrelated processes in a single operating system and between processes running in separate virtual machines. On average, the attack is able to recover 96.7 of the bits of the secret key by observing a single signature or decryption round. The paper presents a digital VLSI design flow to create secure, side-channel attack (SCA) resistant integrated circuits. The design flow starts from a normal design in a hardware description language, such as VHDL or Verilog, and provides a direct path to an SCA resistant layout. Instead of a full custom layout or an iterative design process with extensive simulations, a few key modifications are incorporated in a regular synchronous CMOS standard cell design flow. We discuss the basis for side-channel attack resistance and adjust the library databases and constraints files of the synthesis and place-and-route procedures accordingly. Experimental results show that a DPA (differential power analysis) attack on a regular single ended CMOS standard cell implementation of a module of the DES algorithm discloses the secret key after 200 measurements. The same attack on a secure version still does not disclose the secret key after more than 2000 measurements. 
Side-channel attacks are an increasingly important concern for the security of cryptographic embedded devices, such as the SIM cards used in mobile phones. Previous works have exhibited such attacks against implementations of the 2G GSM algorithms COMP-128 and A5. In this paper, we show that they remain an important issue for USIM cards implementing the AES-based MILENAGE algorithm used in 3G/4G communications. In particular, we analyze instances of cards from a variety of operators and manufacturers, and describe successful Differential Power Analysis attacks that recover encryption keys and other secrets needed to clone the USIM cards within a few minutes. Further, we discuss the impact of the operator-defined secret parameters in MILENAGE on the difficulty of performing Differential Power Analysis, and show that they do not improve implementation security. Our results back up the observation that physical security issues raise long-term challenges that should be solved early in the development of cryptographic implementations, with adequate countermeasures.
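A toy sketch of the correlation-style power analysis these abstracts describe: correlate a Hamming-weight leakage model with simulated traces to rank key-byte guesses. The leakage model, the noise level, and the absence of an S-box in the target function are deliberate simplifications, not a reproduction of any cited attack:

import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)])   # Hamming weight lookup table

def cpa_recover_key_byte(plaintexts, traces):
    """Rank key-byte guesses by correlation between predicted leakage
    (Hamming weight of plaintext XOR guess) and the measured samples."""
    best_guess, best_corr = None, -1.0
    for guess in range(256):
        model = HW[plaintexts ^ guess].astype(float)
        corr = abs(np.corrcoef(model, traces)[0, 1])
        if corr > best_corr:
            best_guess, best_corr = guess, corr
    return best_guess, best_corr

# simulate a device that leaks HW(p XOR secret) plus Gaussian noise
rng = np.random.default_rng(0)
secret = 0x3C
plaintexts = rng.integers(0, 256, size=2000)
traces = HW[plaintexts ^ secret] + rng.normal(0, 1.0, size=plaintexts.shape)
# without an S-box, neighbouring guesses also correlate, but the correct byte still ranks first
print(hex(cpa_recover_key_byte(plaintexts, traces)[0]))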
Abstract of query paper
Cite abstracts
924
923
Deep neural networks are becoming popular and important assets of many AI companies. However, recent studies indicate that they are also vulnerable to adversarial attacks. Adversarial attacks can be either white-box or black-box. The white-box attacks assume full knowledge of the models while the black-box ones assume none. In general, revealing more internal information can enable much more powerful and efficient attacks. However, in most real-world applications, the internal information of embedded AI devices is unavailable, i.e., they are black-box. Therefore, in this work, we propose a side-channel information based technique to reveal the internal information of black-box models. Specifically, we have made the following contributions: (1) we are the first to use side-channel information to reveal internal network architecture in embedded devices; (2) we are the first to construct models for internal parameter estimation; and (3) we validate our methods on real-world devices and applications. The experimental results show that our method can achieve 96.50% accuracy on average. Such results suggest that we should pay close attention to the security problems of many AI applications, and further propose corresponding defensive strategies in the future.
Deep learning has become the de-facto computational paradigm for various kinds of perception problems, including many privacy-sensitive applications such as online medical image analysis. Needless to say, the data privacy of these deep learning systems is a serious concern. Different from previous research focusing on exploiting privacy leakage from deep learning models, in this paper, we present the first attack on the implementation of deep learning models. To be specific, we perform the attack on an FPGA-based convolutional neural network accelerator and we manage to recover the input image from the collected power traces without knowing the detailed parameters in the neural network, by utilizing the characteristics of the "line buffer" performing convolution in the CNN accelerators. For the MNIST dataset, our power side-channel attack is able to achieve up to 89% recognition accuracy. Deep learning is gaining importance in many applications. However, Neural Networks face several security and privacy threats. This is particularly significant in the scenario where Cloud infrastructures deploy a service with a Neural Network model at the back end. Here, an adversary can extract the Neural Network parameters, infer the regularization hyperparameter, identify if a data point was part of the training data, and generate effective transferable adversarial examples to evade classifiers. This paper shows how a Neural Network model is susceptible to a timing side-channel attack. In this paper, a black-box Neural Network extraction attack is proposed by exploiting the timing side channels to infer the depth of the network. Although constructing an equivalent architecture is a complex search problem, it is shown how Reinforcement Learning with knowledge distillation can effectively reduce the search space to infer a target model. The proposed approach has been tested with VGG architectures on the CIFAR10 data set. It is observed that it is possible to reconstruct substitute models with test accuracy close to the target models, and the proposed approach is scalable and independent of the type of Neural Network architecture.
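The timing-channel idea in the last abstract can be illustrated with a toy experiment: inference latency grows roughly linearly with network depth, so repeatedly timing a black-box model leaks an estimate of its depth. The layer width, the timing loop, and the crude depth estimate below are illustrative assumptions only, not the cited paper's method:

import time
import numpy as np

def make_model(depth, width=256, rng=np.random.default_rng(0)):
    return [rng.standard_normal((width, width)) for _ in range(depth)]

def forward(model, x):
    for W in model:
        x = np.maximum(x @ W, 0.0)          # dense layer + ReLU
    return x

def median_latency(model, reps=50, width=256):
    x = np.ones(width)
    times = []
    for _ in range(reps):
        start = time.perf_counter()
        forward(model, x)
        times.append(time.perf_counter() - start)
    return np.median(times)

lat = {d: median_latency(make_model(d)) for d in (2, 4, 8, 16)}
per_layer = (lat[16] - lat[2]) / 14          # slope of latency vs. depth
print({d: round(v / per_layer) for d, v in lat.items()})   # noisy depth estimates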
Abstract of query paper
Cite abstracts
925
924
Conventional object detection methods essentially suppose that the training and testing data are collected from a restricted target domain with expensive labeling cost. To alleviate the problem of domain dependency and cumbersome labeling, this paper proposes to detect objects in an unrestricted environment by leveraging domain knowledge trained from an auxiliary source domain with sufficient labels. Specifically, we propose a multi-adversarial Faster-RCNN (MAF) framework for unrestricted object detection, which inherently addresses domain disparity minimization for domain adaptation in feature representation. The paper's merits are three-fold: 1) Noting that object detectors often become domain-incompatible when domain disparity arises from differing image distributions, we propose a hierarchical domain feature alignment module, in which multiple adversarial domain classifier submodules for layer-wise domain feature confusion are designed; 2) An information-invariant scale reduction module (SRM) for hierarchical feature map resizing is proposed for promoting the training efficiency of adversarial domain adaptation; 3) In order to improve the domain adaptability, the aggregated proposal features with detection results are fed into a proposed weighted gradient reversal layer (WGRL) for characterizing hard confused domain samples. We evaluate our MAF on unrestricted tasks, including Cityscapes, KITTI, Sim10k, etc., and the experiments show state-of-the-art performance over existing detectors.
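For context on the gradient reversal idea behind the WGRL described above, here is a standard (unweighted) gradient reversal layer in PyTorch; the weighting scheme that makes it a "weighted" GRL is the paper's contribution and is not reproduced here, and the lambda value is an arbitrary placeholder:

import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the backward
    pass, so the feature extractor is trained to confuse the attached domain classifier."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# toy usage: the gradient flowing back into the features is reversed and scaled
feat = torch.randn(4, 8, requires_grad=True)
loss = grad_reverse(feat, lambd=0.5).sum()
loss.backward()
print(feat.grad[0, 0])   # -0.5 instead of +0.5, because the gradient was reversed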
Cascade is a widely used approach that rejects obvious negative samples at early stages for learning better classifier and faster inference. This paper presents chained cascade network (CC-Net). In this CC-Net, there are many cascade stages. Preceding cascade stages are placed at shallow layers. Easy hard examples are rejected at shallow layers so that the computation for deeper or wider layers is not required. In this way, features and classifiers at latter stages handle more difficult samples with the help of features and classifiers in previous stages. It yields consistent boost in detection performance on PASCAL VOC 2007 and ImageNet for both fast RCNN and Faster RCNN. CC-Net saves computation for both training and testing. Code is available on https: github.com wk910930 ccnn. Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures). Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. 
The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with @math input achieves 81.5 mAP on VOC2007 test, 80.0 mAP on VOC2012 test, and 33.2 mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset. State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn. We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry. For object detection, the two-stage approach (e.g., Faster R-CNN) has been achieving the highest accuracy, whereas the one-stage approach (e.g., SSD) has the advantage of high efficiency. 
To inherit the merits of both while overcoming their disadvantages, in this paper, we propose a novel single-shot based detector, called RefineDet, that achieves better accuracy than two-stage methods and maintains comparable efficiency of one-stage methods. RefineDet consists of two inter-connected modules, namely, the anchor refinement module and the object detection module. Specifically, the former aims to (1) filter out negative anchors to reduce search space for the classifier, and (2) coarsely adjust the locations and sizes of anchors to provide better initialization for the subsequent regressor. The latter module takes the refined anchors as the input from the former to further improve the regression accuracy and predict multi-class label. Meanwhile, we design a transfer connection block to transfer the features in the anchor refinement module to predict locations, sizes and class labels of objects in the object detection module. The multitask loss function enables us to train the whole network in an end-to-end way. Extensive experiments on PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO demonstrate that RefineDet achieves state-of-the-art detection accuracy with high efficiency. Code is available at https: github.com sfzhang15 RefineDet. Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn. We present Deeply Supervised Object Detector (DSOD), a framework that can learn object detectors from scratch. State-of-the-art object objectors rely heavily on the off-the-shelf networks pre-trained on large-scale classification datasets like ImageNet, which incurs learning bias due to the difference on both the loss functions and the category distributions between classification and detection tasks. Model fine-tuning for the detection task could alleviate this bias to some extent but not fundamentally. Besides, transferring pre-trained models from classification to detection between discrepant domains is even more difficult (e.g. RGB to depth images). A better solution to tackle these two critical problems is to train object detectors from scratch, which motivates our proposed DSOD. Previous efforts in this direction mostly failed due to much more complicated loss functions and limited training data in object detection. In DSOD, we contribute a set of design principles for training object detectors from scratch. 
One of the key findings is that deep supervision, enabled by dense layer-wise connections, plays a critical role in learning a good detector. Combining with several other principles, we develop DSOD following the single-shot detection (SSD) framework. Experiments on PASCAL VOC 2007, 2012 and MS COCO datasets demonstrate that DSOD can achieve better results than the state-of-the-art solutions with much more compact models. For instance, DSOD outperforms SSD on all three benchmarks with real-time detection speed, while requiring only 1/2 the parameters of SSD and 1/10 the parameters of Faster R-CNN. Our code and models are available at: this https URL . The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https://github.com/facebookresearch/Detectron. We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds. We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables.
A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function. In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector trained with a low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade as the IoU threshold increases. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code is available at https://github.com/zhaoweicai/cascade-rcnn.
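As a concrete illustration of the loss reshaping described in the RetinaNet abstract above, the following minimal sketch (not the reference implementation) computes a binary focal loss; the defaults gamma=2 and alpha=0.25 follow the values reported for that method, and everything else is an illustrative assumption.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for predicted positive-class probabilities p and 0/1 labels y.

    The (1 - p_t)**gamma factor down-weights well-classified examples, which is the
    reshaping of cross entropy that the abstract describes.
    """
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(np.clip(p_t, 1e-12, 1.0))

# An easy negative (p = 0.01) contributes far less to the loss than a hard one (p = 0.6).
print(focal_loss(np.array([0.01, 0.6]), np.array([0, 0])))
```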
Abstract of query paper
Cite abstracts
926
925
This paper presents a novel clustering concept that is based on jointly learned nonlinear transforms (NTs) with priors on the information loss and the discrimination. We introduce a clustering principle that is based on evaluation of a parametric min-max measure for the discriminative prior. The decomposition of the prior measure allows us to break down the assignment into two steps. In the first step, we apply NTs to a data point in order to produce candidate NT representations. In the second step, we perform the actual assignment by evaluating the parametric measure over the candidate NT representations. Numerical experiments on an image clustering task validate the potential of the proposed approach. The evaluation shows advantages in comparison to state-of-the-art clustering methods.
We introduce a measure of how well a combinatorial graph fits a collection of vectors. The optimal graphs under this measure may be computed by solving convex quadratic programs and have many interesting properties. For vectors in d dimensional space, the graphs always have average degree at most 2(d + 1), and for vectors in 2 dimensions they are always planar. We compute these graphs for many standard data sets and show that they can be used to obtain good solutions to classification, regression and clustering problems. Kernel k-means and spectral clustering have both been used to identify clusters that are non-linearly separable in input space. Despite significant research, these methods have remained only loosely related. In this paper, we give an explicit theoretical connection between them. We show the generality of the weighted kernel k-means objective function, and derive the spectral clustering objective of normalized cut as a special case. Given a positive definite similarity matrix, our results lead to a novel weighted kernel k-means algorithm that monotonically decreases the normalized cut. This has important implications: a) eigenvector-based algorithms, which can be computationally prohibitive, are not essential for minimizing normalized cuts, b) various techniques, such as local search and acceleration schemes, may be used to improve the quality as well as speed of kernel k-means. Finally, we present results on several interesting data sets, including diametrical clustering of large gene-expression matrices and a handwriting recognition data set. 1. Origins, Purposes, Limitations And The Data Bank 2. A Geometrical Approach To Factor Analysis 3. The Orthogonal Extraction Of Factors 4. Rotating And Interpreting Factors 5. Confirmatory Factor Analysis And Cluster Analysis 6. Some Applications Appendix A Considerations When Carrying Out A Factor Analysis Appendix B Matrix Algebra Appendix C Finding Factors Using The Centroid Method Appendix D Significance Levels For Pearson Product-Moment Correlation Coefficients References Index. Non-negative matrix factorization (NMF) is a recently developed technique for finding parts-based, linear representations of non-negative data. Although it has successfully been applied in several applications, it does not always result in parts-based representations. In this paper, we show how explicitly incorporating the notion of 'sparseness' improves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF and for our extension. Our hope is that this will further the application of these methods to solving novel data-analysis problems. Recently many scientific and engineering applications have involved the challenging task of analyzing large amounts of unsorted high-dimensional data that have very complicated structures. From both geometric and statistical points of view, such unsorted data are considered mixed as different parts of the data have significantly different structures which cannot be described by a single model. In this paper we propose to use subspace arrangements—a union of multiple subspaces—for modeling mixed data: each subspace in the arrangement is used to model just a homogeneous subset of the data. Thus, multiple subspaces together can capture the heterogeneous structures within the data set. In this paper, we give a comprehensive introduction to a new approach for the estimation of subspace arrangements. This is known as generalized principal component analysis (GPCA). 
In particular, we provide a comprehensive summary of important algebraic properties and statistical facts that are crucial for making the inference of subspace arrangements both efficient and robust, even when the given data are corrupted by noise or contaminated with outliers. This new method in many ways improves and generalizes extant methods for modeling or clustering mixed data. There have been successful applications of this new method to many real-world problems in computer vision, image processing, and system identification. In this paper, we will examine several of those representative applications. This paper is intended to be expository in nature. However, in order that this may serve as a more complete reference for both theoreticians and practitioners, we take the liberty of filling in several gaps between the theory and the practice in the existing literature. We propose a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space. Our method is based on the fact that each point in a union of subspaces has a SR with respect to a dictionary formed by all other data points. In general, finding such a SR is NP hard. Our key contribution is to show that, under mild assumptions, the SR can be obtained exactly' by using l1 optimization. The segmentation of the data is obtained by applying spectral clustering to a similarity matrix built from this SR. Our method can handle noise, outliers as well as missing data. We apply our subspace clustering algorithm to the problem of segmenting multiple motions in video. Experiments on 167 video sequences show that our approach significantly outperforms state-of-the-art methods. Digital data explosion mandates the development of scalable tools to organize the data in a meaningful and easily accessible form. Clustering is a commonly used tool for data organization. However, many clustering algorithms designed to handle large data sets assume linear separability of data and hence do not perform well on real world data sets. While kernel-based clustering algorithms can capture the non-linear structure in data, they do not scale well in terms of speed and memory requirements when the number of objects to be clustered exceeds tens of thousands. We propose an approximation scheme for kernel k-means, termed approximate kernel k-means, that reduces both the computational complexity and the memory requirements by employing a randomized approach. We show both analytically and empirically that the performance of approximate kernel k-means is similar to that of the kernel k-means algorithm, but with dramatically reduced run-time complexity and memory requirements.
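The sparse-representation clustering idea summarized above can be sketched in a few lines: each point is expressed as a sparse combination of the other points via l1-regularized regression, and spectral clustering is run on the resulting affinity. This is a simplified sketch with no noise or outlier handling; the regularization strength and the scikit-learn solvers are illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.linear_model import Lasso

def sparse_subspace_clustering(X, n_clusters, alpha=0.01):
    # X: (n_samples, n_features). Each point is written as a sparse combination
    # of the other points (self-expressiveness); spectral clustering is then run
    # on the symmetrized magnitudes of the coefficients.
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        reg = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        reg.fit(X[others].T, X[i])            # columns of the design are the other points
        C[i, others] = reg.coef_
    W = np.abs(C) + np.abs(C).T               # symmetric affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)

# Demo on two one-dimensional subspaces (lines) embedded in 3-D:
rng = np.random.default_rng(0)
X = np.vstack([np.outer(rng.normal(size=40), [1.0, 0.0, 0.0]),
               np.outer(rng.normal(size=40), [0.0, 1.0, 1.0])])
print(sparse_subspace_clustering(X, n_clusters=2))
```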
Abstract of query paper
Cite abstracts
927
926
This paper proposes a novel training scheme for fast matching models in Search Ads, which is motivated by the real challenges in model training. The first challenge stems from the pursuit of high throughput, which prohibits the deployment of inseparable architectures, and hence greatly limits the model accuracy. The second problem arises from the heavy dependency on human provided labels, which are expensive and time-consuming to collect, yet how to leverage unlabeled search log data is rarely studied. The proposed training framework aims at mitigating both issues, by treating the stronger but undeployable models as annotators, and learning a deployable model from both human provided relevance labels and weakly annotated search log data. Specifically, we first construct multiple auxiliary tasks from the enumerated relevance labels, and train the annotators by jointly learning from those related tasks. The annotation models are then used to assign scores to both labeled and unlabeled training samples. The deployable model is first learnt on the scored unlabeled data, and then fine-tuned on scored labeled data, by leveraging both labels and scores via minimizing the proposed label-aware weighted loss. In our experiments, training with the proposed framework outperforms the baseline that directly learns from relevance labels by a large margin, and improves data efficiency substantially by dispensing with 80% of the labeled samples. The proposed framework allows us to improve the fast matching model by learning from stronger annotators while keeping its architecture unchanged. Meanwhile, our training framework offers a principled manner to leverage search log data in the training phase, which could effectively alleviate our dependency on human provided labels.
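A hypothetical, toy-scale sketch of the pipeline described above, with linear least-squares models standing in for the annotator and the deployable model; the auxiliary-task construction, the model architectures, and the exact form of the label-aware weighted loss are not given here, so the mixing of human labels and annotator scores below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
w_true = rng.normal(size=d)

# Toy stand-ins: labeled (features, human label) samples and unlabeled features.
X_lab = rng.normal(size=(100, d))
y_lab = (X_lab @ w_true > 0).astype(float)
X_unlab = rng.normal(size=(5000, d))

# 1) Train a stronger "annotator" on the labeled data (here: a least-squares fit).
w_annot = np.linalg.lstsq(X_lab, y_lab, rcond=None)[0]

# 2) The deployable "student" is first learnt on annotator scores for the unlabeled data.
s_unlab = X_unlab @ w_annot
w_student = np.linalg.lstsq(X_unlab, s_unlab, rcond=None)[0]

# 3) Fine-tune the student on labeled data with a weighted target mixing the human
#    label and the annotator score (an illustrative stand-in for the label-aware
#    weighted loss; the 0.7/0.3 weights are arbitrary).
target = 0.7 * y_lab + 0.3 * (X_lab @ w_annot)
for _ in range(200):                          # plain gradient descent on squared error
    grad = X_lab.T @ (X_lab @ w_student - target) / len(X_lab)
    w_student -= 0.1 * grad
```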
This paper presents a series of new latent semantic models based on a convolutional neural network (CNN) to learn low-dimensional semantic vectors for search queries and Web documents. By using the convolution-max pooling operation, local contextual information at the word n-gram level is modeled first. Then, salient local features in a word sequence are combined to form a global feature vector. Finally, the high-level semantic information of the word sequence is extracted to form a global vector representation. The proposed models are trained on clickthrough data by maximizing the conditional likelihood of clicked documents given a query, using stochastic gradient ascent. The new models are evaluated on a Web document ranking task using a large-scale, real-world data set. Results show that our model significantly outperforms other semantic models, which were state-of-the-art in retrieval performance prior to this work. We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model. Models such as latent semantic analysis and those based on neural embeddings learn distributed representations of text, and match the query against the document in the latent semantic space. In traditional information retrieval models, on the other hand, terms have discrete or local representations, and the relevance of a document is determined by the exact matches of query terms in the body text. We hypothesize that matching with distributed representations complements matching with traditional local representations, and that a combination of the two is favourable. We propose a novel document ranking model composed of two separate deep neural networks, one that matches the query and the document using a local representation, and another that matches the query and the document using learned distributed representations. The two networks are jointly trained as part of a single neural network. We show that this combination or 'duet' performs significantly better than either neural network individually on a Web page ranking task, and significantly outperforms traditional baselines and other recently proposed models based on neural networks. Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs need to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain.
Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3 absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers. A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. initial tests find this completely automatic method for retrieval to be promising. Neural ranking models for information retrieval (IR) use shallow or deep neural networks to rank search results in response to a query. Traditional learning to rank models employ machine learning techniques over hand-crafted IR features. By contrast, neural models learn representations of language from raw text that can bridge the gap between query and document vocabulary. Unlike classical IR models, these new machine learning based approaches are data-hungry, requiring large scale training data before they can be deployed. This tutorial introduces basic concepts and intuitions behind neural IR models, and places them in the context of traditional retrieval models. We begin by introducing fundamental concepts of IR and different neural and non-neural approaches to learning vector representations of text. We then review shallow neural IR methods that employ pre-trained neural term embeddings without learning the IR task end-to-end. We introduce deep neural networks next, discussing popular deep architectures. Finally, we review the current DNN models for information retrieval. We conclude with a discussion on potential future directions for neural IR. As an alternative to question answering methods based on feature engineering, deep learning approaches such as convolutional neural networks (CNNs) and Long Short-Term Memory Models (LSTMs) have recently been proposed for semantic matching of questions and answers. 
To achieve good results, however, these models have been combined with additional features such as word overlap or BM25 scores. Without this combination, these models perform significantly worse than methods based on linguistic feature engineering. In this paper, we propose an attention based neural matching model for ranking short answer text. We adopt value-shared weighting scheme instead of position-shared weighting scheme for combining different matching signals and incorporate question term importance learning using question attention network. Using the popular benchmark TREC QA data, we show that the relatively simple aNMM model can significantly outperform other neural network models that have been used for the question answering task, and is competitive with models that are combined with additional features. When aNMM is combined with additional features, it outperforms all baselines. Matching natural language sentences is central for many applications such as information retrieval and question answering. Existing deep models rely on a single sentence representation or multiple granularity representations for matching. However, such methods cannot well capture the contextualized local information in the matching process. To tackle this problem, we present a new deep architecture to match two sentences with multiple positional sentence representations. Specifically, each positional sentence representation is a sentence representation at this position, generated by a bidirectional long short term memory (Bi-LSTM). The matching score is finally produced by aggregating interactions between these different positional sentence representations, through k-Max pooling and a multi-layer perceptron. Our model has several advantages: (1) By using Bi-LSTM, rich context of the whole sentence is leveraged to capture the contextualized local information in each positional sentence representation; (2) By matching with multiple positional sentence representations, it is flexible to aggregate different important contextualized local information in a sentence to support the matching; (3) Experiments on different tasks such as question answering and sentence completion demonstrate the superiority of our model. Latent semantic models, such as LSA, intend to map a query to its relevant documents at the semantic level where keyword-based matching often fails. In this study we strive to develop a series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them. The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data. To make our models applicable to large-scale Web search applications, we also use a technique called word hashing, which is shown to effectively scale up our semantic models to handle large vocabularies which are common in such tasks. The new models are evaluated on a Web document ranking task using a real-world data set. Results show that our best model significantly outperforms other latent semantic models, which were considered state-of-the-art in the performance prior to the work presented in this paper. 
In this paper, we propose a new latent semantic model that incorporates a convolutional-pooling structure over word sequences to learn low-dimensional, semantic vector representations for search queries and Web documents. In order to capture the rich contextual structures in a query or a document, we start with each word within a temporal context window in a word sequence to directly capture contextual features at the word n-gram level. Next, the salient word n-gram features in the word sequence are discovered by the model and are then aggregated to form a sentence-level feature vector. Finally, a non-linear transformation is applied to extract high-level semantic information to generate a continuous vector representation for the full text string. The proposed convolutional latent semantic model (CLSM) is trained on clickthrough data and is evaluated on a Web document ranking task using a large-scale, real-world data set. Results show that the proposed model effectively captures salient semantic information in queries and documents for the task while significantly outperforming previous state-of-the-art semantic models. Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank). Semantic matching is of central importance to many natural language tasks [2,28]. A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge on language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study on a variety of matching tasks demonstrates the efficacy of the proposed model on a variety of matching tasks and its superiority to competitor models. In recent years, deep neural networks have led to exciting breakthroughs in speech recognition, computer vision, and natural language processing (NLP) tasks. However, there have been few positive results of deep models on ad-hoc retrieval tasks. This is partially due to the fact that many important characteristics of the ad-hoc retrieval task have not been well addressed in deep models yet. Typically, the ad-hoc retrieval task is formalized as a matching problem between two pieces of text in existing work using deep models, and treated equivalent to many NLP tasks such as paraphrase identification, question answering and automatic conversation. 
However, we argue that the ad-hoc retrieval task is mainly about relevance matching while most NLP matching tasks concern semantic matching, and there are some fundamental differences between these two matching tasks. Successful relevance matching requires proper handling of the exact matching signals, query term importance, and diverse matching requirements. In this paper, we propose a novel deep relevance matching model (DRMM) for ad-hoc retrieval. Specifically, our model employs a joint deep architecture at the query term level for relevance matching. By using matching histogram mapping, a feed forward matching network, and a term gating network, we can effectively deal with the three relevance matching factors mentioned above. Experimental results on two representative benchmark collections show that our model can significantly outperform some well-known retrieval models as well as state-of-the-art deep matching models. How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS), paraphrase identification (PI) and textual entailment (TE). Most prior work (i) deals with one individual task by fine-tuning a specific system; (ii) models each sentence's representation separately, rarely considering the impact of the other sentence; or (iii) relies fully on manually designed, task-specific linguistic features. This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences. We make three contributions. (i) The ABCNN can be applied to a wide variety of tasks that require modeling of sentence pairs. (ii) We propose three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes into consideration its counterpart. These interdependent sentence pair representations are more powerful than isolated sentence representations. (iii) ABCNNs achieve state-of-the-art performance on AS, PI and TE tasks. We release code at: https://github.com/yinwenpeng/Answer_Selection. In this paper we address the following problem in web document and information retrieval (IR): How can we use long-term context information to gain better IR performance? Unlike common IR methods that use a bag of words representation for queries and documents, we treat them as a sequence of words and use long short term memory (LSTM) to capture contextual dependencies. To the best of our knowledge, this is the first time that LSTM is applied to information retrieval tasks. Unlike training traditional LSTMs, the training strategy is different due to the special nature of the information retrieval problem. Experimental evaluation on an IR task derived from the Bing web search demonstrates the ability of the proposed method in addressing both lexical mismatch and long-term context modelling issues, thereby significantly outperforming existing state of the art methods for the web document retrieval task. This paper presents two new document ranking models for Web search based upon the methods of semantic representation and the statistical translation-based approach to information retrieval (IR). Assuming that a query is parallel to the titles of the documents clicked on for that query, large amounts of query-title pairs are constructed from clickthrough data; two latent semantic models are learned from this data. One is a bilingual topic model within the language modeling framework.
It ranks documents for a query by the likelihood of the query being a semantics-based translation of the documents. The semantic representation is language independent and learned from query-title pairs, with the assumption that a query and its paired titles share the same distribution over semantic topics. The other is a discriminative projection model within the vector space modeling framework. Unlike Latent Semantic Analysis and its variants, the projection matrix in our model, which is used to map from term vectors into sematic space, is learned discriminatively such that the distance between a query and its paired title, both represented as vectors in the projected semantic space, is smaller than that between the query and the titles of other documents which have no clicks for that query. These models are evaluated on the Web search task using a real world data set. Results show that they significantly outperform their corresponding baseline models, which are state-of-the-art. Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of two-mode and co-occurrence data, which has applications in information retrieval and filtering, natural language processing, machine learning from text, and in related areas. Compared to standard Latent Semantic Analysis which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed method is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics. In order to avoid overfitting, we propose a widely applicable generalization of maximum likelihood model fitting by tempered EM. Our approach yields substantial and consistent improvements over Latent Semantic Analysis in a number of experiments. An “Interestingness Modeler” uses deep neural networks to learn deep semantic models (DSM) of “interestingness.” The DSM, consisting of two branches of deep neural networks or their convolutional versions, identifies and predicts target documents that would interest users reading source documents. The learned model observes, identifies, and detects naturally occurring signals of interestingness in click transitions between source and target documents derived from web browser logs. Interestingness is modeled with deep neural networks that map source-target document pairs to feature vectors in a latent space, trained on document transitions in view of a “context” and optional “focus” of source and target documents. Network parameters are learned to minimize distances between source documents and their corresponding “interesting” targets in that space. The resulting interestingness model has applicable uses, including, but not limited to, contextual entity searches, automatic text highlighting, prefetching documents of likely interest, automated content recommendation, automated advertisement placement, etc. Many machine learning problems can be interpreted as learning for matching two types of objects (e.g., images and captions, users and products, queries and documents, etc.). The matching level of two objects is usually measured as the inner product in a certain feature space, while the modeling effort focuses on mapping of objects from the original space to the feature space. This schema, although proven successful on a range of matching tasks, is insufficient for capturing the rich structure in the matching process of more complicated objects. 
In this paper, we propose a new deep architecture to more effectively model the complicated matching relations between two objects from heterogeneous domains. More specifically, we apply this model to matching tasks in natural language, e.g., finding sensible responses for a tweet, or relevant answers to a given question. This new architecture naturally combines the localness and hierarchy intrinsic to the natural language problems, and therefore greatly improves upon the state-of-the-art models.
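In the spirit of the matching histogram mapping used by the DRMM abstract above, the following sketch bins the cosine similarities between each query term embedding and all document term embeddings into fixed-length histograms; the bin granularity, the log-count transform, and the handling of the exact-match bin are illustrative simplifications rather than the paper's exact specification.

```python
import numpy as np

def matching_histograms(query_vecs, doc_vecs, n_bins=5):
    # query_vecs: (n_q, d) and doc_vecs: (n_d, d) term embeddings. For each query
    # term, cosine similarities to all document terms are binned into a fixed-size
    # histogram, giving a fixed-length input for a feed-forward matching network
    # regardless of document length.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = q @ d.T                                    # (n_q, n_d), values in [-1, 1]
    bins = np.linspace(-1.0, 1.0, n_bins + 1)
    hists = np.stack([np.histogram(row, bins=bins)[0] for row in sims])
    return np.log1p(hists)                            # log-count histogram variant

# Example: a 3-term query against an 8-term document with 10-dimensional embeddings.
rng = np.random.default_rng(0)
print(matching_histograms(rng.normal(size=(3, 10)), rng.normal(size=(8, 10))))
```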
Abstract of query paper
Cite abstracts
928
927
This paper proposes a novel training scheme for fast matching models in Search Ads, which is motivated by the real challenges in model training. The first challenge stems from the pursuit of high throughput, which prohibits the deployment of inseparable architectures, and hence greatly limits the model accuracy. The second problem arises from the heavy dependency on human provided labels, which are expensive and time-consuming to collect, yet how to leverage unlabeled search log data is rarely studied. The proposed training framework aims at mitigating both issues, by treating the stronger but undeployable models as annotators, and learning a deployable model from both human provided relevance labels and weakly annotated search log data. Specifically, we first construct multiple auxiliary tasks from the enumerated relevance labels, and train the annotators by jointly learning from those related tasks. The annotation models are then used to assign scores to both labeled and unlabeled training samples. The deployable model is first learnt on the scored unlabeled data, and then fine-tuned on scored labeled data, by leveraging both labels and scores via minimizing the proposed label-aware weighted loss. In our experiments, training with the proposed framework outperforms the baseline that directly learns from relevance labels by a large margin, and improves data efficiency substantially by dispensing with 80% of the labeled samples. The proposed framework allows us to improve the fast matching model by learning from stronger annotators while keeping its architecture unchanged. Meanwhile, our training framework offers a principled manner to leverage search log data in the training phase, which could effectively alleviate our dependency on human provided labels.
Manually crafted combinatorial features have been the "secret sauce" behind many successful models. For web-scale applications, however, the variety and volume of features make these manually crafted features expensive to create, maintain, and deploy. This paper proposes the Deep Crossing model which is a deep neural network that automatically combines features to produce superior models. The input of Deep Crossing is a set of individual features that can be either dense or sparse. The important crossing features are discovered implicitly by the networks, which are comprised of an embedding and stacking layer, as well as a cascade of Residual Units. Deep Crossing is implemented with a modeling tool called the Computational Network Tool Kit (CNTK), powered by a multi-GPU platform. It was able to build, from scratch, two web-scale models for a major paid search engine, and achieve superior results with only a subset of the features used in the production models. This demonstrates the potential of using Deep Crossing as a general modeling paradigm to improve existing products, as well as to speed up the development of new models with a fraction of the investment in feature engineering and acquisition of deep domain knowledge. This paper presents a series of new latent semantic models based on a convolutional neural network (CNN) to learn low-dimensional semantic vectors for search queries and Web documents. By using the convolution-max pooling operation, local contextual information at the word n-gram level is modeled first. Then, salient local features in a word sequence are combined to form a global feature vector. Finally, the high-level semantic information of the word sequence is extracted to form a global vector representation. The proposed models are trained on clickthrough data by maximizing the conditional likelihood of clicked documents given a query, using stochastic gradient ascent. The new models are evaluated on a Web document ranking task using a large-scale, real-world data set. Results show that our model significantly outperforms other semantic models, which were state-of-the-art in retrieval performance prior to this work. Latent semantic models, such as LSA, intend to map a query to its relevant documents at the semantic level where keyword-based matching often fails. In this study we strive to develop a series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them. The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data. To make our models applicable to large-scale Web search applications, we also use a technique called word hashing, which is shown to effectively scale up our semantic models to handle large vocabularies which are common in such tasks. The new models are evaluated on a Web document ranking task using a real-world data set. Results show that our best model significantly outperforms other latent semantic models, which were considered state-of-the-art in performance prior to the work presented in this paper.
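A minimal sketch of the word-hashing input and cosine scoring described in the DSSM abstract above; the deep nonlinear projection layers and the clickthrough training objective are omitted, and the toy texts and vocabulary construction are assumptions for illustration only.

```python
import numpy as np

def letter_trigrams(text):
    # Word hashing: each word is padded with boundary markers and broken into
    # letter trigrams, keeping the input vocabulary small even for very large
    # word vocabularies.
    grams = []
    for word in text.lower().split():
        padded = f"#{word}#"
        grams += [padded[i:i + 3] for i in range(len(padded) - 2)]
    return grams

def trigram_vector(text, vocab):
    v = np.zeros(len(vocab))
    for g in letter_trigrams(text):
        if g in vocab:
            v[vocab[g]] += 1.0
    return v

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Tiny illustration: score a query against two document titles.
texts = ["cheap flights to rome", "rome flight deals", "garden hose repair"]
vocab = {g: i for i, g in enumerate(sorted({g for t in texts for g in letter_trigrams(t)}))}
query, doc1, doc2 = (trigram_vector(t, vocab) for t in texts)
print(cosine(query, doc1), cosine(query, doc2))  # the first title should score higher
```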
Abstract of query paper
Cite abstracts
929
928
This paper proposes a novel training scheme for fast matching models in Search Ads, which is motivated by the real challenges in model training. The first challenge stems from the pursuit of high throughput, which prohibits the deployment of inseparable architectures, and hence greatly limits the model accuracy. The second problem arises from the heavy dependency on human provided labels, which are expensive and time-consuming to collect, yet how to leverage unlabeled search log data is rarely studied. The proposed training framework aims at mitigating both issues, by treating the stronger but undeployable models as annotators, and learning a deployable model from both human provided relevance labels and weakly annotated search log data. Specifically, we first construct multiple auxiliary tasks from the enumerated relevance labels, and train the annotators by jointly learning from those related tasks. The annotation models are then used to assign scores to both labeled and unlabeled training samples. The deployable model is first learnt on the scored unlabeled data, and then fine-tuned on scored labeled data, by leveraging both labels and scores via minimizing the proposed label-aware weighted loss. In our experiments, training with the proposed framework outperforms the baseline that directly learns from relevance labels by a large margin, and improves data efficiency substantially by dispensing with 80% of the labeled samples. The proposed framework allows us to improve the fast matching model by learning from stronger annotators while keeping its architecture unchanged. Meanwhile, our training framework offers a principled manner to leverage search log data in the training phase, which could effectively alleviate our dependency on human provided labels.
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data. This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, sentiment classification, and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks. Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.
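The attention mechanism central to the abstracts above can be written compactly; the sketch below implements single-head scaled dot-product attention as defined for the Transformer, with masking and the multi-head projections omitted for brevity.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v). Scores are scaled by sqrt(d_k)
    # before the row-wise softmax, as in the Transformer's attention definition.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Example: 3 query positions attending over 4 key/value positions.
rng = np.random.default_rng(0)
out = scaled_dot_product_attention(rng.normal(size=(3, 8)),
                                   rng.normal(size=(4, 8)),
                                   rng.normal(size=(4, 16)))
print(out.shape)  # (3, 16)
```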
Abstract of query paper
Cite abstracts
930
929
This paper proposes a novel training scheme for fast matching models in Search Ads, which is motivated by the real challenges in model training. The first challenge stems from the pursuit of high throughput, which prohibits the deployment of inseparable architectures, and hence greatly limits the model accuracy. The second problem arises from the heavy dependency on human provided labels, which are expensive and time-consuming to collect, yet how to leverage unlabeled search log data is rarely studied. The proposed training framework aims at mitigating both issues, by treating the stronger but undeployable models as annotators, and learning a deployable model from both human provided relevance labels and weakly annotated search log data. Specifically, we first construct multiple auxiliary tasks from the enumerated relevance labels, and train the annotators by jointly learning from those related tasks. The annotation models are then used to assign scores to both labeled and unlabeled training samples. The deployable model is first learnt on the scored unlabeled data, and then fine-tuned on scored labeled data, by leveraging both labels and scores via minimizing the proposed label-aware weighted loss. In our experiments, training with the proposed framework outperforms the baseline that directly learns from relevance labels by a large margin, and improves data efficiency substantially by dispensing with 80% of the labeled samples. The proposed framework allows us to improve the fast matching model by learning from stronger annotators while keeping its architecture unchanged. Meanwhile, our training framework offers a principled manner to leverage search log data in the training phase, which could effectively alleviate our dependency on human provided labels.
State-of-the-art visual perception models for a wide range of tasks rely on supervised pretraining. ImageNet classification is the de facto pretraining task for these models. Yet, ImageNet is now nearly ten years old and is by modern standards "small". Even so, relatively little is known about the behavior of pretraining with datasets that are multiple orders of magnitude larger. The reasons are obvious: such datasets are difficult to collect and annotate. In this paper, we present a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images. Our experiments demonstrate that training for large-scale hashtag prediction leads to excellent results. We show improvements on several image classification and object detection tasks, and report the highest ImageNet-1k single-crop, top-1 accuracy to date: 85.4% (97.6% top-5). We also perform extensive experiments that provide novel empirical data on the relationship between large-scale pretraining and transfer learning performance.
Abstract of query paper
Cite abstracts
931
930
This paper proposes a framework to analyze an emerging wireless architecture where vehicles collect data from devices. Using stochastic geometry, the devices are modeled by a planar Poisson point process. Independently, roads and vehicles are modeled by a Poisson line process and a Cox point process, respectively. For any given time, a vehicle is assumed to communicate with a roadside device in a disk of radius @math centered at the vehicle, which is referred to as the coverage disk. We study the proposed network by analyzing its short-term and long-term behaviors based on its space and time performance metrics, respectively. As a short-term analysis, we explicitly derive the signal-to-interference ratio distribution of the typical vehicle and the area spectral efficiency of the proposed network. As a long-term analysis, we derive the area fraction of the coverage disks and then compute the latency of the network by deriving the distribution of the minimum waiting time of a typical device to be covered by a disk. Leveraging these properties, we analyze various trade-off relationships and optimize the network utility. We further investigate these trade-offs through comparison with existing cellular networks.
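A Monte Carlo sketch of the spatial model described above, under assumed intensities and window size: devices form a planar Poisson point process, roads a Poisson line process, vehicles a Poisson process on each road (a Cox process overall), and a device counts as covered if it falls within distance r of some vehicle. Boundary effects are ignored and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
R, r = 10.0, 0.5                            # window radius, coverage-disk radius
lam_dev, lam_road, lam_veh = 2.0, 0.3, 1.0  # device, road-line, vehicle intensities

# Devices: homogeneous Poisson point process in a disk of radius R.
n_dev = rng.poisson(lam_dev * np.pi * R**2)
a = rng.uniform(0, 2 * np.pi, n_dev)
rad = R * np.sqrt(rng.uniform(0, 1, n_dev))
devices = np.c_[rad * np.cos(a), rad * np.sin(a)]

# Roads: Poisson line process, lines parameterized by signed distance rho and angle theta.
n_road = rng.poisson(lam_road * 2 * np.pi * R)   # mean number of lines hitting the disk
rho = rng.uniform(-R, R, n_road)
theta = rng.uniform(0, np.pi, n_road)

# Vehicles: independent 1-D Poisson process on each road chord (a Cox process overall).
vehicle_list = []
for p, t in zip(rho, theta):
    half = np.sqrt(max(R**2 - p**2, 0.0))        # half-length of the chord in the window
    s = rng.uniform(-half, half, rng.poisson(lam_veh * 2 * half))
    normal = np.array([np.cos(t), np.sin(t)])
    tangent = np.array([-np.sin(t), np.cos(t)])
    vehicle_list.append(p * normal + s[:, None] * tangent)
vehicles = np.vstack(vehicle_list) if vehicle_list else np.empty((0, 2))

# Fraction of devices lying inside at least one coverage disk.
if len(vehicles):
    dists = np.linalg.norm(devices[:, None, :] - vehicles[None, :, :], axis=-1)
    covered = dists.min(axis=1) <= r
else:
    covered = np.zeros(n_dev, dtype=bool)
print("empirical covered fraction of devices:", covered.mean())
```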
In this article device-to-device (D2D) communication underlaying a 3GPP LTE-Advanced cellular network is studied as an enabler of local services with limited interference impact on the primary cellular network. The approach of the study is a tight integration of D2D communication into an LTE-Advanced network. In particular, we propose mechanisms for D2D communication session setup and management involving procedures in the LTE System Architecture Evolution. Moreover, we present numerical results based on system simulations in an interference limited local area scenario. Our results show that D2D communication can increase the total throughput observed in the cell area. There has been significant interest and progress in the field of vehicular ad hoc networks over the last several years. VANETs comprise vehicle-to-vehicle and vehicle-to-infrastructure communications based on wireless local area network technologies. The distinctive set of candidate applications (e.g., collision warning and local traffic information for drivers), resources (licensed spectrum, rechargeable power source), and the environment (e.g., vehicular traffic flow patterns, privacy concerns) make the VANET a unique area of wireless communication. This article gives an overview of the field, providing motivations, challenges, and a snapshot of proposed solutions. In this paper, upper and lower bounds on the transmission capacity of spread-spectrum (SS) wireless ad hoc networks are derived. We define transmission capacity as the product of the maximum density of successful transmissions multiplied by their data rate, given an outage constraint. Assuming that the nodes are randomly distributed in space according to a Poisson point process, we derive upper and lower bounds for frequency hopping (FH-CDMA) and direct sequence (DS-CDMA) SS networks, which incorporate traditional modulation types (no spreading) as a special case. These bounds cleanly summarize how ad hoc network capacity is affected by the outage probability, spreading factor, transmission power, target signal-to-noise ratio (SNR), and other system parameters. Using these bounds, it can be shown that FH-CDMA obtains a higher transmission capacity than DS-CDMA on the order of M^(1-2/α), where M is the spreading factor and α > 2 is the path loss exponent. A tangential contribution is an (apparently) novel technique for obtaining tight bounds on tail probabilities of additive functionals of homogeneous Poisson point processes. We present a new architecture to handle the ongoing explosive increase in the demand for video content in wireless networks. It is based on distributed caching of the content in femto base stations with small or non-existing backhaul capacity but with considerable storage space, called helper nodes. We also consider using the wireless terminals themselves as caching helpers, which can distribute video through device-to-device communications. This approach allows an improvement in the video throughput without deployment of any additional infrastructure. The new architecture can improve video throughput by one to two orders-of-magnitude. Wireless networks are fundamentally limited by the intensity of the received signals and by their interference. Since both of these quantities depend on the spatial location of the nodes, mathematical techniques have been developed in the last decade to provide communication-theoretic results accounting for the network's geometrical configuration.
Often, the location of the nodes in the network can be modeled as random, following for example a Poisson point process. In this case, different techniques based on stochastic geometry and the theory of random geometric graphs - including point process theory, percolation theory, and probabilistic combinatorics - have led to results on the connectivity, the capacity, the outage probability, and other fundamental limits of wireless networks. This tutorial article surveys some of these techniques, discusses their application to model wireless networks, and presents some of the main results that have appeared in the literature. It also serves as an introduction to the field for the other papers in this special issue. Device-to-device communication is likely to be added to LTE in 3GPP Release 12. In principle, exploiting direct communication between nearby mobile devices will improve spectrum utilization, overall throughput, and energy consumption, while enabling new peer-to-peer and location-based applications and services. D2D-enabled LTE devices can also become competitive for fallback public safety networks, which must function when cellular networks are not available or fail. Introducing D2D poses many challenges and risks to the long-standing cellular architecture, which is centered around the base station. We provide an overview of D2D standardization activities in 3GPP, identify outstanding technical challenges, draw lessons from initial evaluation studies, and summarize "best practices" in the design of a D2D-enabled air interface for LTE-based cellular networks. Spectrum sharing between wireless networks improves the efficiency of spectrum usage, and thereby alleviates spectrum scarcity due to growing demands for wireless broadband access. To improve the usual underutilization of the cellular uplink spectrum, this paper addresses spectrum sharing between a cellular uplink and a mobile ad hoc network. These networks access either all frequency subchannels or their disjoint subsets, called spectrum underlay and spectrum overlay, respectively. Given these spectrum sharing methods, the capacity trade-off between the coexisting networks is analyzed based on the transmission capacity of a network with Poisson distributed transmitters. This metric is defined as the maximum density of transmitters subject to an outage constraint for a given signal-to-interference ratio (SIR). Using tools from stochastic geometry, the transmission-capacity trade-off between the coexisting networks is analyzed, where both spectrum overlay and underlay as well as successive interference cancellation (SIC) are considered. In particular, for small target outage probability, the transmission capacities of the coexisting networks are proved to satisfy a linear equation, whose coefficients depend on the spectrum sharing method and whether SIC is applied. This linear equation shows that spectrum overlay is more efficient than spectrum underlay. Furthermore, this result also provides insight into the effects of network parameters on transmission capacities, including link diversity gains, transmission distances, and the base station density. In particular, SIC is shown to increase the transmission capacities of both coexisting networks by a linear factor, which depends on the interference-power threshold for qualifying canceled interferers.
While operators have finally started to deploy fourth generation broadband technology, many believe it will still be insufficient to meet the anticipated demand in mobile traffic over the coming years. Generally, the natural way to cope with traffic acceleration is to reduce cell size, and this can be done in many ways. The most obvious method is via picocells, but this requires additional CAPEX and OPEX investment to install and manage these new base stations. Another approach, which avoids this additional CAPEX OPEX, involves offloading cellular traffic onto direct D2D connections whenever the users involved are in proximity. Given that most client devices are capable of establishing concurrent cellular and WiFi connections today, we expect the majority of immediate gains from this approach to come from the use of the unlicensed bands. Interference is a main limiting factor of the performance of a wireless ad hoc network. The temporal and the spatial correlation of the interference makes the outages correlated temporally (important for retransmissions) and spatially correlated (important for routing). In this letter we quantify the temporal and spatial correlation of the interference in a wireless ad hoc network whose nodes are distributed as a Poisson point process on the plane when ALOHA is used as the multiple-access scheme. The performance of wireless networks depends critically on their spatial configuration, because received signal power and interference depend critically on the distances between numerous transmitters and receivers. This is particularly true in emerging network paradigms that may include femtocells, hotspots, relays, white space harvesters, and meshing approaches, which are often overlaid with traditional cellular networks. These heterogeneous approaches to providing high-capacity network access are characterized by randomly located nodes, irregularly deployed infrastructure, and uncertain spatial configurations due to factors like mobility and unplanned user-installed access points. This major shift is just beginning, and it requires new design approaches that are robust to spatial randomness, just as wireless links have long been designed to be robust to fading. The objective of this article is to illustrate the power of spatial models and analytical techniques in the design of wireless networks, and to provide an entry-level tutorial. Spatial Aloha is probably the simplest medium access protocol to be used in a large mobile ad hoc network: each station tosses a coin independently of everything else and accesses the channel if it gets heads. In a network where stations are randomly and homogeneously located in the Euclidean plane, there is a way to tune the bias of the coin so as to obtain the best possible compromise between spatial reuse and per transmitter throughput. This paper shows how to address this questions using stochastic geometry and more precisely Poisson shot noise field theory. The theory that is developed is fully computational and leads to new closed form expressions for various kinds of spatial averages (like e.g. outage, throughput or transport). It also allows one to derive general scaling laws that hold for general fading assumptions. We exemplify its flexibility by analyzing a natural variant of Spatial Aloha that we call Opportunistic Aloha and that consists in replacing the coin tossing by an evaluation of the quality of the channel of each station to its receiver and a selection of the stations with good channels (e.g. 
fading) conditions. We show how to adapt the general machinery to this variant and how to optimize and implement it. We show that when properly tuned, Opportunistic Aloha very significantly outperforms Spatial Aloha, with e.g. a mean throughput per unit area twice higher for Rayleigh fading scenarios with typical parameters. An Aloha-type access control mechanism for large mobile, multihop, wireless networks is defined and analyzed. This access scheme is designed for the multihop context, where it is important to find a compromise between the spatial density of communications and the range of each transmission. More precisely, the analysis aims at optimizing the product of the number of simultaneously successful transmissions per unit of space (spatial reuse) by the average range of each transmission. The optimization is obtained via an averaging over all Poisson configurations for the location of interfering mobiles, where an exact evaluation of signal over noise ratio is possible. The main mathematical tools stem from stochastic geometry and are spatial versions of the so-called additive and max shot noise processes. The resulting medium access control (MAC) protocol exhibits some interesting properties. First, it can be implemented in a decentralized way provided some local geographic information is available to the mobiles. In addition, its transport capacity is proportional to the square root of the density of mobiles which is the upper bound of Gupta and Kumar. Finally, this protocol is self-adapting to the node density and it does not require prior knowledge of this density. In cellular networks, proximity users may communicate directly without going through the base station, which is called Device-to-device (D2D) communications and it can improve spectral efficiency. However, D2D communications may generate interference to the existing cellular networks if not designed properly. In this paper, we study a resource allocation problem to maximize the overall network throughput while guaranteeing the quality-of-service (QoS) requirements for both D2D users and regular cellular users (CUs). A three-step scheme is proposed. It first performs admission control and then allocates powers for each admissible D2D pair and its potential CU partners. Next, a maximum weight bipartite matching based scheme is developed to select a suitable CU partner for each admissible D2D pair to maximize the overall network throughput. Numerical results show that the proposed scheme can significantly improve the performance of the hybrid system in terms of D2D access rate and the overall network throughput. The performance of D2D communications depends on D2D user locations, cell radius, the numbers of active CUs and D2D pairs, and the maximum power constraint for the D2D pairs.
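The stochastic-geometry analyses summarized above all rest on the distribution of the SIR at a typical receiver when the interferers form a Poisson point process thinned by ALOHA. The following is a minimal Monte Carlo sketch of that computation under Rayleigh fading and power-law path loss; it is not code from any of the cited papers, and the density, path-loss exponent, link distance and SIR threshold are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (placeholders, not values from the cited papers).
lam = 1e-3        # interferer density per m^2 (ALOHA-thinned Poisson point process)
alpha = 4.0       # path-loss exponent
r_tx = 10.0       # distance between the typical receiver and its own transmitter (m)
theta = 1.0       # SIR threshold (linear scale)
radius = 1000.0   # radius of the simulation window (m)
n_trials = 5000

def one_trial():
    # Number of interferers in a disk around the receiver, then uniform placement
    # (a PPP conditioned on its number of points is uniform in the window).
    n = rng.poisson(lam * np.pi * radius**2)
    r = radius * np.sqrt(rng.random(n))
    # Rayleigh fading -> exponentially distributed power gains on every link.
    h_sig = rng.exponential(1.0)
    h_int = rng.exponential(1.0, n)
    signal = h_sig * r_tx ** (-alpha)
    interference = np.sum(h_int * r ** (-alpha)) if n > 0 else 0.0
    return signal > theta * interference

coverage = np.mean([one_trial() for _ in range(n_trials)])
print(f"Empirical P(SIR > {theta}) ~ {coverage:.3f}")
# For this setting the estimate can be checked against the classical closed form
# P(SIR > theta) = exp(-lam * pi * r_tx^2 * theta^(2/alpha) * Gamma(1+2/alpha) * Gamma(1-2/alpha)).
```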
Abstract of query paper
Cite abstracts
932
931
This paper proposes a framework to analyze an emerging wireless architecture where vehicles collect data from devices. Using stochastic geometry, the devices are modeled by a planar Poisson point process. Independently, roads and vehicles are modeled by a Poisson line process and a Cox point process, respectively. For any given time, a vehicle is assumed to communicate with a roadside device in a disk of radius @math centered at the vehicle, which is referred to as the coverage disk. We study the proposed network by analyzing its short-term and long-term behaviors based on its space and time performance metrics, respectively. As short-term analysis, we explicitly derive the signal-to-interference ratio distribution of the typical vehicle and the area spectral efficiency of the proposed network. As long-term analysis, we derive the area fraction of the coverage disks and then compute the latency of the network by deriving the distribution of the minimum waiting time of a typical device to be covered by a disk. Leveraging these properties, we analyze various trade-off relationships and optimize the network utility. We further investigate these trade-offs using comparison with existing cellular networks.
This paper analyzes an emerging architecture of cellular network utilizing both planar base stations uniformly distributed in the Euclidean plane and base stations located on roads. An example of this architecture is that where, in addition to conventional planar cellular base stations and users, vehicles also play the role of both base stations and users. A Poisson line process is used to model the road network and, conditionally on the lines, linear Poisson point processes are used to model the vehicles on the roads. The conventional planar base stations and users are modeled by the independent planar Poisson point processes. We use Palm calculus to investigate the statistical properties of a typical user in such a network. Specifically, this paper discusses two different Palm distributions, with respect to the user point processes depending on its type: planar or vehicular. We derive the distance to the nearest base station, the association of the typical users, and the coverage probability of the typical user. Furthermore, we provide a comprehensive characterization of coverage of all possible cellular transmissions in this setting, namely, vehicle-to-vehicle, vehicle-to-infrastructure, infrastructure-to-vehicle, and infrastructure-to-infrastructure. In this paper, we introduce a new population model. Taking the geometry of cities into account by adding roads, we build a Cox process driven by a Poisson line tessellation. We perform several shot-noise computations according to various generalizations of our original process. This allows us to derive analytical formulas for the uplink coverage probability in each case. Tessellations are subdivisions of d-dimensional space into non-overlapping "cells". Voronoi tessellations are produced by first considering a set of points (known as nuclei) in d-space, and then defining cells as the set of points which are closest to each nuclei. A random Voronoi tessellation is produced by supposing that the location of each nuclei is determined by some random process. They provide models for many natural phenomena as diverse as the growth of crystals, the territories of animals, the development of regional market areas, and in subjects such as computational geometry and astrophysics. This volume provides an introduction to random Voronoi tessellations by presenting a survey of the main known results and the directions in which research is proceeding. Throughout the volume, mathematical and rigorous proofs are given, making this essentially a self-contained account in which no background knowledge of the subject is assumed. In this paper, we consider a vehicular network in which the wireless nodes are located on a system of roads. We model the roadways, which are predominantly straight and randomly oriented, by a Poisson line process (PLP) and the locations of nodes on each road as a homogeneous 1D Poisson point process. Assuming that each node transmits independently, the locations of transmitting and receiving nodes are given by two Cox processes driven by the same PLP. For this setup, we derive the coverage probability of a typical receiver, which is an arbitrarily chosen receiving node, assuming independent Nakagami- @math fading over all wireless channels. Assuming that the typical receiver connects to its closest transmitting node in the network, we first derive the distribution of the distance between the typical receiver and the serving node to characterize the desired signal power. 
We then characterize coverage probability for this setup, which involves two key technical challenges. First, we need to handle several cases as the serving node can possibly be located on any line in the network and the corresponding interference experienced at the typical receiver is different in each case. Second, conditioning on the serving node imposes constraints on the spatial configuration of lines, which requires careful analysis of the conditional distribution of the lines. We address these challenges in order to characterize the interference experienced at the typical receiver. We then derive an exact expression for coverage probability in terms of the derivative of Laplace transform of interference power distribution. We analyze the trends in coverage probability as a function of the network parameters: line density and node density. We also provide some theoretical insights by studying the asymptotic characteristics of coverage probability.
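The Cox-process constructions above (vehicles as 1D Poisson point processes placed on a Poisson line process of roads) can be simulated directly. Below is a small sketch, under one common parameterization of the line process, that samples roads hitting a disk, drops vehicles on them, and estimates the distance from the typical receiver at the origin to the nearest vehicle; the intensities and window size are made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (placeholders).
mu_l = 5e-3    # controls the mean number of roads crossing the disk (Poisson line process)
lam_v = 1e-2   # vehicle intensity per unit road length (1D PPP on each line)
R = 2000.0     # radius of the observation disk; the typical receiver sits at the origin
n_trials = 2000

def nearest_vehicle_distance():
    # A line hitting the disk is described by a signed perpendicular distance rho in (-R, R)
    # and a normal angle theta in (0, pi); the number of such lines is Poisson distributed.
    n_lines = rng.poisson(mu_l * 2 * R)
    best = np.inf
    for _ in range(n_lines):
        rho = rng.uniform(-R, R)
        half_chord = np.sqrt(R**2 - rho**2)          # half-length of the chord inside the disk
        n_veh = rng.poisson(lam_v * 2 * half_chord)  # vehicles on this road segment
        if n_veh == 0:
            continue
        t = rng.uniform(-half_chord, half_chord, n_veh)
        # Distance from the origin to a point at offset t along the line is sqrt(rho^2 + t^2).
        best = min(best, np.sqrt(rho**2 + np.min(t**2)))
    return best

d = np.array([nearest_vehicle_distance() for _ in range(n_trials)])
d = d[np.isfinite(d)]
print("median distance to the nearest vehicle ~", np.median(d))
```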
Abstract of query paper
Cite abstracts
933
932
This paper proposes a framework to analyze an emerging wireless architecture where vehicles collect data from devices. Using stochastic geometry, the devices are modeled by a planar Poisson point process. Independently, roads and vehicles are modeled by a Poisson line process and a Cox point process, respectively. For any given time, a vehicle is assumed to communicate with a roadside device in a disk of radius @math centered at the vehicle, which is referred to as the coverage disk. We study the proposed network by analyzing its short-term and long-term behaviors based on its space and time performance metrics, respectively. As short-term analysis, we explicitly derive the signal-to-interference ratio distribution of the typical vehicle and the area spectral efficiency of the proposed network. As long-term analysis, we derive the area fraction of the coverage disks and then compute the latency of the network by deriving the distribution of the minimum waiting time of a typical device to be covered by a disk. Leveraging these properties, we analyze various trade-off relationships and optimize the network utility. We further investigate these trade-offs using comparison with existing cellular networks.
The capacity of ad hoc wireless networks is constrained by the mutual interference of concurrent transmissions between nodes. We study a model of an ad hoc network where n nodes communicate in random source-destination pairs. These nodes are assumed to be mobile. We examine the per-session throughput for applications with loose delay constraints, such that the topology changes over the time-scale of packet delivery. Under this assumption, the per-user throughput can increase dramatically when nodes are mobile rather than fixed. This improvement can be achieved by exploiting a form of multiuser diversity via packet relaying. This paper presents and analyzes an architecture to collect sensor data in sparse sensor networks. Our approach exploits the presence of mobile entities (called MULEs) present in the environment. MULEs pick up data from the sensors when in close range, buffer it, and drop off the data to wired access points. This can lead to substantial power savings at the sensors as they only have to transmit over a short range. This paper focuses on a simple analytical model for understanding performance as system parameters are scaled. Our model assumes two-dimensional random walk for mobility and incorporates key system variables such as number of MULEs, sensors and access points. The performance metrics observed are the data success rate (the fraction of generated data that reaches the access points) and the required buffer capacities on the sensors and the MULEs. The modeling along with simulation results can be used for further analysis and provide certain guidelines for deployment of such systems. Ad hoc networks formed by traveling vehicles are envisaged to become a common platform that will support a wide variety of applications, ranging from road safety to advertising and entertainment. The multitude of vehicular applications calls for routing schemes that satisfy user-defined delay requirements while at the same time maintaining a low level of channel utilization to allow their coexistence. This paper focuses on the development of carry-and-forward schemes that attempt to deliver data from vehicles to fixed infrastructure nodes in an urban setting. The proposed algorithms leverage local or global knowledge of traffic statistics to carefully alternate between the Data Muling and Multihop Forwarding strategies, in order to minimize communication overhead while adhering to delay constraints imposed by the application. We provide an extensive evaluation of our schemes using realistic vehicular traces on a real city map. Intermittently connected mobile networks are sparse wireless networks where most of the time there does not exist a complete path from the source to the destination. These networks fall into the general category of Delay Tolerant Networks. There are many real networks that follow this paradigm, for example, wildlife tracking sensor networks, military networks, inter-planetary networks, etc. In this context, conventional routing schemes would fail. To deal with such networks researchers have suggested using flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention, which can significantly degrade their performance. Furthermore, proposed efforts to significantly reduce the overhead of flooding-based schemes have often been plagued by large delays. 
With this in mind, we introduce a new routing scheme, called Spray and Wait, that "sprays" a number of copies into the network, and then "waits" till one of these nodes meets the destination.Using theory and simulations we show that Spray and Wait outperforms all existing schemes with respect to both average message delivery delay and number of transmissions per message delivered; its overall performance is close to the optimal scheme. Furthermore, it is highly scalable retaining good performance under a large range of scenarios, unlike other schemes. Finally, it is simple to implement and to optimize in order to achieve given performance goals in practice. Mathematical Foundation. Point Processes I--The Poisson Point Process. Random Closed Sets I--The Boolean Model. Point Processes II--General Theory. Point Processes III--Construction of Models. Random Closed Sets II--The General Case. Random Measures. Random Processes of Geometrical Objects. Fibre and Surface Processes. Random Tessellations. Stereology. References. Indexes. In this paper, we study a novel forwarding technique based on geographical location of the nodes involved and random selection of the relaying node via contention among receivers. We provide a detailed description of a MAC scheme based on these concepts and on collision avoidance and report on its energy and latency performance. A simplified analysis is given first, some relevant trade offs are highlighted, and parameter optimization is pursued. Further, a semi-Markov model is developed which provides a more accurate performance evaluation. Simulation results supporting the validity of our analytical approach are also provided. As technology rapidly progresses, more devices will combine both communication and mobility capabilities. With mobility in devices, we envision a new class of proactive networks that are able to adapt themselves, via physical movement, to meet the needs of applications. To fully realize these opportunities, effective control of device mobility and the interaction between devices is needed. In this paper, we consider the message ferrying (MF) scheme which exploits controlled mobility to transport data in delay-tolerant networks, where end-to-end paths may not exist between nodes. In the MF scheme, a set of special mobile nodes called message ferries are responsible for carrying data for nodes in the network. We study the use of multiple ferries in such networks, which may be necessary to address performance and robustness concerns. We focus on the design of ferry routes. With the possibilities of interaction between ferries, the route design problem is challenging. We present algorithms to calculate routes such that the traffic demand is met and the data delivery delay is minimized. We evaluate these algorithms under a variety of network conditions via simulations. Our goal is to guide the design of MF systems and understand the tradeoff between the incurred cost of multiple ferries and the improved performance. We show that the performance scales well with the number of ferries in terms of throughput, delay and resource requirements in both ferries and nodes. We review the rationale behind the current design of the Delay Disruption Tolerant Networking (DTN) Architecture and highlight some remaining open issues. 
Its evolution, from a focus on deep space to a broader class of heterogeneous networks that may suffer disruptions, affected design decisions spanning naming and addressing, message formats, data encoding methods, routing, congestion management and security. Having now achieved relative stability with the design, additional experience is required in long-running operational environments in order to fine tune our understanding of DTN concepts and the types of capabilities that are worth the investment in implementation complexity. We expect key management, handling of congestion, multicasting capability, and routing to remain active areas of research and development, and that DTN may continue to be an active research endeavor for at least the next few years. This paper provides an introductory overview of Vehicular Delay-Tolerant Networks. First, an introduction to Delay-Tolerant Networks and Vehicular Delay-Tolerant Networks is given. Delay-Tolerant schemes and protocols can help in situations where network connectivity is sparse or with large variations in density, or even when there is no end-to-end connectivity by providing a communications solution for non real-time applications. Some special issues like routing are addressed in the paper and an introductory description of applications and the most important projects is given. Finally, some research challenges are discussed and conclusions are detailed.
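The Spray and Wait idea described above (spray a fixed number of copies, then wait for a copy holder to meet the destination) is easy to illustrate with a toy contact model. The sketch below is not the authors' simulator: it assumes a made-up memoryless meeting model in which any pair of nodes meets independently with a small probability per time step, and it measures delivery delay under binary spraying.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy parameters (placeholders, not taken from the cited papers).
n_nodes = 50      # node 0 is the source, node 1 the destination
L = 8             # number of copies sprayed by the source
p_meet = 0.002    # per-step meeting probability for any given pair of nodes
max_steps = 20_000

def delivery_delay():
    copies = np.zeros(n_nodes, dtype=int)
    copies[0] = L
    for t in range(1, max_steps + 1):
        holders = np.flatnonzero(copies > 0)
        # "Wait" phase: any copy holder that meets the destination delivers the message.
        if (rng.random(holders.size) < p_meet).any():
            return t
        # "Spray" phase: a holder with more than one copy that meets a copy-less relay
        # hands over half of its copies (binary spraying).
        empties = [i for i in range(2, n_nodes) if copies[i] == 0]
        for a in holders:
            if copies[a] > 1 and empties and rng.random() < 1 - (1 - p_meet) ** len(empties):
                b = empties.pop(rng.integers(len(empties)))
                give = copies[a] // 2
                copies[a] -= give
                copies[b] = give
    return np.nan

delays = np.array([delivery_delay() for _ in range(300)])
print("mean delivery delay (time steps):", np.nanmean(delays))
```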
Abstract of query paper
Cite abstracts
934
933
Case studies, such as , 2015 have shown that in image summarization, such as with Google Image Search, the people in the results presented for occupations are more imbalanced with respect to sensitive attributes such as gender and ethnicity than the ground truth. Most of the existing approaches to correct for this problem in image summarization assume that the images are labelled and use the labels for training the model and correcting for biases. However, these labels may not always be present. Furthermore, it is often not possible (nor even desirable) to automatically classify images by sensitive attributes such as gender or race. Moreover, balancing according to the labels does not guarantee that the diversity will be visibly apparent - arguably the only metric that matters when selecting diverse images. We develop a novel approach that takes as input a visibly diverse control set of images and uses this set to produce images in response to a query which is similarly visibly diverse. We implement this approach using pre-trained and modified Convolutional Neural Networks like VGG-16, and evaluate our approach empirically on the Image dataset compiled and used by , 2015. We compare our results with the Google Image Search results from , 2015 and natural baselines and observe that our algorithm produces images that are accurate with respect to their similarity to the query images (on par with that of the Google Image Search results), but significantly outperforms with respect to visible diversity as measured by their similarity to our diverse control set.
Many NLP tools for English and German are based on manually annotated articles from the Wall Street Journal and Frankfurter Rundschau. The average readers of these two newspapers are middle-aged (55 and 47 years old, respectively), and the annotated articles are more than 20 years old by now. This leads us to speculate whether tools induced from these resources (such as part-of-speech taggers) put older language users at an advantage. We show that this is actually the case in both languages, and that the cause goes beyond simple vocabulary differences. In our experiments, we control for gender and region. Information environments have the power to affect people's perceptions and behaviors. In this paper, we present the results of studies in which we characterize the gender bias present in image search results for a variety of occupations. We experimentally evaluate the effects of bias in image search results on the images people choose to represent those careers and on people's perceptions of the prevalence of men and women in each occupation. We find evidence for both stereotype exaggeration and systematic underrepresentation of women in search results. We also find that people rate search results higher when they are consistent with stereotypes for a career, and shifting the representation of gender in image search results can shift people's perceptions about real-world distributions. We also discuss tensions between desires for high-quality results and broader societal goals for equality of representation in this space. Televised role portrayals and interracial interactions, as sources of vicarious experience, contribute to the development of stereotypes, prejudice, and discrimination among children. The first section of this article reviews the amount and nature of racial/ethnic content on television, including limited portrayals of racial/ethnic groups and of interracial/ethnic interaction. The second section focuses on theoretical models that help explain television's role in the development, maintenance, and modification of stereotypes, prejudice, and discrimination. The third section addresses research on the effects of television in altering stereotypes, prejudice, and discrimination, with particular attention given to media intervention programs specifically designed to address these issues (Sesame Street and Different and the Same). This article concludes with a discussion of suggestions for future research.
Abstract of query paper
Cite abstracts
935
934
Case studies, such as , 2015 have shown that in image summarization, such as with Google Image Search, the people in the results presented for occupations are more imbalanced with respect to sensitive attributes such as gender and ethnicity than the ground truth. Most of the existing approaches to correct for this problem in image summarization assume that the images are labelled and use the labels for training the model and correcting for biases. However, these labels may not always be present. Furthermore, it is often not possible (nor even desirable) to automatically classify images by sensitive attributes such as gender or race. Moreover, balancing according to the labels does not guarantee that the diversity will be visibly apparent - arguably the only metric that matters when selecting diverse images. We develop a novel approach that takes as input a visibly diverse control set of images and uses this set to produce images in response to a query which is similarly visibly diverse. We implement this approach using pre-trained and modified Convolutional Neural Networks like VGG-16, and evaluate our approach empirically on the Image dataset compiled and used by , 2015. We compare our results with the Google Image Search results from , 2015 and natural baselines and observe that our algorithm produces images that are accurate with respect to their similarity to the query images (on par with that of the Google Image Search results), but significantly outperforms with respect to visible diversity as measured by their similarity to our diverse control set.
The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
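The core geometric step described above (bias captured as a direction in embedding space, removed by projection) can be sketched in a few lines. The vectors below are tiny made-up toy embeddings, not the released Google News vectors, and using a single he/she difference as the gender direction is a simplification of the paper's PCA over several definitional pairs.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

# Toy 4-dimensional embeddings (illustrative values only).
emb = {
    "he":           np.array([ 0.9, 0.1, 0.3, 0.0]),
    "she":          np.array([-0.9, 0.1, 0.3, 0.0]),
    "receptionist": np.array([-0.5, 0.6, 0.2, 0.1]),
    "engineer":     np.array([ 0.4, 0.5, 0.3, 0.2]),
}

# 1. Estimate a gender direction from a definitional pair (a single difference
#    vector here; the paper aggregates several pairs).
g = unit(emb["he"] - emb["she"])

def neutralize(v, direction):
    """Remove the component of v that lies along the bias direction."""
    return v - np.dot(v, direction) * direction

# 2. Neutralize gender-neutral occupation words and inspect the projection.
for word in ("receptionist", "engineer"):
    before = np.dot(unit(emb[word]), g)
    after = np.dot(unit(neutralize(emb[word], g)), g)
    print(f"{word}: projection on gender direction {before:+.3f} -> {after:+.3f}")
```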
Abstract of query paper
Cite abstracts
936
935
Case studies, such as , 2015 have shown that in image summarization, such as with Google Image Search, the people in the results presented for occupations are more imbalanced with respect to sensitive attributes such as gender and ethnicity than the ground truth. Most of the existing approaches to correct for this problem in image summarization assume that the images are labelled and use the labels for training the model and correcting for biases. However, these labels may not always be present. Furthermore, it is often not possible (nor even desirable) to automatically classify images by sensitive attributes such as gender or race. Moreover, balancing according to the labels does not guarantee that the diversity will be visibly apparent - arguably the only metric that matters when selecting diverse images. We develop a novel approach that takes as input a visibly diverse control set of images and uses this set to produce images in response to a query which is similarly visibly diverse. We implement this approach using pre-trained and modified Convolutional Neural Networks like VGG-16, and evaluate our approach empirically on the Image dataset compiled and used by , 2015. We compare our results with the Google Image Search results from , 2015 and natural baselines and observe that our algorithm produces images that are accurate with respect to their similarity to the query images (on par with that of the Google Image Search results), but significantly outperforms with respect to visible diversity as measured by their similarity to our diverse control set.
In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance. In this paper, we propose a generalized Laplacian of Gaussian (LoG) (gLoG) filter for detecting general elliptical blob structures in images. The gLoG filter can not only accurately locate the blob centers but also estimate the scales, shapes, and orientations of the detected blobs. These functions can be realized by generalizing the common 3-D LoG scale-space blob detector to a 5-D gLoG scale-space one, where the five parameters are image-domain coordinates (x, y), scales (σx, σy), and orientation (θ), respectively. Instead of searching the local extrema of the image's 5-D gLoG scale space for locating blobs, a more feasible solution is given by locating the local maxima of an intermediate map, which is obtained by aggregating the log-scale-normalized convolution responses of each individual gLoG filter. The proposed gLoG-based blob detector is applied to both biomedical images and natural ones such as general road-scene images. For the biomedical applications on pathological and fluorescent microscopic images, the gLoG blob detector can accurately detect the centers and estimate the sizes and orientations of cell nuclei. These centers are utilized as markers for a watershed-based touching-cell splitting method to split touching nuclei and counting cells in segmentation-free images. For the application on road images, the proposed detector can produce promising estimation of texture orientations, achieving an accurate texture-based road vanishing point detection method. The implementation of our method is quite straightforward due to a very small number of tunable parameters. Fast-match is a fast and effective algorithm for template matching. However, when matching colour images, the images are converted into greyscale images. The colour information is lost in this process, resulting in errors in areas with distinctive colours but similar greyscale values An improved fast-match algorithm that utilises all three RGB channels to construct colour sum-of-absolute-differences (CSAD) is proposed, thus improving the sum-of-absolute-differences distance used in fast-match. In this algorithm, each pixel in the image is categorised by clustering them using density-based spatial clustering of applications with noise (DBSCAN) algorithm over the RGB vector, then the number of pixels in each category and the cumulative RGB values for each RGB channel are calculated to identify the centroid of each category. The RGB vector centroid is used as the CSAD decision criteria, and inverse of number of pixels in each category is used as the differentiating coefficient to construct a new similarity measure. 
Experiment results demonstrate that this algorithm has significant higher accuracy for matching colour images than the original fast-match algorithm.
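The gLoG detector summarized above builds on the standard scale-normalized Laplacian-of-Gaussian blob detector: compute LoG responses over a range of scales, normalize them, and look for local maxima in (scale, y, x) space. The following is a minimal sketch of that baseline (circular LoG only, not the generalized elliptical filter), using SciPy on a synthetic two-blob image; the scale range and threshold are illustrative choices.

```python
import numpy as np
from scipy import ndimage

# Synthetic test image: two bright Gaussian blobs on a dark background.
img = np.zeros((128, 128))
yy, xx = np.mgrid[0:128, 0:128]
for (cy, cx, s) in [(40, 40, 4.0), (90, 80, 8.0)]:
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * s ** 2))

# Scale-normalized Laplacian-of-Gaussian responses over a range of scales.
# The sign flip makes bright blobs produce positive peaks.
sigmas = np.linspace(2.0, 12.0, 11)
stack = np.stack([-(s ** 2) * ndimage.gaussian_laplace(img, sigma=s) for s in sigmas])

# Blob centers and scales: local maxima of the response in (scale, y, x) space,
# kept only if they exceed a fraction of the global maximum (illustrative threshold).
maxima = (stack == ndimage.maximum_filter(stack, size=3)) & (stack > 0.2 * stack.max())
for (si, y, x) in np.argwhere(maxima):
    print(f"blob at (y={y}, x={x}), estimated sigma ~ {sigmas[si]:.1f}")
```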
Abstract of query paper
Cite abstracts
937
936
Case studies, such as , 2015 have shown that in image summarization, such as with Google Image Search, the people in the results presented for occupations are more imbalanced with respect to sensitive attributes such as gender and ethnicity than the ground truth. Most of the existing approaches to correct for this problem in image summarization assume that the images are labelled and use the labels for training the model and correcting for biases. However, these labels may not always be present. Furthermore, it is often not possible (nor even desirable) to automatically classify images by sensitive attributes such as gender or race. Moreover, balancing according to the labels does not guarantee that the diversity will be visibly apparent - arguably the only metric that matters when selecting diverse images. We develop a novel approach that takes as input a visibly diverse control set of images and uses this set to produce images in response to a query which is similarly visibly diverse. We implement this approach using pre-trained and modified Convolutional Neural Networks like VGG-16, and evaluate our approach empirically on the Image dataset compiled and used by , 2015. We compare our results with the Google Image Search results from , 2015 and natural baselines and observe that our algorithm produces images that are accurate with respect to their similarity to the query images (on par with that of the Google Image Search results), but significantly outperforms with respect to visible diversity as measured by their similarity to our diverse control set.
Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large-scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization. Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.
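Both abstracts above describe the same transfer-learning recipe: reuse ImageNet-pretrained convolutional layers as a mid-level feature extractor and retrain only a small task-specific head. The sketch below illustrates that recipe with PyTorch and a pretrained VGG-16 (the same backbone the query paper modifies); it assumes a torchvision version that supports the weights enum, uses a random batch as a stand-in for a real data loader, and the number of classes and learning rate are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a VGG-16 pre-trained on ImageNet and freeze all of its parameters,
# reusing the network as a fixed mid-level feature extractor.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in vgg.parameters():
    p.requires_grad = False

# Replace the final classification layer for the new task; the new layer is trainable.
n_classes = 10  # placeholder for the target task
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, n_classes)

# Only the new head is optimized.
optimizer = torch.optim.SGD(
    (p for p in vgg.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch (stand-in for real images/labels).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, n_classes, (8,))
loss = criterion(vgg(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("fine-tuning step done, loss =", float(loss))
```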
Abstract of query paper
Cite abstracts
938
937
Style transfer is a technique for combining two images based on the activations and feature statistics in a deep learning neural network architecture. This paper studies the analogous task in the audio domain and takes a critical look at the problems that arise when adapting the original vision-based framework to handle spectrogram representations. We conclude that CNN architectures with features based on 2D representations and convolutions are better suited for visual images than for time–frequency representations of audio. Despite the awkward fit, experiments show that the Gram matrix determined “style” for audio is more closely aligned with timbral signatures without temporal structure, whereas network layer activity determining audio “content” seems to capture more of the pitch and rhythmic structures. We shed insight on several reasons for the domain differences with illustrative examples. We motivate the use of several types of one-dimensional CNNs that generate results that are better aligned with intuitive notions of audio texture than those based on existing architectures built for images. These ideas also prompt an exploration of audio texture synthesis with architectural variants for extensions to infinite textures, multi-textures, parametric control of receptive fields and the constant-Q transform as an alternative frequency scaling for the spectrogram.
Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition. Samples from the model are of high perceptual quality demonstrating the generative power of neural networks trained in a purely discriminative fashion. Within the model, textures are represented by the correlations between feature maps in several layers of the network. We show that across layers the texture representations increasingly capture the statistical properties of natural images while making object information more and more explicit. The model provides a new tool to generate stimuli for neuroscience and might offer insights into the deep representations learned by convolutional neural networks. In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.
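The texture and style representations described above are correlations between feature maps, i.e. Gram matrices computed per layer, with style distance measured between the Gram matrices of two images. The sketch below shows that computation in PyTorch on random stand-in activations; in practice the activations come from a pretrained CNN such as VGG, and the normalization constant used here is one common choice rather than the papers' exact weighting.

```python
import torch

def gram_matrix(feature_map):
    """Channel-by-channel correlation matrix of one layer's activations.

    feature_map: tensor of shape (channels, height, width).
    """
    c, h, w = feature_map.shape
    f = feature_map.reshape(c, h * w)
    return (f @ f.t()) / (c * h * w)   # normalization is one common convention

def style_loss(features_a, features_b):
    """Squared Frobenius distance between Gram matrices, summed over layers."""
    return sum(
        torch.sum((gram_matrix(a) - gram_matrix(b)) ** 2)
        for a, b in zip(features_a, features_b)
    )

# Stand-in activations for two images at two layers (random placeholders).
feats_x = [torch.randn(64, 32, 32), torch.randn(128, 16, 16)]
feats_y = [torch.randn(64, 32, 32), torch.randn(128, 16, 16)]
print("style loss:", float(style_loss(feats_x, feats_y)))
```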
Abstract of query paper
Cite abstracts
939
938
Style transfer is a technique for combining two images based on the activations and feature statistics in a deep learning neural network architecture. This paper studies the analogous task in the audio domain and takes a critical look at the problems that arise when adapting the original vision-based framework to handle spectrogram representations. We conclude that CNN architectures with features based on 2D representations and convolutions are better suited for visual images than for time–frequency representations of audio. Despite the awkward fit, experiments show that the Gram matrix determined “style” for audio is more closely aligned with timbral signatures without temporal structure, whereas network layer activity determining audio “content” seems to capture more of the pitch and rhythmic structures. We shed insight on several reasons for the domain differences with illustrative examples. We motivate the use of several types of one-dimensional CNNs that generate results that are better aligned with intuitive notions of audio texture than those based on existing architectures built for images. These ideas also prompt an exploration of audio texture synthesis with architectural variants for extensions to infinite textures, multi-textures, parametric control of receptive fields and the constant-Q transform as an alternative frequency scaling for the spectrogram.
In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.
Abstract of query paper
Cite abstracts
940
939
Obtaining sound inferences over remote networks via active or passive measurements is difficult. Active measurement campaigns face challenges of load, coverage, and visibility. Passive measurements require a privileged vantage point. Even networks under our own control too often remain poorly understood and hard to diagnose. As a step toward the democratization of Internet measurement, we consider the inferential power possible were the network to include a constant and predictable stream of dedicated lightweight measurement traffic. We posit an Internet "heartbeat," which nodes periodically send to random destinations, and show how aggregating heartbeats facilitates introspection into parts of the network that are today generally obtuse. We explore the design space of an Internet heartbeat, potential use cases, incentives, and paths to deployment.
The performance of Internet services is intrinsically tied to propagation delays between end points (i.e., network latency). Standard active probe-based or passive host-based methods for measuring end-to-end latency are difficult to deploy at scale and typically offer limited precision and accuracy. In this paper, we investigate a novel but non-obvious source of latency measurement---logs from network time protocol (NTP) servers. Using NTP-derived data for studying latency is compelling due to NTP's pervasive use in the Internet and its inherent focus on accurate end-to-end delay estimation. We consider the efficacy of an NTP-based approach for studying propagation delays by analyzing logs collected from 10 NTP servers distributed across the United States. These logs include over 73M latency measurements to 7.4M worldwide clients (as indicated by unique IP addresses) collected over the period of one day. Our initial analysis of the general characteristics of propagation delays derived from the log data reveals that delay measurements from NTP must be carefully filtered in order to extract accurate results. We develop a filtering process that removes measurements that are likely to be inaccurate. After applying our filter to NTP measurements, we report on the scope and reach for US-based clients and the characteristics of the end-to-end latency for those clients. Network operators often apply policy-based traffic filtering at the egress of edge networks. These policies can be detected by performing active measurements; however, doing so involves instrumenting every network one wishes to study. We investigate a methodology for detecting policy-based service-level traffic filtering from passive observation of traffic markers within darknets. Such markers represent traffic we expect to arrive and, therefore, whose absence is suggestive of network filtering. We study the approach with data from five large darknets over the course of one week. While we show the approach has utility to expose filtering in some cases, there are also limits to the methodology. The monitoring of packets destined for routeable, yet unused, Internet addresses has proved to be a useful technique for measuring a variety of specific Internet phenomena (e.g., worms, DDoS). In 2004, stepped beyond these targeted uses and provided one of the first generic characterizations of this non-productive traffic, demonstrating both its significant size and diversity. However, the six years that followed this study have seen tremendous changes in both the types of malicious activity on the Internet and the quantity and quality of unused address space. In this paper, we revisit the state of Internet "background radiation" through the lens of two unique data-sets: a five-year collection from a single unused /8 network block, and week-long collections from three recently allocated /8 network blocks. Through the longitudinal study of the long-lived block, comparisons between blocks, and extensive case studies of traffic in these blocks, we characterize the current state of background radiation specifically highlighting those features that remain invariant from previous measurements and those which exhibit significant differences. Of particular interest in this work is the exploration of address space pollution, in which significant non-uniform behavior is observed. 
However, unlike previous observations of differences between unused blocks, we show that increasingly these differences are the result of environmental factors (e.g., misconfiguration, location), rather than algorithmic factors. Where feasible, we offer suggestions for clean up of these polluted blocks and identify those blocks whose allocations should be withheld. In the first months of 2011, Internet communications were disrupted in several North African countries in response to civilian protests and threats of civil war. In this paper we analyze episodes of these disruptions in two countries: Egypt and Libya. Our analysis relies on multiple sources of large-scale data already available to academic researchers: BGP interdomain routing control plane data; unsolicited data plane traffic to unassigned address space; active macroscopic traceroute measurements; RIR delegation files; and MaxMind's geolocation database. We used the latter two data sets to determine which IP address ranges were allocated to entities within each country, and then mapped these IP addresses of interest to BGP-announced address ranges (prefixes) and origin ASes using publicly available BGP data repositories in the U.S. and Europe. We then analyzed observable activity related to these sets of prefixes and ASes throughout the censorship episodes. Using both control plane and data plane data sets in combination allowed us to narrow down which forms of Internet access disruption were implemented in a given region over time. Among other insights, we detected what we believe were Libya's attempts to test firewall-based blocking before they executed more aggressive BGP-based disconnection. Our methodology could be used, and automated, to detect outages or similar macroscopically disruptive events in other geographic or topological regions.
Abstract of query paper
Cite abstracts
941
940
Obtaining sound inferences over remote networks via active or passive measurements is difficult. Active measurement campaigns face challenges of load, coverage, and visibility. Passive measurements require a privileged vantage point. Even networks under our own control too often remain poorly understood and hard to diagnose. As a step toward the democratization of Internet measurement, we consider the inferential power possible were the network to include a constant and predictable stream of dedicated lightweight measurement traffic. We posit an Internet "heartbeat," which nodes periodically send to random destinations, and show how aggregating heartbeats facilitates introspection into parts of the network that are today generally obtuse. We explore the design space of an Internet heartbeat, potential use cases, incentives, and paths to deployment.
Comprising more than 61,000 servers located across nearly 1,000 networks in 70 countries worldwide, the Akamai platform delivers hundreds of billions of Internet interactions daily, helping thousands of enterprises boost the performance and reliability of their Internet applications. In this paper, we give an overview of the components and capabilities of this large-scale distributed computing platform, and offer some insight into its architecture, design principles, operation, and management.
Abstract of query paper
Cite abstracts
942
941
The article proposes a new framework for assessment of physical rehabilitation exercises based on a deep learning approach. The objective of the framework is automated quantification of patient performance in completing prescribed rehabilitation exercises, based on captured whole-body joint trajectories. The main components of the framework are metrics for measuring movement performance, scoring functions for mapping the performance metrics into numerical scores of movement quality, and deep neural network models for regressing quality scores of input movements via supervised learning. Furthermore, an overview of the existing methods for modeling and evaluation of rehabilitation movements is presented, encompassing various distance functions, dimensionality-reduction techniques, and movement models employed for this problem in prior studies. To the best of our knowledge, this is the first work that implements deep neural networks for assessment of rehabilitation performance. Multiple deep network architectures are repurposed for the task at hand and are validated on a dataset of rehabilitation exercises.
Mobility improvement for patients is one of the primary concerns of physiotherapy rehabilitation. Providing the physiotherapist and the patient with a quantified and objective measure of progress can be beneficial for monitoring the patient's performance. In this paper, two approaches are introduced for quantifying patient performance. Both approaches formulate a distance between patient data and the healthy population as the measure of performance. Distance measures are defined to capture the performance of one repetition of an exercise or multiple repetitions of the same exercise. To capture patient progress across multiple exercises, a quality measure and overall score are defined based on the distance measures and are used to quantify the overall performance for each session. The effectiveness of these measures in detecting patient progress is evaluated on rehabilitation data recorded from patients recovering from knee or hip replacement surgery. The results show that the proposed measures are able to capture the trend of patient improvement over the course of rehabilitation. The trend of improvement is not monotonic and differs between patients. This paper introduces and evaluates the use of Gaussian mixture models (GMMs) for multiple limb motion classification using continuous myoelectric signals. The focus of this work is to optimize the configuration of this classification scheme. To that end, a complete experimental evaluation of this system is conducted on a 12 subject database. The experiments examine the GMMs algorithmic issues including the model order selection and variance limiting, the segmentation of the data, and various feature sets including time-domain features and autoregressive features. The benefits of postprocessing the results using a majority vote rule are demonstrated. The performance of the GMM is compared to three commonly used classifiers: a linear discriminant analysis, a linear perceptron network, and a multilayer perceptron neural network. The GMM-based limb motion classification system demonstrates exceptional classification accuracy and results in a robust method of motion classification with low computational load. To successfully interact with and learn from humans in cooperative modes, robots need a mechanism for recognizing, characterizing, and emulating human skills. In particular, it is our interest to develop the mechanism for recognizing and emulating simple human actions, i.e., a simple activity in a manual operation where no sensory feedback is available. To this end, we have developed a method to model such actions using a hidden Markov model (HMM) representation. We proposed an approach to address two critical problems in action modeling: classifying human action-intent, and learning human skill, for which we elaborated on the method, procedure, and implementation issues in this paper. This work provides a framework for modeling and learning human actions from observations. The approach can be applied to intelligent recognition of manual actions and high-level programming of control input within a supervisory control paradigm, as well as automatic transfer of human skills to robotic systems. Real-time tracking of human body motion is an important technology in synthetic environments, robotics, and other human-computer interaction applications. This paper presents an extended Kalman filter designed for real-time estimation of the orientation of human limb segments. 
The filter processes data from small inertial magnetic sensor modules containing triaxial angular rate sensors, accelerometers, and magnetometers. The filter represents rotation using quaternions rather than Euler angles or axis angle pairs. Preprocessing of the acceleration and magnetometer measurements using the Quest algorithm produces a computed quaternion input for the filter. This preprocessing reduces the dimension of the state vector and makes the measurement equations linear. Real-time implementation and testing results of the quaternion-based Kalman filter are presented. Experimental results validate the filter design, and show the feasibility of using inertial magnetic sensor modules for real-time human body motion tracking The main objective of this paper is to develop an efficient method for learning and reproduction of complex trajectories for robot programming by demonstration. Encoding of the demonstrated trajectories is performed with hidden Markov model, and generation of a generalized trajectory is achieved by using the concept of key points. Identification of the key points is based on significant changes in position and velocity in the demonstrated trajectories. The resulting sequences of trajectory key points are temporally aligned using the multidimensional dynamic time warping algorithm, and a generalized trajectory is obtained by smoothing spline interpolation of the clustered key points. The principal advantage of our proposed approach is utilization of the trajectory key points from all demonstrations for generation of a generalized trajectory. In addition, variability of the key points' clusters across the demonstrated set is employed for assigning weighting coefficients, resulting in a generalization procedure which accounts for the relevance of reproduction of different parts of the trajectories. The approach is verified experimentally for trajectories with two different levels of complexity.
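Several of the movement-modeling approaches above rely on temporally aligning a recorded repetition against a reference before any distance or quality measure is computed, typically with dynamic time warping. The sketch below is a minimal DTW distance between two multivariate trajectories on synthetic stand-in data (a clean "reference" repetition and a slower, noisier "patient" repetition); the signals and dimensions are made up for illustration.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two multivariate sequences.

    seq_a: (n, d) array, seq_b: (m, d) array; Euclidean frame-to-frame cost.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

rng = np.random.default_rng(4)
# Stand-in joint-angle trajectories of different lengths for the same movement.
t_ref = np.linspace(0, 1, 100)
t_pat = np.linspace(0, 1, 140)
reference = np.column_stack([np.sin(2 * np.pi * t_ref), np.cos(2 * np.pi * t_ref)])
patient = np.column_stack([np.sin(2 * np.pi * t_pat), np.cos(2 * np.pi * t_pat)])
patient += 0.05 * rng.standard_normal(patient.shape)
print("DTW distance (lower = closer to the reference):", dtw_distance(reference, patient))
```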
Abstract of query paper
Cite abstracts
943
942
The article proposes a new framework for assessment of physical rehabilitation exercises based on a deep learning approach. The objective of the framework is automated quantification of patient performance in completing prescribed rehabilitation exercises, based on captured whole-body joint trajectories. The main components of the framework are metrics for measuring movement performance, scoring functions for mapping the performance metrics into numerical scores of movement quality, and deep neural network models for regressing quality scores of input movements via supervised learning. Furthermore, an overview of the existing methods for modeling and evaluation of rehabilitation movements is presented, encompassing various distance functions, dimensionality-reduction techniques, and movement models employed for this problem in prior studies. To the best of our knowledge, this is the first work that implements deep neural networks for assessment of rehabilitation performance. Multiple deep network architectures are repurposed for the task at hand and are validated on a dataset of rehabilitation exercises.
We propose in this paper a fully automated deep model, which learns to classify human actions without using any prior knowledge. The first step of our scheme, based on the extension of Convolutional Neural Networks to 3D, automatically learns spatio-temporal features. A Recurrent Neural Network is then trained to classify each sequence considering the temporal evolution of the learned features for each timestep. Experimental results on the KTH dataset show that the proposed approach outperforms existing deep models, and gives comparable results with the best related works. Understanding human actions in wild videos is an important task with a broad range of applications. In this paper we propose a novel approach named Hierarchical Attention Network (HAN), which enables the incorporation of static spatial information, short-term motion information and long-term video temporal structures for complex human action understanding. Compared to recent convolutional neural network based approaches, HAN has the following advantages: (1) HAN can efficiently capture video temporal structures in a longer range; (2) HAN is able to reveal temporal transitions between frame chunks with different time steps, i.e. it explicitly models the temporal transitions between frames as well as video segments and (3) with a multiple step spatial temporal attention mechanism, HAN automatically learns important regions in video frames and temporal segments in the video. The proposed model is trained and evaluated on the standard video action benchmarks, i.e., UCF-101 and HMDB-51, and it significantly outperforms the state-of-the-art. Human action recognition is an important task in computer vision. Extracting discriminative spatial and temporal features to model the spatial and temporal evolutions of different actions plays a key role in accomplishing this task. In this work, we propose an end-to-end spatial and temporal attention model for human action recognition from skeleton data. We build our model on top of the Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM), which learns to selectively focus on discriminative joints of skeleton within each frame of the inputs and pays different levels of attention to the outputs of different frames. Furthermore, to ensure effective training of the network, we propose a regularized cross-entropy loss to drive the model learning process and develop a joint training strategy accordingly. Experimental results demonstrate the effectiveness of the proposed model, both on the small human action recognition data set of SBU and the currently largest NTU dataset. Generative models of 3D human motion are often restricted to a small number of activities and can therefore not generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen, motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. 
To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction even though those methods use action-specific training data. Our results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction. Deep Recurrent Neural Network architectures, though remarkably capable at modeling sequences, lack an intuitive high-level spatio-temporal structure. Yet many problems in computer vision inherently have an underlying high-level structure and can benefit from it. Spatio-temporal graphs are a popular tool for imposing such high-level intuitions in the formulation of real world problems. In this paper, we propose an approach for combining the power of high-level spatio-temporal graphs with the sequence-learning success of Recurrent Neural Networks (RNNs). We develop a scalable method for casting an arbitrary spatio-temporal graph as a rich RNN mixture that is feedforward, fully differentiable, and jointly trainable. The proposed method is generic and principled as it can be used for transforming any spatio-temporal graph through employing a certain set of well-defined steps. The evaluations of the proposed approach on a diverse set of problems, ranging from modeling human motion to object interactions, show improvement over the state of the art by a large margin. We expect this method to empower new approaches to problem formulation through high-level spatio-temporal graphs and Recurrent Neural Networks. Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. Objective: The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). 
The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient’s exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient’s physician with recommendations for improvement. Methods: The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. Results: The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject’s performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. Conclusion: The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs the recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, by exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons. The ability to accurately observe human motion and identify human activities is essential for developing automatic rehabilitation and sports training systems. In this paper, large-scale exercise motion data obtained from a forearm-worn wearable sensor are classified with a convolutional neural network (CNN). Time series data consisting of accelerometer and orientation measurements are formatted as "images", allowing the CNN to automatically extract discriminative features. The resulting CNN classifies 50 gym exercises with 92.14% accuracy. A comparative study on the effects of image formatting and different CNN architectures is also presented. We propose the Encoder-Recurrent-Decoder (ERD) model for recognition and prediction of human body pose in videos and motion capture. The ERD model is a recurrent neural network that incorporates nonlinear encoder and decoder networks before and after recurrent layers. We test instantiations of ERD architectures in the tasks of motion capture (mocap) generation, body pose labeling and body pose forecasting in videos. Our model handles mocap training data across multiple subjects and activity domains, and synthesizes novel motions while avoiding drifting for long periods of time. For human pose labeling, ERD outperforms a per frame body part detector by resolving left-right body part confusions. For video pose forecasting, ERD predicts body joint displacements across a temporal horizon of 400ms and outperforms a first order motion model based on optical flow. 
ERDs extend previous Long Short Term Memory (LSTM) models in the literature to jointly learn representations and their dynamics. Our experiments show such representation learning is crucial for both labeling and prediction in space-time. We find this is a distinguishing feature between the spatio-temporal visual domain in comparison to 1D text, speech or handwriting, where straightforward hard coded representations have shown excellent results when directly combined with recurrent units. Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art hand-crafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis. Mobile sensing and computing applications usually require time-series inputs from sensors, such as accelerometers, gyroscopes, and magnetometers. Some applications, such as tracking, can use sensed acceleration and rate of rotation to calculate displacement based on physical system models. Other applications, such as activity recognition, extract manually designed features from sensor inputs for classification. Such applications face two challenges. On one hand, on-device sensor measurements are noisy. For many mobile applications, it is hard to find a distribution that exactly describes the noise in practice. Unfortunately, calculating target quantities based on physical system and noise models is only as accurate as the noise assumptions. Similarly, in classification applications, although manually designed features have proven to be effective, it is not always straightforward to find the most robust features to accommodate diverse sensor noise patterns and heterogeneous user behaviors. To this end, we propose DeepSense, a deep learning framework that directly addresses the aforementioned noise and feature customization challenges in a unified manner. DeepSense integrates convolutional and recurrent neural networks to exploit local interactions among similar mobile sensors, merge local interactions of different sensory modalities into global interactions, and extract temporal relationships to model signal dynamics. DeepSense thus provides a general signal estimation and classification framework that accommodates a wide range of applications. 
We demonstrate the effectiveness of DeepSense using three representative and challenging tasks: car tracking with motion sensors, heterogeneous human activity recognition, and user identification with biometric motion analysis. DeepSense significantly outperforms the state-of-the-art methods for all three tasks. In addition, we show that DeepSense is feasible to implement on smartphones and embedded devices thanks to its moderate energy consumption and low latency.
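The convolutional-plus-recurrent pattern that the DeepSense and DeepConvLSTM abstracts above describe can be summarized in a short sketch. The following is a generic illustration in PyTorch, not the authors' exact architectures; the module name, layer sizes, and input shape are assumptions made only for the example.

# Minimal sketch: a convolutional front-end extracts local features from windows of
# multi-channel sensor data, and a recurrent layer models temporal dynamics across windows.
import torch
import torch.nn as nn

class ConvRecurrentHAR(nn.Module):
    def __init__(self, n_channels=6, n_classes=6, hidden=64):
        super().__init__()
        # 1-D convolutions over the time axis of each sensor window
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # one feature vector per window
        )
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, n_windows, n_channels, window_len)
        b, w, c, t = x.shape
        feats = self.conv(x.reshape(b * w, c, t)).squeeze(-1).reshape(b, w, -1)
        out, _ = self.rnn(feats)              # temporal dynamics across windows
        return self.head(out[:, -1])          # classify from the last time step

logits = ConvRecurrentHAR()(torch.randn(8, 10, 6, 128))  # 8 sequences of 10 windows each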
Abstract of query paper
Cite abstracts
944
943
The article proposes a new framework for assessment of physical rehabilitation exercises based on a deep learning approach. The objective of the framework is automated quantification of patient performance in completing prescribed rehabilitation exercises, based on captured whole-body joint trajectories. The main components of the framework are metrics for measuring movement performance, scoring functions for mapping the performance metrics into numerical scores of movement quality, and deep neural network models for regressing quality scores of input movements via supervised learning. Furthermore, an overview of the existing methods for modeling and evaluation of rehabilitation movements is presented, encompassing various distance functions, dimensionality-reduction techniques, and movement models employed for this problem in prior studies. To the best of our knowledge, this is the first work that implements deep neural networks for assessment of rehabilitation performance. Multiple deep network architectures are repurposed for the task at hand and are validated on a dataset of rehabilitation exercises.
In this paper, we describe methods for assessment of exercise quality using body-worn tri-axial accelerometers. We assess exercise quality by building a classifier that labels incorrect exercises. The incorrect performances are divided into a number of classes of errors as defined by a physical therapist. We focus on exercises commonly prescribed for knee osteoarthritis: standing hamstring curl, reverse hip abduction, and lying straight leg raise. The methods presented here will form the basis for an at-home rehabilitation device that will recognize errors in patient exercise performance, provide appropriate feedback on the performance, and motivate the patient to continue the prescribed regimen. Computerized recognition of the home based physiotherapy exercises has many benefits and it has attracted considerable interest among the computer vision community. However, most methods in the literature view this task as a special case of motion recognition. In contrast, we propose to employ the three main components of a physiotherapy exercise (the motion patterns, the stance knowledge, and the exercise object) as different recognition tasks and embed them separately into the recognition system. The low level information about each component is gathered using machine learning methods. Then, we use a generative Bayesian network to recognize the exercise types by combining the information from these sources at an abstract level, which takes advantage of domain knowledge for a more robust system. Finally, a novel postprocessing step is employed to estimate the exercise repetition counts. The performance evaluation of the system is conducted with a new dataset which contains RGB (red, green, and blue) and depth videos of home-based exercise sessions for commonly applied shoulder and knee exercises. The proposed system works without any body-part segmentation, body-part tracking, joint detection, and temporal segmentation methods. In the end, favorable exercise recognition rates and encouraging results on the estimation of repetition counts are obtained. Recent advances of robotic mechanical devices enable us to measure a subject's performance in an objective and precise manner. The main issue of using such devices is how to represent huge experimental data compactly in order to analyze and compare them with clinical data efficiently. In this paper, we choose a subset of features from real-time experimental data and build a classifier model to assess stroke patients' upper limb functionality. We compare our model with combinations of different classifiers and ensemble schemes, showing that it outperforms competitors. We also demonstrate that our results from experimental data are consistent with clinical information, and can capture changes of upper-limb functionality over time.
Abstract of query paper
Cite abstracts
945
944
The article proposes a new framework for assessment of physical rehabilitation exercises based on a deep learning approach. The objective of the framework is automated quantification of patient performance in completing prescribed rehabilitation exercises, based on captured whole-body joint trajectories. The main components of the framework are metrics for measuring movement performance, scoring functions for mapping the performance metrics into numerical scores of movement quality, and deep neural network models for regressing quality scores of input movements via supervised learning. Furthermore, an overview of the existing methods for modeling and evaluation of rehabilitation movements is presented, encompassing various distance functions, dimensionality-reduction techniques, and movement models employed for this problem in prior studies. To the best of our knowledge, this is the first work that implements deep neural networks for assessment of rehabilitation performance. Multiple deep network architectures are repurposed for the task at hand and are validated on a dataset of rehabilitation exercises.
This paper reports on an optimum dynamic programming (DP) based time-normalization algorithm for spoken word recognition. First, a general principle of time-normalization is given using a time-warping function. Then, two time-normalized distance definitions, called symmetric and asymmetric forms, are derived from the principle. These two forms are compared with each other through theoretical discussions and experimental studies. The symmetric form algorithm superiority is established. A new technique, called slope constraint, is successfully introduced, in which the warping function slope is restricted so as to improve discrimination between words in different categories. The effective slope constraint characteristic is qualitatively analyzed, and the optimum slope constraint condition is determined through experiments. The optimized algorithm is then extensively subjected to experimental comparison with various DP-algorithms, previously applied to spoken word recognition by different research groups. The experiment shows that the present algorithm gives no more than about two-thirds of the errors, even compared to the best conventional algorithm. Most formal rehabilitation facilities are situated in a hospital or care center setting, which may not always be conveniently accessible for patients, especially those in geographically isolated areas. Home-based rehabilitation has potential to offer greater accessibility and thus increase consistent uptake. In addition, the exercise performed in conventional rehabilitation contexts may be insufficient to ensure the patient's speedy recovery, with complementary rehabilitation exercises at home required to make a difference. The goal is to provide effective home-based rehabilitation offering outcomes similar to those obtained through hospital-based rehabilitation under the supervision of an occupational therapist. This paper presents the development of a Kinect-based system for ensuring home-based rehabilitation using a Dynamic Time Warping (DTW) algorithm and fuzzy logic. The ultimate goal is to assist patients in conducting safe and effective home-based rehabilitation without the immediate supervision of a physician. Mobility improvement for patients is one of the primary concerns of physiotherapy rehabilitation. Providing the physiotherapist and the patient with a quantified and objective measure of progress can be beneficial for monitoring the patient's performance. In this paper, two approaches are introduced for quantifying patient performance. Both approaches formulate a distance between patient data and the healthy population as the measure of performance. Distance measures are defined to capture the performance of one repetition of an exercise or multiple repetitions of the same exercise. To capture patient progress across multiple exercises, a quality measure and overall score are defined based on the distance measures and are used to quantify the overall performance for each session. The effectiveness of these measures in detecting patient progress is evaluated on rehabilitation data recorded from patients recovering from knee or hip replacement surgery. The results show that the proposed measures are able to capture the trend of patient improvement over the course of rehabilitation. The trend of improvement is not monotonic and differs between patients.
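As a concrete illustration of the time-normalized distance that the DTW-based methods above build on, here is a minimal DTW recursion in Python. It omits the slope constraints, windowing, symmetric step weighting, and multi-dimensional joint features that the cited works add on top, so it is a sketch rather than a reproduction of any of them.

# Basic dynamic-time-warping distance between two 1-D sequences.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # DP recursion over the three admissible predecessor cells
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)   # rough length normalization

print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 2, 1, 0]))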
Abstract of query paper
Cite abstracts
946
945
The article proposes a new framework for assessment of physical rehabilitation exercises based on a deep learning approach. The objective of the framework is automated quantification of patient performance in completing prescribed rehabilitation exercises, based on captured whole-body joint trajectories. The main components of the framework are metrics for measuring movement performance, scoring functions for mapping the performance metrics into numerical scores of movement quality, and deep neural network models for regressing quality scores of input movements via supervised learning. Furthermore, an overview of the existing methods for modeling and evaluation of rehabilitation movements is presented, encompassing various distance functions, dimensionality-reduction techniques, and movement models employed for this problem in prior studies. To the best of our knowledge, this is the first work that implements deep neural networks for assessment of rehabilitation performance. Multiple deep network architectures are repurposed for the task at hand and are validated on a dataset of rehabilitation exercises.
Movement primitive segmentation enables long sequences of human movement observation data to be segmented into smaller components, termed movement primitives, to facilitate movement identification, modeling, and learning. It has been applied to exercise monitoring, gesture recognition, human–machine interaction, and robot imitation learning. This paper proposes a segmentation framework to categorize and compare different segmentation algorithms considering segment definitions, data sources, application-specific requirements, algorithm mechanics, and validation techniques. The framework is applied to human motion segmentation methods by grouping them into online, semionline, and offline approaches. Among the online approaches, distance-based methods provide the best performance, while stochastic dynamic models work best in the semionline and offline settings. However, most algorithms to date are tested with small datasets, and algorithm generalization across participants and to movement changes remains largely untested. In this paper, a Hidden Semi-Markov Model (HSMM) based approach is proposed to evaluate and monitor body motion during a rehabilitation training program. The approach extracts clinically relevant motion features from skeleton joint trajectories, acquired by the RGB-D camera, and provides a score for the subject’s performance. The approach combines different aspects of rule-based and template-based methods. The features have been defined by clinicians as exercise descriptors and are then assessed by an HSMM, trained upon an exemplar motion sequence. The reliability of the proposed approach is studied by evaluating its correlation with both a clinical assessment and a Dynamic Time Warping (DTW) algorithm, while healthy and neurologically disabled people performed physical exercises. With respect to the discrimination between healthy and pathological conditions, the HSMM-based method correlates better with the physician’s score than DTW. The study supports the use of HSMMs to assess motor performance providing quantitative feedback to physiotherapists and patients. This result is particularly appropriate and useful for a remote assessment in the home. Objective: The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient’s exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient’s physician with recommendations for improvement. Methods: The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. 
Results: The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject’s performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. Conclusion: The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs the recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, by exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons.
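The mean log-likelihood metric mentioned above can be written, for an observed joint-trajectory sequence x_1, ..., x_T scored against a K-component Gaussian mixture, roughly as follows; the exact way the mixture parameters are conditioned on past frames is specific to the cited model and only sketched here:

\mathcal{L}(x_{1:T}) \;=\; \frac{1}{T}\sum_{t=1}^{T}\log\sum_{k=1}^{K}\pi_{k,t}\,\mathcal{N}\!\left(x_t \mid \mu_{k,t},\,\Sigma_{k,t}\right)

where the weights \pi_{k,t}, means \mu_{k,t}, and covariances \Sigma_{k,t} are produced by the mixture density subnet; higher values indicate performances that are more consistent with the reference motions.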
Abstract of query paper
Cite abstracts
947
946
Motivated by concerns that machine learning algorithms may introduce significant bias in classification models, developing fair classifiers has become an important problem in machine learning research. One important paradigm towards this has been providing algorithms for adversarially learning fair classifiers (, 2018; , 2018). We formulate the adversarial learning problem as a multi-objective optimization problem and find the fair model using gradient descent-ascent algorithm with a modified gradient update step, inspired by the approach of , 2018. We provide theoretical insight and guarantees that formalize the heuristic arguments presented previously towards taking such an approach. We test our approach empirically on the Adult dataset and synthetic datasets and compare against state of the art algorithms (, 2018; , 2018; , 2017). The results show that our models and algorithms have comparable or better accuracy than other algorithms while performing better in terms of fairness, as measured using statistical rate or false discovery rate.
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
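The minimax game described in this abstract corresponds to the value function

\min_G \max_D V(D,G) \;=\; \mathbb{E}_{x\sim p_{\text{data}}(x)}\left[\log D(x)\right] \;+\; \mathbb{E}_{z\sim p_z(z)}\left[\log\bigl(1 - D(G(z))\bigr)\right]

where p_z is the prior over the generator's input noise; at the unique solution mentioned above, the generator distribution matches p_data and D(x) = 1/2 everywhere.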
Abstract of query paper
Cite abstracts
948
947
Motivated by concerns that machine learning algorithms may introduce significant bias in classification models, developing fair classifiers has become an important problem in machine learning research. One important paradigm towards this has been providing algorithms for adversarially learning fair classifiers (, 2018; , 2018). We formulate the adversarial learning problem as a multi-objective optimization problem and find the fair model using gradient descent-ascent algorithm with a modified gradient update step, inspired by the approach of , 2018. We provide theoretical insight and guarantees that formalize the heuristic arguments presented previously towards taking such an approach. We test our approach empirically on the Adult dataset and synthetic datasets and compare against state of the art algorithms (, 2018; , 2018; , 2017). The results show that our models and algorithms have comparable or better accuracy than other algorithms while performing better in terms of fairness, as measured using statistical rate or false discovery rate.
Machine learning is a tool for building models that accurately represent input training data. When undesired biases concerning demographic groups are in the training data, well-trained models will reflect those biases. We present a framework for mitigating such biases by including a variable for the group of interest and simultaneously learning a predictor and an adversary. The input to the network X, here text or census data, produces a prediction Y, such as an analogy completion or income bracket, while the adversary tries to model a protected variable Z, here gender or zip code. The objective is to maximize the predictor's ability to predict Y while minimizing the adversary's ability to predict Z. Applied to analogy completion, this method results in accurate predictions that exhibit less evidence of stereotyping Z. When applied to a classification task using the UCI Adult (Census) Dataset, it results in a predictive model that does not lose much accuracy while achieving very close to equality of odds (Hardt, et al, 2016). The method is flexible and applicable to multiple definitions of fairness as well as a wide range of gradient-based learning models, including both regression and classification tasks. Recidivism prediction scores are used across the USA to determine sentencing and supervision for hundreds of thousands of inmates. One such generator of recidivism prediction scores is Northpointe's Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) score, used in states like California and Florida, which past research has shown to be biased against black inmates according to certain measures of fairness. To counteract this racial bias, we present an adversarially-trained neural network that predicts recidivism and is trained to remove racial bias. When comparing the results of our model to COMPAS, we gain predictive accuracy and get closer to achieving two out of three measures of fairness: parity and equality of odds. Our model can be generalized to any prediction and demographic. This piece of research contributes an example of scientific replication and simplification in a high-stakes real-world application like recidivism prediction.
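A common way to realize the predictor-versus-adversary objective described above, and the projection-based update that the query abstract says it builds on, is to update the predictor weights W in the direction

\nabla_W L_P \;-\; \operatorname{proj}_{\nabla_W L_A}\nabla_W L_P \;-\; \alpha\,\nabla_W L_A

where L_P is the predictor's loss for Y, L_A is the adversary's loss for recovering the protected variable Z, and \alpha trades accuracy against fairness. This is stated here as a schematic form from the adversarial-debiasing literature; the exact update and hyperparameters used in the cited works may differ in details.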
Abstract of query paper
Cite abstracts
949
948
Motivated by concerns that machine learning algorithms may introduce significant bias in classification models, developing fair classifiers has become an important problem in machine learning research. One important paradigm towards this has been providing algorithms for adversarially learning fair classifiers (, 2018; , 2018). We formulate the adversarial learning problem as a multi-objective optimization problem and find the fair model using gradient descent-ascent algorithm with a modified gradient update step, inspired by the approach of , 2018. We provide theoretical insight and guarantees that formalize the heuristic arguments presented previously towards taking such an approach. We test our approach empirically on the Adult dataset and synthetic datasets and compare against state of the art algorithms (, 2018; , 2018; , 2017). The results show that our models and algorithms have comparable or better accuracy than other algorithms while performing better in terms of fairness, as measured using statistical rate or false discovery rate.
We study the close connections between game theory, on-line prediction and boosting. After a brief review of game theory, we describe an algorithm for learning to play repeated games based on the on-line prediction methods of Littlestone and Warmuth. The analysis of this algorithm yields a simple proof of von Neumann’s famous minmax theorem, as well as a provable method of approximately solving a game. We then show that the on-line prediction model is obtained by applying this gameplaying algorithm to an appropriate choice of game and that boosting is obtained by applying the same algorithm to the “dual” of this game. We present a systematic approach for achieving fairness in a binary classification setting. While we focus on two well-known quantitative definitions of fairness, our approach encompasses many other previously studied definitions as special cases. The key idea is to reduce fair classification to a sequence of cost-sensitive classification problems, whose solutions yield a randomized classifier with the lowest (empirical) error subject to the desired constraints. We introduce two reductions that work for any representation of the cost-sensitive classifier and compare favorably to prior baselines on a variety of data sets, while overcoming several of their disadvantages.
Abstract of query paper
Cite abstracts
950
949
We present a decentralized algorithm to achieve segregation into an arbitrary number of groups with swarms of autonomous robots. The distinguishing feature of our approach is in the minimalistic assumptions on which it is based. Specifically, we assume that (i) Each robot is equipped with a ternary sensor capable of detecting the presence of a single nearby robot, and, if that robot is present, whether or not it belongs to the same group as the sensing robot; (ii) The robots move according to a differential drive model; and (iii) The structure of the control system is purely reactive, and it maps directly the sensor readings to the wheel speeds with a simple 'if' statement. We present a thorough analysis of the parameter space that enables this behavior to emerge, along with conditions for guaranteed convergence and a study of non-ideal aspects in the robot design.
Leptothorax unifasciatus ant colonies occupy flat crevices in rocks in which their brood is kept in a single cluster. In artificial nests made from two glass plates sandwiched together, designed to mimic the general proportions of their nest sites in the field, such colonies arrange their brood in a distinct pattern. These patterns may influence the priority with which different brood are tended, and may therefore influence both the division of labour and colony demography. Different brood stages are arranged in concentric rings in a single cluster centred around the eggs and micro-larvae. Successively larger larvae are arranged in progressive bands away from the centre of the brood cluster. However, the largest and oldest brood items, the prepupae and pupae, are placed in an intermediate position between the largest and most peripheral larvae and the larvae of medium size. Dirichlet tessellations are used to analyze these patterns and show that the tile areas, the area closer to each item than its neighbours, allocated to each type of item increase with distance from the centre of the brood cluster. There is a significant positive correlation between such tile areas and the estimated metabolic rates of each type of brood item. The ants may be creating a “domain of care” around each brood item proportional to that item's needs. If nurse workers tend to move to the brood item whose tile they happen to be within when they have care to donate, they may apportion such care according to the needs of each type of brood. When colonies emigrate to new nests they rapidly recreate these characteristic brood patterns. The establishment and maintenance of precisely organized tissues requires the formation of sharp borders between distinct cell populations. The maintenance of segregated cell populations is also required for tissue homeostasis in the adult, and deficiencies in segregation underlie the metastatic spreading of tumor cells. Three classes of mechanisms that underlie cell segregation and border formation have been uncovered. The first involves differences in cadherin-mediated cell–cell adhesion that establishes interfacial tension at the border between distinct cell populations. A second mechanism involves the induction of actomyosin-mediated contraction by intercellular signaling, such that cortical tension is generated at the border. Third, activation of Eph receptors and ephrins can lead to both decreased adhesion by triggering cleavage of E-cadherin, and to repulsion of cells by regulation of the actin cytoskeleton, thus preventing intermingling between cell populations. These mechanisms play crucial roles at distinct boundaries during development, and alterations in cadherin or Eph ephrin expression have been implicated in tumor metastasis.
Abstract of query paper
Cite abstracts
951
950
We present a decentralized algorithm to achieve segregation into an arbitrary number of groups with swarms of autonomous robots. The distinguishing feature of our approach is in the minimalistic assumptions on which it is based. Specifically, we assume that (i) Each robot is equipped with a ternary sensor capable of detecting the presence of a single nearby robot, and, if that robot is present, whether or not it belongs to the same group as the sensing robot; (ii) The robots move according to a differential drive model; and (iii) The structure of the control system is purely reactive, and it maps directly the sensor readings to the wheel speeds with a simple 'if' statement. We present a thorough analysis of the parameter space that enables this behavior to emerge, along with conditions for guaranteed convergence and a study of non-ideal aspects in the robot design.
We introduce a framework, called “physicomimetics,” that provides distributed control of large collections of mobile physical agents in sensor networks. The agents sense and react to virtual forces, which are motivated by natural physics laws. Thus, physicomimetics is founded upon solid scientific principles. Furthermore, this framework provides an effective basis for self-organization, fault-tolerance, and self-repair. Three primary factors distinguish our framework from others that are related: an emphasis on minimality (e.g., cost effectiveness of large numbers of agents implies a need for expendable platforms with few sensors), ease of implementation, and run-time efficiency. Examples are shown of how this framework has been applied to construct various regular geometric lattice configurations (distributed sensing grids), as well as dynamic behavior for perimeter defense and surveillance. Analyses are provided that facilitate system understanding and predictability, including both qualitative and quantitative analyses of potential energy and a system phase transition. Physicomimetics has been implemented both in simulation and on a team of seven mobile robots. Specifics of the robotic embodiment are presented in the paper.
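The virtual forces that drive a physicomimetics-style controller can be sketched as below. This is an illustrative gravity-like force law with clipping, written in Python; the particular force law, gains, and clipping used in the cited work are not reproduced here and the function is an assumption made only for the example.

import numpy as np

def virtual_force(p_i, p_j, G=1.0, mass=1.0, r_des=1.0, f_max=1.0):
    # force exerted on agent i by agent j
    d = np.asarray(p_j, dtype=float) - np.asarray(p_i, dtype=float)
    r = np.linalg.norm(d) + 1e-9
    f = min(G * mass * mass / r**2, f_max)   # inverse-square magnitude, clipped
    # attract when farther apart than the desired spacing, repel when closer,
    # so that agents settle into a regular lattice
    sign = 1.0 if r > r_des else -1.0
    return sign * f * d / r

print(virtual_force([0.0, 0.0], [0.5, 0.5]))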
Abstract of query paper
Cite abstracts
952
951
We present a decentralized algorithm to achieve segregation into an arbitrary number of groups with swarms of autonomous robots. The distinguishing feature of our approach is in the minimalistic assumptions on which it is based. Specifically, we assume that (i) Each robot is equipped with a ternary sensor capable of detecting the presence of a single nearby robot, and, if that robot is present, whether or not it belongs to the same group as the sensing robot; (ii) The robots move according to a differential drive model; and (iii) The structure of the control system is purely reactive, and it maps directly the sensor readings to the wheel speeds with a simple 'if' statement. We present a thorough analysis of the parameter space that enables this behavior to emerge, along with conditions for guaranteed convergence and a study of non-ideal aspects in the robot design.
When a mixture of particles with different attributes undergoes vibration, a segregation pattern is often observed. For example, in muesli cereal packs, the largest particles---the Brazil nuts---tend to end up at the top. For this reason, the phenomenon is known as the Brazil nut effect. In previous research, an algorithm inspired by this effect was designed to produce segregation patterns in swarms of simulated agents that move on a horizontal plane. In this paper, we adapt this algorithm for implementation on robots with directional vision. We use the e-puck robot as a platform to test our implementation. In a swarm of e-pucks, different robots mimic disks of different sizes (larger than their physical dimensions). The motion of every robot is governed by a combination of three components: (i) attraction towards a point, which emulates the effect of a gravitational pull, (ii) random motion, which emulates the effect of vibration, and (iii) repulsion from nearby robots, which emulates the effect of collisions between disks. The algorithm does not require robots to discriminate between other robots; yet, it is capable of forming annular structures where the robots in each annulus represent disks of identical size. We report on a set of experiments performed with a group of 20 physical e-pucks. The results obtained in 100 trials of 20 minutes each show that the percentage of incorrectly-ordered pairs of disks from different groups decreases as the size ratio of disks in different groups is increased. In our experiments, this percentage was, on average, below 0.5% for size ratios from 3.0 to 5.0. Moreover, for these size ratios, all segregation errors observed were due to mechanical failures that caused robots to stop moving. We study a simple algorithm inspired by the Brazil nut effect for achieving segregation in a swarm of mobile robots. The algorithm lets each robot mimic a particle of a certain size and broadcast this information locally. The motion of each particle is controlled by three reactive behaviors: random walk, taxis, and repulsion by other particles. The segregation task requires the swarm to self-organize into a spatial arrangement in which the robots are ranked by particle size (e.g., annular structures or stripes).
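The three-component motion rule described in these abstracts (taxis toward a centre, random walk, and size-dependent repulsion) can be sketched as a single velocity update. The gains and the exact repulsion law below are illustrative assumptions, not the cited implementation.

import numpy as np

def velocity(pos, centre, neighbours, own_size, k_c=0.3, k_r=1.0, k_n=0.1, rng=np.random):
    v = k_c * (centre - pos)                       # taxis: emulated "gravity"
    v = v + k_n * rng.uniform(-1, 1, size=2)       # random walk: emulated vibration
    for n_pos, n_size in neighbours:               # repulsion: emulated collisions
        d = pos - n_pos
        r = np.linalg.norm(d) + 1e-9
        if r < own_size + n_size:                  # only within the virtual disk radius
            v = v + k_r * (own_size + n_size - r) * d / r
    return v

v = velocity(np.array([1.0, 0.0]), np.zeros(2), [(np.array([1.2, 0.1]), 0.5)], 0.8)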
Abstract of query paper
Cite abstracts
953
952
We present a decentralized algorithm to achieve segregation into an arbitrary number of groups with swarms of autonomous robots. The distinguishing feature of our approach is in the minimalistic assumptions on which it is based. Specifically, we assume that (i) Each robot is equipped with a ternary sensor capable of detecting the presence of a single nearby robot, and, if that robot is present, whether or not it belongs to the same group as the sensing robot; (ii) The robots move according to a differential drive model; and (iii) The structure of the control system is purely reactive, and it maps directly the sensor readings to the wheel speeds with a simple 'if' statement. We present a thorough analysis of the parameter space that enables this behavior to emerge, along with conditions for guaranteed convergence and a study of non-ideal aspects in the robot design.
There are several examples in natural systems that exhibit the self-organizing behavior of segregation when different types of units interact with each other. One of the best examples is a system of biological cells of heterogeneous types that has the ability to self-organize into specific formations, form different types of organs and, ultimately, develop into a living organism. Previous research in this area has indicated that such segregations in biological cells and tissues are made possible because of the differences in adhesivity between various types of cells or tissues. Inspired by this differential adhesivity model, this technical note presents a decentralized approach utilizing differential artificial potential to achieve the segregation behavior in a swarm of heterogeneous robotic agents. The method is based on the proposition that agents experience different magnitudes of potential while interacting with agents of different types. Stability analysis of the system with the proposed approach in the Lyapunov sense is carried out in this technical note. Extensive simulations and analytical investigations suggest that the proposed method would lead a population of two types of agents to a segregated configuration.
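A schematic form of the differential-potential idea described above is to let each agent descend the gradient of pairwise potentials whose gain depends on whether the neighbouring agent is of the same type:

\dot{q}_i \;=\; -\sum_{j\neq i}\nabla_{q_i}U_{ij}\bigl(\lVert q_i-q_j\rVert\bigr),
\qquad
U_{ij} \;=\;
\begin{cases}
k_{s}\,\phi\bigl(\lVert q_i-q_j\rVert\bigr) & \text{if } i \text{ and } j \text{ are of the same type},\\
k_{d}\,\phi\bigl(\lVert q_i-q_j\rVert\bigr) & \text{otherwise},
\end{cases}

with k_d \neq k_s so that intra-group and inter-group interactions differ in strength. The particular potential \phi, the gains, and the stability conditions are specific to the cited analysis and are not reproduced here.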
Abstract of query paper
Cite abstracts
954
953
Near Earth Asteroids (NEAs) are discovered daily, mainly by few major surveys, nevertheless many of them remain unobserved for years, even decades. Even so, there is room for new discoveries, including those submitted by smaller projects and amateur astronomers. Besides the well-known surveys that have their own automated system of asteroid detection, there are only a few software solutions designed to help amateurs and mini-surveys in NEAs discovery. Some of these obtain their results based on the blink method in which a set of reduced images are shown one after another and the astronomer has to visually detect real moving objects in a series of images. This technique becomes harder with the increase in size of the CCD cameras. Aiming to replace manual detection we propose an automated pipeline prototype for asteroids detection, written in Python under Linux, which calls some 3rd party astrophysics libraries.
In this paper we consider the problem of finding sets of points that conform to a given underlying model from within a dense, noisy set of observations. This problem is motivated by the task of efficiently linking faint asteroid detections, but is applicable to a range of spatial queries. We survey current tree-based approaches, showing a trade-off exists between single tree and multiple tree algorithms. To this end, we present a new type of multiple tree algorithm that uses a variable number of trees to exploit the advantages of both approaches. We empirically show that this algorithm performs well using both simulated and astronomical data. We describe the Pan-STARRS Moving Object Processing System (MOPS), a modern software package that produces automatic asteroid discoveries and identifications from catalogs of transient detections from next-generation astronomical survey telescopes. MOPS achieves >99.5% efficiency in producing orbits from a synthetic but realistic population of asteroids whose measurements were simulated for a Pan-STARRS4-class telescope. Additionally, using a nonphysical grid population, we demonstrate that MOPS can detect populations of currently unknown objects such as interstellar asteroids. MOPS has been adapted successfully to the prototype Pan-STARRS1 telescope despite differences in expected false detection rates, fill-factor loss, and relatively sparse observing cadence compared to a hypothetical Pan-STARRS4 telescope and survey. MOPS remains highly efficient at detecting objects but drops to 80% efficiency at producing orbits. This loss is primarily due to configurable MOPS processing limits that a...
Abstract of query paper
Cite abstracts
955
954
Near Earth Asteroids (NEAs) are discovered daily, mainly by few major surveys, nevertheless many of them remain unobserved for years, even decades. Even so, there is room for new discoveries, including those submitted by smaller projects and amateur astronomers. Besides the well-known surveys that have their own automated system of asteroid detection, there are only a few software solutions designed to help amateurs and mini-surveys in NEAs discovery. Some of these obtain their results based on the blink method in which a set of reduced images are shown one after another and the astronomer has to visually detect real moving objects in a series of images. This technique becomes harder with the increase in size of the CCD cameras. Aiming to replace manual detection we propose an automated pipeline prototype for asteroids detection, written in Python under Linux, which calls some 3rd party astrophysics libraries.
In this work we present a system for autonomous discovery of asteroids, space trash and other moving objects. This system performs astronomical image data reduction based on an image processing pipeline. The processing steps of the pipeline include astrometric and photometric reduction, sequence alignment, moving object detection and astronomical analysis, making the system capable of discovering and monitoring previously unknown moving objects in the night sky. ESA's 1-m telescope on Tenerife, the Optical Ground Station (OGS), has been used for observing NEOs since 2009. Part of the observational activity is the demonstration and test of survey observation strategies. During the observations, a total of 11 near-Earth objects have been discovered in about 360 h of observing time from 2009 to 2014. The survey observations are performed by imaging the same area in the sky 3 or 4 times within a 15–20 min time interval. A software robot analyses the images, searching for moving objects. The survey strategies and related data processing algorithms are described in this paper.
Abstract of query paper
Cite abstracts
956
955
Tuning a pre-trained network is commonly thought to improve data efficiency. However, Kaiming He et al. have called into question the utility of pre-training by showing that training from scratch can often yield similar performance, should the model train long enough. We show that although pre-training may not improve performance on traditional classification metrics, it does provide large benefits to model robustness and uncertainty. Through extensive experiments on label corruption, class imbalance, adversarial examples, out-of-distribution detection, and confidence calibration, we demonstrate large gains from pre-training and complementary effects with task-specific methods. We show approximately a 30% relative improvement in label noise robustness and a 10% absolute improvement in adversarial robustness on CIFAR-10 and CIFAR-100. In some cases, using pre-training without task-specific methods surpasses the state-of-the-art, highlighting the importance of using pre-training when evaluating future methods on robustness and uncertainty tasks.
Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset. In the last two years, convolutional neural networks (CNNs) have achieved an impressive suite of results on standard recognition datasets and tasks. CNN-based features seem poised to quickly replace engineered representations, such as SIFT and HOG. However, compared to SIFT and HOG, we understand much less about the nature of the features learned by large CNNs. In this paper, we experimentally probe several aspects of CNN feature learning in an attempt to help practitioners gain useful, evidence-backed intuitions about how to apply CNNs to computer vision problems.
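The feature-transfer protocol discussed in these abstracts follows a recipe that can be sketched briefly. The snippet below is a generic PyTorch illustration, not the exact experimental protocol of the cited studies; the backbone, the number of target classes, and the freezing choices are placeholders, and the pretrained-weights flag name varies with the torchvision version.

import torch.nn as nn
from torchvision import models

# start from ImageNet-pretrained features
model = models.resnet18(pretrained=True)

# option 1: freeze the transferred layers and train only a new task-specific head
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)   # 10 = placeholder number of target classes

# option 2: fine-tune the whole network (typically with a small learning rate)
for p in model.parameters():
    p.requires_grad = True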
Abstract of query paper
Cite abstracts
957
956
Tuning a pre-trained network is commonly thought to improve data efficiency. However, Kaiming He et al. have called into question the utility of pre-training by showing that training from scratch can often yield similar performance, should the model train long enough. We show that although pre-training may not improve performance on traditional classification metrics, it does provide large benefits to model robustness and uncertainty. Through extensive experiments on label corruption, class imbalance, adversarial examples, out-of-distribution detection, and confidence calibration, we demonstrate large gains from pre-training and complementary effects with task-specific methods. We show approximately a 30% relative improvement in label noise robustness and a 10% absolute improvement in adversarial robustness on CIFAR-10 and CIFAR-100. In some cases, using pre-training without task-specific methods surpasses the state-of-the-art, highlighting the importance of using pre-training when evaluating future methods on robustness and uncertainty tasks.
The growing importance of massive datasets with the advent of deep learning makes robustness to label noise a critical property for classifiers to have. Sources of label noise include automatic labeling for large datasets, non-expert labeling, and label corruption by data poisoning adversaries. In the latter case, corruptions may be arbitrarily bad, even so bad that a classifier predicts the wrong labels with high confidence. To protect against such sources of noise, we leverage the fact that a small set of clean labels is often easy to procure. We demonstrate that robustness to label noise up to severe strengths can be achieved by using a set of trusted data with clean labels, and propose a loss correction that utilizes trusted examples in a data-efficient manner to mitigate the effects of label noise on deep neural network classifiers. Across vision and natural language processing tasks, we experiment with various label noises at several strengths, and show that our method significantly outperforms existing methods. It is challenging to train deep neural networks robustly with noisy labels, as the capacity of deep neural networks is so high that they can totally over-fit on these noisy labels. In this paper, motivated by the memorization effects of deep networks, which show that networks fit clean instances first and then noisy ones, we present a new paradigm called 'Co-teaching' for combating noisy labels. We train two networks simultaneously. First, in each mini-batch of data, each network filters noisy instances based on memorization effects. Then, it teaches the remaining instances to its peer network for updating the parameters. Empirical results on benchmark datasets demonstrate that the robustness of deep learning models trained by the Co-teaching approach is much superior to that of state-of-the-art methods. We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They simply amount to at most a matrix inversion and multiplication, provided that we know the probability of each class being corrupted into another. We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large scale dataset of clothing images employing a diversity of architectures (stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers) demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise. Datasets with significant proportions of noisy (incorrect) class labels present challenges for training accurate Deep Neural Networks (DNNs). We propose a new perspective for understanding DNN generalization for such datasets, by investigating the dimensionality of the deep representation subspace of training samples. We show that from a dimensionality perspective, DNNs exhibit quite distinctive learning styles when trained with clean labels versus when trained with a proportion of noisy labels. 
Based on this finding, we develop a new dimensionality-driven learning strategy, which monitors the dimensionality of subspaces during training and adapts the loss function accordingly. We empirically demonstrate that our approach is highly tolerant to significant proportions of noisy labels, and can effectively learn low-dimensional local subspaces that capture the data distribution.
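The two class-dependent loss-correction procedures mentioned above are usually written in terms of a noise transition matrix T with entries T_{ij} = p(\tilde{y} = j \mid y = i); the forms below are stated from the loss-correction literature, and details such as how T is estimated are in the cited work:

\boldsymbol{\ell}^{\leftarrow}\bigl(\hat{p}(\cdot\mid x)\bigr) \;=\; T^{-1}\,\boldsymbol{\ell}\bigl(\hat{p}(\cdot\mid x)\bigr) \quad \text{(backward correction: matrix inversion)},
\qquad
\ell^{\rightarrow}\bigl(\hat{p}(\cdot\mid x),\,\tilde{y}\bigr) \;=\; \ell\bigl(T^{\top}\hat{p}(\cdot\mid x),\,\tilde{y}\bigr) \quad \text{(forward correction: multiplication)},

where \boldsymbol{\ell} is the vector of per-class losses, \hat{p}(\cdot\mid x) the model's class probabilities, and \tilde{y} the observed (possibly noisy) label.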
Abstract of query paper
Cite abstracts
958
957
Tuning a pre-trained network is commonly thought to improve data efficiency. However, Kaiming He et al. have called into question the utility of pre-training by showing that training from scratch can often yield similar performance, should the model train long enough. We show that although pre-training may not improve performance on traditional classification metrics, it does provide large benefits to model robustness and uncertainty. Through extensive experiments on label corruption, class imbalance, adversarial examples, out-of-distribution detection, and confidence calibration, we demonstrate large gains from pre-training and complementary effects with task-specific methods. We show approximately a 30% relative improvement in label noise robustness and a 10% absolute improvement in adversarial robustness on CIFAR-10 and CIFAR-100. In some cases, using pre-training without task-specific methods surpasses the state-of-the-art, highlighting the importance of using pre-training when evaluating future methods on robustness and uncertainty tasks.
Machine learning, and deep learning in particular, has advanced tremendously on perceptual tasks in recent years. However, it remains vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to a human. In this work, we propose to augment deep neural networks with a small "detector" subnetwork which is trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. Our method is orthogonal to prior work on addressing adversarial perturbations, which has mostly focused on making the classification network itself more robust. We show empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans. Moreover, while the detectors have been trained to detect only a specific adversary, they generalize to similar and weaker adversaries. In addition, we propose an adversarial attack that fools both the classifier and the detector and a novel training procedure for the detector that counteracts this attack. Neural networks are known to be vulnerable to adversarial examples: inputs that are close to natural inputs but classified incorrectly. In order to better understand the space of adversarial examples, we survey ten recent proposals that are designed for detection and compare their efficacy. We show that all can be defeated by constructing new loss functions. We conclude that adversarial examples are significantly harder to detect than previously appreciated, and the properties believed to be intrinsic to adversarial examples are in fact not. Finally, we propose several simple guidelines for evaluating future proposed defenses. The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
Abstract of query paper
Cite abstracts
959
958
To understand the dynamics of optimization in deep neural networks, we develop a tool to study the evolution of the entire Hessian spectrum throughout the optimization process. Using this, we study a number of hypotheses concerning smoothness, curvature, and sharpness in the deep learning literature. We then thoroughly analyze a crucial structural feature of the spectra: in non-batch normalized networks, we observe the rapid appearance of large isolated eigenvalues in the spectrum, along with a surprising concentration of the gradient in the corresponding eigenspaces. In batch normalized networks, these two effects are almost absent. We characterize these effects, and explain how they affect optimization speed through both theory and experiments. As part of this work, we adapt advanced tools from numerical linear algebra that allow scalable and accurate estimation of the entire Hessian spectrum of ImageNet-scale neural networks; this technique may be of independent interest in other applications.
Previous works observed the spectrum of the Hessian of the training loss of deep neural networks. However, the networks considered were of minuscule size. We apply state-of-the-art tools in modern high-dimensional numerical linear algebra to approximate the spectrum of the Hessian of deep nets with tens of millions of parameters. Our results corroborate previous findings, based on small-scale networks, that the Hessian exhibits 'spiked' behavior, with several outliers isolated from a continuous bulk. However we find that the bulk does not follow a simple Marchenko-Pastur distribution, as previously suggested, but rather a heavier-tailed distribution. Finally, we document the dynamics of the outliers and the bulk with varying sample size.
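As a rough illustration of the kind of scalable spectrum estimation both abstracts refer to, the sketch below runs a plain Lanczos iteration in Python; hvp is an assumed user-supplied Hessian-vector product (in practice obtained via automatic differentiation, e.g. Pearlmutter's trick), and all names, step counts, and the explicit-matrix usage example are illustrative rather than taken from the cited work.

import numpy as np

def lanczos_spectrum(hvp, dim, num_steps=80, seed=0):
    # hvp: callable mapping a vector v to the Hessian-vector product H @ v.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    v_prev = np.zeros(dim)
    beta = 0.0
    alphas, betas = [], []
    for _ in range(num_steps):
        w = hvp(v) - beta * v_prev
        alpha = float(w @ v)
        w = w - alpha * v
        beta = float(np.linalg.norm(w))
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-10:
            break
        v_prev, v = v, w / beta
    # Eigenvalues (Ritz values) of the tridiagonal matrix approximate the extreme Hessian eigenvalues.
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)

# Illustrative usage on an explicit symmetric matrix standing in for a Hessian.
H = np.random.randn(200, 200); H = (H + H.T) / 2
print(lanczos_spectrum(lambda v: H @ v, dim=200, num_steps=40)[-5:])  # largest Ritz values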
Abstract of query paper
Cite abstracts
960
959
In this paper, we present a two-stream multi-task network for fashion recognition. This task is challenging as fashion clothing always contains multiple attributes, which need to be predicted simultaneously for real-time industrial systems. To handle these challenges, we formulate fashion recognition as a multi-task learning problem, including landmark detection, category and attribute classification, and solve it with the proposed deep convolutional neural network. We design two knowledge sharing strategies which enable information transfer between tasks and improve the overall performance. The proposed model achieves state-of-the-art results on a large-scale fashion dataset compared to existing methods, which demonstrates its effectiveness and superiority for fashion recognition.
The ubiquity of online fashion shopping demands effective recommendation services for customers. In this paper, we study two types of fashion recommendation: (i) suggesting an item that matches existing components in a set to form a stylish outfit (a collection of fashion items), and (ii) generating an outfit with multimodal (images and text) specifications from a user. To this end, we propose to jointly learn a visual-semantic embedding and the compatibility relationships among fashion items in an end-to-end fashion. More specifically, we consider a fashion outfit to be a sequence (usually from top to bottom and then accessories) and each item in the outfit as a time step. Given the fashion items in an outfit, we train a bidirectional LSTM (Bi-LSTM) model to sequentially predict the next item conditioned on previous ones to learn their compatibility relationships. Further, we learn a visual-semantic space by regressing image features to their semantic representations aiming to inject attribute and category information as a regularization for training the LSTM. The trained network can not only perform the aforementioned recommendations effectively but also predict the compatibility of a given outfit. We conduct extensive experiments on our newly collected Polyvore dataset, and the results provide strong qualitative and quantitative evidence that our framework outperforms alternative methods. Recent advances in clothes recognition have been driven by the construction of clothes datasets. Existing datasets are limited in the amount of annotations and are difficult to cope with the various challenges in real-world applications. In this work, we introduce DeepFashion, a large-scale clothes dataset with comprehensive annotations. It contains over 800,000 images, which are richly annotated with massive attributes, clothing landmarks, and correspondence of images taken under different scenarios including store, street snapshot, and consumer. Such rich annotations enable the development of powerful algorithms in clothes recognition and facilitate future research. To demonstrate the advantages of DeepFashion, we propose a new deep model, namely FashionNet, which learns clothing features by jointly predicting clothing attributes and landmarks. The estimated landmarks are then employed to pool or gate the learned features. It is optimized in an iterative manner. Extensive experiments demonstrate the effectiveness of FashionNet and the usefulness of DeepFashion. In this paper, we define a new task, Exact Street to Shop, where our goal is to match a real-world example of a garment item to the same item in an online shop. This is an extremely challenging task due to visual differences between street photos (pictures of people wearing clothing in everyday uncontrolled settings) and online shop photos (pictures of clothing items on people, mannequins, or in isolation, captured by professionals in more controlled settings). We collect a new dataset for this application containing 404,683 shop photos collected from 25 different online retailers and 20,357 street photos, providing a total of 39,479 clothing item matches between street and shop photos. We develop three different methods for Exact Street to Shop retrieval, including two deep learning baseline methods, and a method to learn a similarity measure between the street and shop domains. Experiments demonstrate that our learned similarity significantly outperforms our baselines that use existing deep learning based representations.
Describing clothing appearance with semantic attributes is an appealing technique for many important applications. In this paper, we propose a fully automated system that is capable of generating a list of nameable attributes for clothes on the human body in unconstrained images. We extract low-level features in a pose-adaptive manner, and combine complementary features for learning attribute classifiers. Mutual dependencies between the attributes are then explored by a Conditional Random Field to further improve the predictions from independent classifiers. We validate the performance of our system on a challenging clothing attribute dataset, and introduce a novel application of dressing style analysis that utilizes the semantic attributes produced by our system. Visual fashion analysis has attracted much attention in recent years. Previous work represented clothing regions by either bounding boxes or human joints. This work presents fashion landmark detection or fashion alignment, which is to predict the positions of functional key points defined on the fashion items, such as the corners of neckline, hemline, and cuff. To encourage future studies, we introduce a fashion landmark dataset (The dataset is available at http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion/LandmarkDetection.html.) with over 120K images, where each image is labeled with eight landmarks. With this dataset, we study fashion alignment by cascading multiple convolutional neural networks in three stages. These stages gradually improve the accuracies of landmark predictions. Extensive experiments demonstrate the effectiveness of the proposed method, as well as its generalization ability to pose estimation. Fashion landmark is also compared to clothing bounding boxes and human joints in two applications, fashion attribute prediction and clothes retrieval, showing that fashion landmark is a more discriminative representation to understand fashion images.
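The Bi-LSTM compatibility idea above can be sketched in a few lines of PyTorch. Note that this is a simplified variant that scores an outfit by how well each item embedding is reconstructed from its neighbours, not the cited paper's exact softmax-over-candidates objective; every dimension, layer size, and name here is an illustrative assumption.

import torch
import torch.nn as nn

class OutfitCompatibility(nn.Module):
    # Assumed sketch: score an outfit (a sequence of item feature vectors) by how well
    # a Bi-LSTM predicts each item from its context; all sizes are illustrative.
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, feat_dim)

    def forward(self, items):              # items: (batch, seq_len, feat_dim)
        h, _ = self.lstm(items)
        pred = self.proj(h)                # predicted embedding for each item position
        # Higher (less negative) score = more compatible outfit.
        return -((pred - items) ** 2).mean(dim=(1, 2))

# Illustrative usage: 4 outfits of 5 items, each item a 512-d feature vector.
scores = OutfitCompatibility()(torch.randn(4, 5, 512))
print(scores.shape)   # torch.Size([4])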
Abstract of query paper
Cite abstracts
961
960
In this paper, we present a two-stream multi-task network for fashion recognition. This task is challenging as fashion clothing always contains multiple attributes, which need to be predicted simultaneously for real-time industrial systems. To handle these challenges, we formulate fashion recognition as a multi-task learning problem, including landmark detection, category and attribute classification, and solve it with the proposed deep convolutional neural network. We design two knowledge sharing strategies which enable information transfer between tasks and improve the overall performance. The proposed model achieves state-of-the-art results on a large-scale fashion dataset compared to existing methods, which demonstrates its effectiveness and superiority for fashion recognition.
In this paper, we investigate ways of conducting a detailed fashion search using query images and attributes. A credible fashion search platform should be able to (1) find images that share the same attributes as the query image, (2) allow users to manipulate certain attributes, e.g. replace the collar attribute from round to v-neck, and (3) handle region-specific attribute manipulations, e.g. replacing the color attribute of the sleeve region without changing the color attribute of other regions. A key challenge to be addressed is that fashion products have multiple attributes and it is important for each of these attributes to have representative features. To address these challenges, we propose the FashionSearchNet which uses a weakly supervised localization method to extract regions of attributes. By doing so, unrelated features can be ignored thus improving the similarity learning. Also, FashionSearchNet incorporates a new procedure that enables region awareness to be able to handle region-specific requests. FashionSearchNet outperforms the most recent fashion search techniques and is shown to be able to carry out different search scenarios using the dynamic queries. We address the problem of cross-domain image retrieval, considering the following practical application: given a user photo depicting a clothing image, our goal is to retrieve the same or attribute-similar clothing items from online shopping stores. This is a challenging problem due to the large discrepancy between online shopping images, usually taken in ideal lighting, pose and background conditions, and user photos captured in uncontrolled conditions. To address this problem, we propose a Dual Attribute-aware Ranking Network (DARN) for retrieval feature learning. More specifically, DARN consists of two sub-networks, one for each domain, whose retrieval feature representations are driven by semantic attribute learning. We show that this attribute-guided learning is a key factor for retrieval accuracy improvement. In addition, to further align with the nature of the retrieval problem, we impose a triplet visual similarity constraint for learning to rank across the two subnetworks. Another contribution of our work is a large-scale dataset which makes the network learning feasible. We exploit customer review websites to crawl a large set of online shopping images and corresponding offline user photos with fine-grained clothing attributes, i.e., around 450,000 online shopping images and about 90,000 exact offline counterpart images of those online ones. All these images are collected from real-world consumer websites reflecting the diversity of the data modality, which makes this dataset unique and rare in the academic community. We extensively evaluate the retrieval performance of networks in different configurations. The top-20 retrieval accuracy is doubled when using the proposed DARN rather than the current popular solution using pre-trained CNN features only (0.570 vs. 0.268). This paper proposes a knowledge-guided fashion network to solve the problem of visual fashion analysis, e.g., fashion landmark localization and clothing category classification. The suggested fashion model is leveraged with high-level human knowledge in this domain. We propose two important fashion grammars: (i) dependency grammar capturing kinematics-like relation, and (ii) symmetry grammar accounting for the bilateral symmetry of clothes.
We introduce Bidirectional Convolutional Recurrent Neural Networks (BCRNNs) for efficiently approaching message passing over grammar topologies, and producing regularized landmark layouts. For enhancing clothing category classification, our fashion network is encoded with two novel attention mechanisms, i.e., landmark-aware attention and category-driven attention. The former forces our network to focus on the functional parts of clothes, and learns domain-knowledge centered representations, leading to a supervised attention mechanism. The latter is goal-driven, which directly enhances task-related features and can be learned in an implicit, top-down manner. Experimental results on large-scale fashion datasets demonstrate the superior performance of our fashion grammar network. Recent advances in clothes recognition have been driven by the construction of clothes datasets. Existing datasets are limited in the amount of annotations and are difficult to cope with the various challenges in real-world applications. In this work, we introduce DeepFashion, a large-scale clothes dataset with comprehensive annotations. It contains over 800,000 images, which are richly annotated with massive attributes, clothing landmarks, and correspondence of images taken under different scenarios including store, street snapshot, and consumer. Such rich annotations enable the development of powerful algorithms in clothes recognition and facilitate future research. To demonstrate the advantages of DeepFashion, we propose a new deep model, namely FashionNet, which learns clothing features by jointly predicting clothing attributes and landmarks. The estimated landmarks are then employed to pool or gate the learned features. It is optimized in an iterative manner. Extensive experiments demonstrate the effectiveness of FashionNet and the usefulness of DeepFashion.
Abstract of query paper
Cite abstracts
962
961
In this paper, we present a two-stream multi-task network for fashion recognition. This task is challenging as fashion clothing always contains multiple attributes, which need to be predicted simultaneously for real-time industrial systems. To handle these challenges, we formulate fashion recognition as a multi-task learning problem, including landmark detection, category and attribute classification, and solve it with the proposed deep convolutional neural network. We design two knowledge sharing strategies which enable information transfer between tasks and improve the overall performance. The proposed model achieves state-of-the-art results on a large-scale fashion dataset compared to existing methods, which demonstrates its effectiveness and superiority for fashion recognition.
This paper proposes a joint multi-task learning algorithm to better predict attributes in images using deep convolutional neural networks (CNN). We consider learning binary semantic attributes through a multi-task CNN model, where each CNN will predict one binary attribute. The multi-task learning allows CNN models to simultaneously share visual knowledge among different attribute categories. Each CNN will generate attribute-specific feature representations, and then we apply multi-task learning on the features to predict their attributes. In our multi-task framework, we propose a method to decompose the overall model's parameters into a latent task matrix and combination matrix. Furthermore, under-sampled classifiers can leverage shared statistics from other classifiers to improve their performance. Natural grouping of attributes is applied such that attributes in the same group are encouraged to share more knowledge. Meanwhile, attributes in different groups will generally compete with each other, and consequently share less knowledge. We show the effectiveness of our method on two popular attribute datasets. In this article we explore the problem of constructing person-specific models for the detection of facial Action Units (AUs), addressing the problem from the point of view of Transfer Learning and Multi-Task Learning. Our starting point is the fact that some expressions, such as smiles, are very easily elicited, annotated, and automatically detected, while others are much harder to elicit and to annotate. We thus consider a novel problem: all AU models for the target subject are to be learnt using person-specific annotated data for a reference AU (AU12 in our case), and no data or little data regarding the target AU. In order to design such a model, we propose a novel Multi-Task Learning and the associated Transfer Learning framework, in which we consider both relations across subjects and AUs. That is to say, we consider a tensor structure among the tasks. Our approach hinges on learning the latent relations among tasks using one single reference AU, and then transferring these latent relations to other AUs. We show that we are able to effectively make use of the annotated data for AU12 when learning other person-specific AU models, even in the absence of data for the target task. Finally, we show the excellent performance of our method when small amounts of annotated data for the target tasks are made available. In this paper, we propose to predict immediacy for interacting persons from still images. A complete immediacy set includes interactions, relative distance, body leaning direction and standing orientation. These measures are found to be related to the attitude, social relationship, social interaction, action, nationality, and religion of the communicators. A large-scale dataset with 10,000 images is constructed, in which all the immediacy measures and the human poses are annotated. We propose a rich set of immediacy representations that help to predict immediacy from imperfect 1-person and 2-person pose estimation results. A multi-task deep recurrent neural network is constructed to take the proposed rich immediacy representation as input and learn the complex relationship among immediacy predictions through multiple steps of refinement. The effectiveness of the proposed approach is demonstrated through extensive experiments on the large-scale dataset.
We propose a heterogeneous multi-task learning framework for human pose estimation from monocular images with a deep convolutional neural network. In particular, we simultaneously learn a pose-joint regressor and a sliding-window body-part detector in a deep network architecture. We show that including the body-part detection task helps to regularize the network, directing it to converge to a good solution. We report competitive and state-of-the-art results on several data sets. We also empirically show that the learned neurons in the middle layer of our network are tuned to localized body parts. Multi-Task Learning (MTL) is a learning paradigm in machine learning and its aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. In this paper, we give a survey for MTL. First, we classify different MTL algorithms into several categories, including feature learning approach, low-rank approach, task clustering approach, task relation learning approach, and decomposition approach, and then discuss the characteristics of each approach. In order to improve the performance of learning tasks further, MTL can be combined with other learning paradigms including semi-supervised learning, active learning, unsupervised learning, reinforcement learning, multi-view learning and graphical models. When the number of tasks is large or the data dimensionality is high, it is difficult for batch MTL models to handle this situation, and online, parallel and distributed MTL models as well as dimensionality reduction and feature hashing are reviewed to reveal their computational and storage advantages. Many real-world applications use MTL to boost their performance and we review representative works. Finally, we present theoretical analyses and discuss several future directions for MTL.
Abstract of query paper
Cite abstracts
963
962
It is an easy task for humans to learn and generalize a problem, perhaps because of their ability to visualize and imagine unseen objects and concepts. The power of imagination comes in handy especially when interpolating learnt experience (like seen examples) over new classes of a problem. For a machine learning system, acquiring such powers of imagination is still a hard task. We present a novel approach to low-shot learning that uses the idea of imagination over unseen classes in a classification problem setting. We combine a classifier with a 'visionary' (i.e., a GAN model) that teaches the classifier to generalize itself over new and unseen classes. This approach can be incorporated into a variety of problem settings where we need a classifier to learn and generalize itself to new and unseen classes. We compare the performance of classifiers with and without the visionary GAN model helping them.
Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning. Low-shot visual learning---the ability to recognize novel object categories from very few examples---is a hallmark of human visual intelligence. Existing machine learning approaches fail to generalize in the same way. To make progress on this foundational problem, we present a low-shot learning benchmark on complex images that mimics challenges faced by recognition systems in the wild. We then propose a) representation regularization techniques, and b) techniques to hallucinate additional training examples for data-starved classes. Together, our methods improve the effectiveness of convolutional networks in low-shot learning, improving the one-shot accuracy on novel classes by 2.3x on the challenging ImageNet dataset. People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms—for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world’s alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several “visual Turing tests” probing the model’s creative generalization abilities, which in many cases are indistinguishable from human behavior. Humans can quickly learn new visual concepts, perhaps because they can easily visualize or imagine what novel objects look like from different views. Incorporating this ability to hallucinate novel instances of new concepts might help machine vision systems perform better low-shot learning, i.e., learning concepts from few examples. We present a novel approach to low-shot learning that uses this idea. Our approach builds on recent progress in meta-learning ("learning to learn") by combining a meta-learner with a "hallucinator" that produces additional training examples, and optimizing both models jointly. Our hallucinator can be incorporated into a variety of meta-learners and provides significant gains: up to a 6 point boost in classification accuracy when only a single training example is available, yielding state-of-the-art performance on the challenging ImageNet low-shot classification benchmark. 
We define a process called congealing in which elements of a dataset (images) are brought into correspondence with each other jointly, producing a data-defined model. It is based upon minimizing the summed component-wise (pixel-wise) entropies over a continuous set of transforms on the data. One of the byproducts of this minimization is a set of transforms, one associated with each original training sample. We then demonstrate a procedure for effectively bringing test data into correspondence with the data-defined model produced in the congealing process. Subsequently, we develop a probability density over the set of transforms that arose from the congealing process. We suggest that this density over transforms may be shared by many classes, and demonstrate how using this density as "prior knowledge" can be used to develop a classifier based on only a single training example for each class. Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank. We consider the problem of data augmentation, i.e., generating artificial samples to extend a given corpus of training data. Specifically, we propose attribute-guided augmentation (AGA), which learns a mapping that allows one to synthesize data such that an attribute of a synthesized sample is at a desired value or strength. This is particularly interesting in situations where little data with no attribute annotation is available for learning, but we have access to a large external corpus of heavily annotated samples. While prior works primarily augment in the space of images, we propose to perform augmentation in feature space instead. We implement our approach as a deep encoder-decoder architecture that learns the synthesis function in an end-to-end manner. We demonstrate the utility of our approach on the problems of (1) one-shot object recognition in a transfer-learning setting where we have no prior knowledge of the new classes, as well as (2) object-based one-shot scene recognition. As external data, we leverage 3D depth and pose information from the SUN RGB-D dataset. Our experiments show that attribute-guided augmentation of high-level CNN features considerably improves one-shot recognition performance on both problems. We develop a hierarchical Bayesian model that learns categories from single training examples. The model transfers acquired knowledge from previously learned categories to a novel category, in the form of a prior over category means and variances. The model discovers how to group categories into meaningful super-categories that express different priors for new classes.
Given a single example of a novel category, we can efficiently infer which super-category the novel category belongs to, and thereby estimate not only the new category's mean but also an appropriate similarity metric based on parameters inherited from the super-category. On MNIST and MSR Cambridge image datasets the model learns useful representations of novel categories based on just a single training example, and performs significantly better than simpler hierarchical Bayesian approaches. It can also discover new categories in a completely unsupervised fashion, given just one or a few examples.
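As a toy illustration of the hallucination idea running through these abstracts, the sketch below augments a one-shot support set with synthetic feature vectors before nearest-centroid classification. The Gaussian-jitter generator is a deliberately crude stand-in for a learned GAN or hallucinator, and every name, dimension, and constant is an assumption for illustration only.

import numpy as np

def hallucinate(feats, num_new=20, scale=0.1, seed=0):
    # Stand-in for a learned generator/GAN: jitter real feature vectors in feature space.
    rng = np.random.default_rng(seed)
    reps = feats[rng.integers(0, len(feats), size=num_new)]
    return reps + scale * rng.standard_normal(reps.shape)

def nearest_centroid_predict(support, query):
    # support: dict class_id -> array of feature vectors (real + hallucinated)
    classes = sorted(support)
    centroids = np.stack([support[c].mean(axis=0) for c in classes])
    d = ((query[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return np.array(classes)[d.argmin(axis=1)]

# Illustrative usage: one real 64-d example per class, augmented with hallucinations.
support = {c: np.random.randn(1, 64) for c in range(5)}
support = {c: np.concatenate([f, hallucinate(f)]) for c, f in support.items()}
preds = nearest_centroid_predict(support, np.random.randn(10, 64))
print(preds)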
Abstract of query paper
Cite abstracts
964
963
It is an easy task for humans to learn and generalize a problem, perhaps because of their ability to visualize and imagine unseen objects and concepts. The power of imagination comes in handy especially when interpolating learnt experience (like seen examples) over new classes of a problem. For a machine learning system, acquiring such powers of imagination is still a hard task. We present a novel approach to low-shot learning that uses the idea of imagination over unseen classes in a classification problem setting. We combine a classifier with a 'visionary' (i.e., a GAN model) that teaches the classifier to generalize itself over new and unseen classes. This approach can be incorporated into a variety of problem settings where we need a classifier to learn and generalize itself to new and unseen classes. We compare the performance of classifiers with and without the visionary GAN model helping them.
Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning. Low-shot visual learning---the ability to recognize novel object categories from very few examples---is a hallmark of human visual intelligence. Existing machine learning approaches fail to generalize in the same way. To make progress on this foundational problem, we present a low-shot learning benchmark on complex images that mimics challenges faced by recognition systems in the wild. We then propose a) representation regularization techniques, and b) techniques to hallucinate additional training examples for data-starved classes. Together, our methods improve the effectiveness of convolutional networks in low-shot learning, improving the one-shot accuracy on novel classes by 2.3x on the challenging ImageNet dataset. People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms—for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world’s alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several “visual Turing tests” probing the model’s creative generalization abilities, which in many cases are indistinguishable from human behavior. Humans can quickly learn new visual concepts, perhaps because they can easily visualize or imagine what novel objects look like from different views. Incorporating this ability to hallucinate novel instances of new concepts might help machine vision systems perform better low-shot learning, i.e., learning concepts from few examples. We present a novel approach to low-shot learning that uses this idea. Our approach builds on recent progress in meta-learning ("learning to learn") by combining a meta-learner with a "hallucinator" that produces additional training examples, and optimizing both models jointly. Our hallucinator can be incorporated into a variety of meta-learners and provides significant gains: up to a 6 point boost in classification accuracy when only a single training example is available, yielding state-of-the-art performance on the challenging ImageNet low-shot classification benchmark. 
We define a process called congealing in which elements of a dataset (images) are brought into correspondence with each other jointly, producing a data-defined model. It is based upon minimizing the summed component-wise (pixel-wise) entropies over a continuous set of transforms on the data. One of the byproducts of this minimization is a set of transforms, one associated with each original training sample. We then demonstrate a procedure for effectively bringing test data into correspondence with the data-defined model produced in the congealing process. Subsequently, we develop a probability density over the set of transforms that arose from the congealing process. We suggest that this density over transforms may be shared by many classes, and demonstrate how using this density as "prior knowledge" can be used to develop a classifier based on only a single training example for each class. Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank. We consider the problem of data augmentation, i.e., generating artificial samples to extend a given corpus of training data. Specifically, we propose attribute-guided augmentation (AGA), which learns a mapping that allows one to synthesize data such that an attribute of a synthesized sample is at a desired value or strength. This is particularly interesting in situations where little data with no attribute annotation is available for learning, but we have access to a large external corpus of heavily annotated samples. While prior works primarily augment in the space of images, we propose to perform augmentation in feature space instead. We implement our approach as a deep encoder-decoder architecture that learns the synthesis function in an end-to-end manner. We demonstrate the utility of our approach on the problems of (1) one-shot object recognition in a transfer-learning setting where we have no prior knowledge of the new classes, as well as (2) object-based one-shot scene recognition. As external data, we leverage 3D depth and pose information from the SUN RGB-D dataset. Our experiments show that attribute-guided augmentation of high-level CNN features considerably improves one-shot recognition performance on both problems. We develop a hierarchical Bayesian model that learns categories from single training examples. The model transfers acquired knowledge from previously learned categories to a novel category, in the form of a prior over category means and variances. The model discovers how to group categories into meaningful super-categories that express different priors for new classes.
Given a single example of a novel category, we can efficiently infer which super-category the novel category belongs to, and thereby estimate not only the new category's mean but also an appropriate similarity metric based on parameters inherited from the super-category. On MNIST and MSR Cambridge image datasets the model learns useful representations of novel categories based on just a single training example, and performs significantly better than simpler hierarchical Bayesian approaches. It can also discover new categories in a completely unsupervised fashion, given just one or a few examples.
Abstract of query paper
Cite abstracts
965
964
Recent studies have shown that imbalance ratio is not the only cause of the performance loss of a classifier in imbalanced data classification. In fact, other data factors, such as small disjuncts, noise and overlapping, also play a role in tandem with the imbalance ratio, which makes the problem difficult. Thus far, empirical studies have only demonstrated the relationship between the imbalance ratio and other data factors. To the best of our knowledge, there is no measurement of the extent of influence of class imbalance on the classification performance of imbalanced data. Further, it is also unknown for a dataset which data factor is actually the main barrier to classification. In this paper, we focus on the Bayes optimal classifier and study the influence of class imbalance from a theoretical perspective. Accordingly, we propose an instance measure called Individual Bayes Imbalance Impact Index ( @math ) and a data measure called Bayes Imbalance Impact Index ( @math ). @math and @math reflect the extent of influence purely due to the factor of imbalance in terms of each minority class sample and the whole dataset, respectively. Therefore, @math can be used as an instance complexity measure of imbalance and @math is a criterion showing the degree to which imbalance deteriorates the classification. We can therefore use @math to judge whether it is worth using imbalance recovery methods, such as sampling or cost-sensitive methods, to recover the performance loss of a classifier. The experiments show that @math is highly consistent with the increase of prediction score made by the imbalance recovery methods and @math is highly consistent with the improvement of F1 score made by the imbalance recovery methods on both synthetic and real benchmark datasets.
There are several aspects that might influence the performance achieved by existing learning systems. It has been reported that one of these aspects is related to class imbalance in which examples in training data belonging to one class heavily outnumber the examples in the other class. In this situation, which is found in real world data describing an infrequent but important event, the learning system may have difficulty learning the concept related to the minority class. In this work we perform a broad experimental evaluation involving ten methods, three of them proposed by the authors, to deal with the class imbalance problem in thirteen UCI data sets. Our experiments provide evidence that class imbalance does not systematically hinder the performance of learning systems. In fact, the problem seems to be related to learning with too few minority class examples in the presence of other complicating factors, such as class overlapping. Two of our proposed methods deal with these conditions directly, allying a known over-sampling method with data cleaning methods in order to produce better-defined class clusters. Our comparative experiments show that, in general, over-sampling methods provide more accurate results than under-sampling methods considering the area under the ROC curve (AUC). This result seems to contradict results previously published in the literature. Two of our proposed methods, Smote + Tomek and Smote + ENN, presented very good results for data sets with a small number of positive examples. Moreover, Random over-sampling, a very simple over-sampling method, is very competitive with more complex over-sampling methods. Since the over-sampling methods provided very good performance results, we also measured the syntactic complexity of the decision trees induced from over-sampled data. Our results show that these trees are usually more complex than the ones induced from original data. Random over-sampling usually produced the smallest increase in the mean number of induced rules and Smote + ENN the smallest increase in the mean number of conditions per rule, when compared among the investigated over-sampling methods. In this paper we studied re-sampling methods for learning classifiers from imbalanced data. We carried out a series of experiments on artificial data sets to explore the impact of noisy and borderline examples from the minority class on the classifier performance. Results showed that if data was sufficiently disturbed by these factors, then the focused re-sampling methods - NCR and our SPIDER2 - strongly outperformed the oversampling methods. They were also better for real-life data, where PCA visualizations suggested possible existence of noisy examples and large overlapping areas between classes. We present a comprehensive suite of experimentation on the subject of learning from imbalanced data. When classes are imbalanced, many learning algorithms can suffer from the perspective of reduced performance. Can data sampling be used to improve the performance of learners built from imbalanced data? Is the effectiveness of sampling related to the type of learner? Do the results change if the objective is to optimize different performance metrics? We address these and other issues in this work, showing that sampling in many cases will improve classifier performance. We studied three methods to improve identification of difficult small classes by balancing imbalanced class distribution with data reduction.
The new method, neighborhood cleaning rule (NCL), outperformed simple random and one-sided selection methods in experiments with ten data sets. All reduction methods improved identification of small classes (by 20-30%), but the differences were insignificant. However, significant differences in accuracies, true-positive rates and true-negative rates obtained with the 3-nearest neighbor method and C4.5 from the reduced data favored NCL. The results suggest that NCL is a useful method for improving the modeling of difficult small classes, and for building classifiers to identify these classes from real-world data.
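For readers who want to try the oversampling-plus-cleaning combinations discussed above, the snippet below is a minimal sketch using the imbalanced-learn package, assuming a recent version that exposes SMOTEENN with fit_resample; the dataset sizes, class weights, and choice of classifier are purely illustrative.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from imblearn.combine import SMOTEENN   # assumes the imbalanced-learn package is installed

# Toy imbalanced binary problem (95% vs 5%); all numbers are illustrative.
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)   # oversample, then clean
resampled = LogisticRegression(max_iter=1000).fit(X_res, y_res)

print("F1 without resampling:", f1_score(y_te, baseline.predict(X_te)))
print("F1 with SMOTE + ENN:  ", f1_score(y_te, resampled.predict(X_te)))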
Abstract of query paper
Cite abstracts
966
965
Recent studies have shown that imbalance ratio is not the only cause of the performance loss of a classifier in imbalanced data classification. In fact, other data factors, such as small disjuncts, noise and overlapping, also play a role in tandem with the imbalance ratio, which makes the problem difficult. Thus far, empirical studies have only demonstrated the relationship between the imbalance ratio and other data factors. To the best of our knowledge, there is no measurement of the extent of influence of class imbalance on the classification performance of imbalanced data. Further, it is also unknown for a dataset which data factor is actually the main barrier to classification. In this paper, we focus on the Bayes optimal classifier and study the influence of class imbalance from a theoretical perspective. Accordingly, we propose an instance measure called Individual Bayes Imbalance Impact Index ( @math ) and a data measure called Bayes Imbalance Impact Index ( @math ). @math and @math reflect the extent of influence purely due to the factor of imbalance in terms of each minority class sample and the whole dataset, respectively. Therefore, @math can be used as an instance complexity measure of imbalance and @math is a criterion showing the degree to which imbalance deteriorates the classification. We can therefore use @math to judge whether it is worth using imbalance recovery methods, such as sampling or cost-sensitive methods, to recover the performance loss of a classifier. The experiments show that @math is highly consistent with the increase of prediction score made by the imbalance recovery methods and @math is highly consistent with the improvement of F1 score made by the imbalance recovery methods on both synthetic and real benchmark datasets.
We studied a number of measures that characterize the difficulty of a classification problem, focusing on the geometrical complexity of the class boundary. We compared a set of real-world problems to random labelings of points and found that real problems contain structures in this measurement space that are significantly different from the random sets. Distributions of problems in this space show that there exist at least two independent factors affecting a problem's difficulty. We suggest using this space to describe a classifier's domain of competence. This can guide static and dynamic selection of classifiers for specific problems as well as subproblems formed by confinement, projection, and transformations of the feature vectors. Most data complexity studies have focused on characterizing the complexity of the entire data set and do not provide information about individual instances. Knowing which instances are misclassified and understanding why they are misclassified and how they contribute to data set complexity can improve the learning process and could guide the future development of learning algorithms and data analysis methods. The goal of this paper is to better understand the data used in machine learning problems by identifying and analyzing the instances that are frequently misclassified by learning algorithms that have shown utility to date and are commonly used in practice. We identify instances that are hard to classify correctly (instance hardness) by classifying over 190,000 instances from 64 data sets with 9 learning algorithms. We then use a set of hardness measures to understand why some instances are harder to classify correctly than others. We find that class overlap is a principal contributor to instance hardness. We seek to integrate this information into the training process to alleviate the effects of class overlap and present ways that instance hardness can be used to improve learning.
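One of the simplest instance-level measures in the spirit of the work above is the fraction of an example's k nearest neighbours that disagree with its label (a proxy for class overlap). The sketch below computes it with scikit-learn; it is a rough, assumption-laden illustration, not the authors' exact set of hardness measures, and the data and k value are illustrative.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_instance_hardness(X, y, k=5):
    # Hardness of each instance = fraction of its k nearest neighbours
    # (excluding itself) that carry a different class label.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    neighbour_labels = y[idx[:, 1:]]          # drop the self-match in column 0
    return (neighbour_labels != y[:, None]).mean(axis=1)

# Illustrative usage on random data.
X = np.random.randn(200, 4)
y = np.random.randint(0, 2, size=200)
hardness = knn_instance_hardness(X, y)
print("hardest instances:", np.argsort(hardness)[-5:])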
Abstract of query paper
Cite abstracts
967
966
The crossing number of a graph @math is the least number of crossings over all possible drawings of @math . We present a structural characterization of graphs with crossing number one.
Our main result includes the following, slightly surprising, fact: a 4-connected nonplanar graph G has crossing number at least 2 if and only if, for every pair e,f of edges having no common incident vertex, there are vertex-disjoint cycles in G with one containing e and the other containing f.
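A graph has crossing number at most one exactly when it is planar or some pair of non-adjacent edges can be replaced by a degree-4 "crossing vertex" so that the result is planar; the brute-force sketch below checks this with networkx planarity tests. It is an illustrative utility assuming the networkx library, not an algorithm from either paper.

import itertools
import networkx as nx

def has_crossing_number_at_most_one(G):
    # cr(G) <= 1 iff G is planar, or replacing some pair of non-adjacent edges
    # by a new degree-4 vertex (the would-be crossing point) yields a planar graph.
    if nx.check_planarity(G)[0]:
        return True
    for (a, b), (c, d) in itertools.combinations(G.edges(), 2):
        if {a, b} & {c, d}:
            continue                     # edges sharing a vertex need not cross
        H = G.copy()
        H.remove_edges_from([(a, b), (c, d)])
        x = ("crossing", a, b, c, d)     # fresh vertex name
        H.add_edges_from([(x, a), (x, b), (x, c), (x, d)])
        if nx.check_planarity(H)[0]:
            return True
    return False

# Example: K5 has crossing number exactly 1, while K6 has crossing number 3.
print(has_crossing_number_at_most_one(nx.complete_graph(5)))   # True
print(has_crossing_number_at_most_one(nx.complete_graph(6)))   # False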
Abstract of query paper
Cite abstracts
968
967
The crossing number of a graph @math is the least number of crossings over all possible drawings of @math . We present a structural characterization of graphs with crossing number one.
In this paper we deduce a necessary and sufficient condition for a line graph to have crossing number 1. In addition, we prove that the line graph of any nonplanar graph has crossing number greater than 2.
Abstract of query paper
Cite abstracts
969
968
The crossing number of a graph @math is the least number of crossings over all possible drawings of @math . We present a structural characterization of graphs with crossing number one.
A graph is crossing-critical if deleting any edge decreases its crossing number on the plane. It is proved that, for any n >= 3, there is an infinite family of 3-connected crossing-critical graphs with crossing number n. It is proved that every cubic graph with crossing number at least two contains a subdivision of one of eight graphs. A spelling net for a phrase consists of the multigraph whose points are labelled with the set of distinct letters in the phrase and whose lines lie on the Eulerian path obtained in "spelling out" the phrase between the (lettered) points. Spelling nets can also use phonemes or words as labels. An eodermdrome is a non-planar spelling net. Thus, the study of structural properties of eodermdromes is the study of non-planar Eulerian multigraphs. A graph is crossing-critical if deleting any edge decreases its crossing number on the plane. For any n >= 2 we present a construction of an infinite family of 3-connected crossing-critical graphs with crossing number n. It is very well-known that there are precisely two minimal non-planar graphs: K5 and K3,3 (degree 2 vertices being irrelevant in this context). In the language of crossing numbers, these are the only 1-crossing-critical graphs: they each have crossing number at least one, and every proper subgraph has crossing number less than one. In 1987, Kochol exhibited an infinite family of 3-connected, simple, 2-crossing-critical graphs. In this work, we: (i) determine all the 3-connected 2-crossing-critical graphs that contain a subdivision of the Möbius ladder V10; (ii) show how to obtain all the not 3-connected 2-crossing-critical graphs from the 3-connected ones; (iii) show that there are only finitely many 3-connected 2-crossing-critical graphs not containing a subdivision of V10; and (iv) determine all the 3-connected 2-crossing-critical graphs that do not contain a subdivision of V8. We prove that, for every positive integer k, there is an integer N such that every 4-connected non-planar graph with at least N vertices has a minor isomorphic to K4,k, the graph obtained from a cycle of length 2k+1 by adding an edge joining every pair of vertices at distance exactly k, or the graph obtained from a cycle of length k by adding two vertices adjacent to each other and to every vertex on the cycle. We also prove a version of this for subdivisions rather than minors, and relax the connectivity to allow 3-cuts with one side planar and of bounded size. We deduce that for every integer k there are only finitely many 3-connected 2-crossing-critical graphs with no subdivision isomorphic to the graph obtained from a cycle of length 2k by joining all pairs of diagonally opposite vertices.
Abstract of query paper
Cite abstracts
970
969
Massive volumes of data continuously generated on social platforms have become an important information source for users. A primary method to obtain fresh and valuable information from social streams is . Although there have been extensive studies on social search, existing methods only focus on the of query results but ignore the . In this paper, we propose a novel Semantic and Influence aware @math -Representative ( @math -SIR) query for social streams based on topic modeling. Specifically, we consider that both user queries and elements are represented as vectors in the topic space. A @math -SIR query retrieves a set of @math elements with the maximum over the sliding window at query time w.r.t. the query vector. The representativeness of an element set comprises both semantic and influence scores computed by the topic model. Subsequently, we design two approximation algorithms, namely (MTTS) and (MTTD), to process @math -SIR queries in real-time. Both algorithms leverage the ranked lists maintained on each topic for @math -SIR processing with theoretical guarantees. Extensive experiments on real-world datasets demonstrate the effectiveness of @math -SIR query compared with existing methods as well as the efficiency and scalability of our proposed algorithms for @math -SIR processing.
The efficient processing of document streams plays an important role in many information filtering systems. Emerging applications, such as news update filtering and social network notifications, demand presenting end-users with the most relevant content to their preferences. In this work, user preferences are indicated by a set of keywords. A central server monitors the document stream and continuously reports to each user the top- @math documents that are most relevant to her keywords. Our objective is to support large numbers of users and high stream rates, while refreshing the top- @math results almost instantaneously. Our solution abandons the traditional frequency-ordered indexing approach. Instead, it follows an identifier-ordering paradigm that suits better the nature of the problem. When complemented with a novel, locally adaptive technique, our method offers (i) proven optimality w.r.t. the number of considered queries per stream event, and (ii) an order of magnitude shorter response time (i.e., time to refresh the query results) than the current state-of-the-art. In this article, we study the problem of efficient top-k disjunctive query processing in a huge microblog dataset. In terms of compact indexing, we categorize the keywords into rare terms and common terms based on inverse document frequency (idf) and propose tailored block-oriented organization to save memory consumption. In terms of fast searching, we classify the queries into three types based on term category and judiciously design an efficient search algorithm for each type. We conducted extensive experiments on a billion-scale Twitter dataset and examined the performance with both simple and more advanced ranking functions. The results showed that with a much smaller index size, our search algorithm achieves a 2--3 times speedup over state-of-the-art solutions in both ranking scenarios. Internet users are shifting from searching on traditional media to social network platforms (SNPs) to retrieve up-to-date and valuable information. SNPs have two unique characteristics: frequent content update and small world phenomenon. However, existing works are not able to support these two features simultaneously. To address this problem, we develop a general framework to enable real time personalized top-k query. Our framework is based on a general ranking function that incorporates time freshness, social relevance and textual similarity. To ensure efficient update and query processing, there are two key challenges. The first is to design an index structure that is update-friendly while supporting instant query processing. The second is to efficiently compute the social relevance in a complex graph. To address these challenges, we first design a novel 3D cube inverted index to support efficient pruning on the three dimensions simultaneously. Then we devise a cube based threshold algorithm to retrieve the top-k results, and propose several pruning techniques to optimize the social distance computation, whose cost dominates the query processing. Furthermore, we optimize the 3D index via a hierarchical partition method to enhance our pruning on the social dimension. Extensive experimental results on two real world large datasets demonstrate the efficiency and the robustness of our proposed solution. Real-time search dictates that new content be made available for search immediately following its creation.
From the database perspective, this requirement may be quite easily met by creating an up-to-date index for the contents and measuring search quality by the time gap between insertion time and availability of the index. This approach, however, poses new challenges for micro-blogging systems where thousands of concurrent users may upload their micro-blogs or tweets simultaneously. Due to the high update and query loads, conventional approaches would either fail to index the huge amount of newly created contents in real time or fall short of providing a scalable indexing service. In this paper, we propose a tweet index called the TI (Tweet Index), an adaptive indexing scheme for microblogging systems such as Twitter. The intuition of the TI is to index the tweets that may appear as a search result with high probability and delay indexing some other tweets. This strategy significantly reduces the indexing cost without compromising the quality of the search results. In the TI, we also devise a new ranking scheme by combining the relationship between the users and tweets. We group tweets into topics and update the ranking of a topic dynamically. The experiments on a real Twitter dataset confirm the efficiency of the TI. The web today is increasingly characterized by social and real-time signals, which we believe represent two frontiers in information retrieval. In this paper, we present Earlybird, the core retrieval engine that powers Twitter's real-time search service. Although Earlybird builds and maintains inverted indexes like nearly all modern retrieval engines, its index structures differ from those built to support traditional web search. We describe these differences and present the rationale behind our design. A key requirement of real-time search is the ability to ingest content rapidly and make it searchable immediately, while concurrently supporting low-latency, high-throughput query evaluation. These demands are met with a single-writer, multiple-reader concurrency model and the targeted use of memory barriers. Earlybird represents a point in the design space of real-time search engines that has worked well for Twitter's needs. By sharing our experiences, we hope to spur additional interest and innovation in this exciting space. Indexing microblogs for real-time search is challenging given the efficiency issue caused by the tremendous speed at which new microblogs are created by users. Existing approaches address this efficiency issue at the cost of query accuracy, as they either (i) exclude a significant portion of microblogs from the index to reduce update cost or (ii) rank microblogs mostly by their timestamps (without sufficient consideration of their relevance to the queries) to enable append-only index insertion. As a consequence, the search results returned by the existing approaches do not satisfy the users who demand timely and high-quality search results. To remedy this deficiency, we propose the Log-Structured Inverted Indices (LSII), a structure for exact real-time search on microblogs. The core of LSII is a sequence of inverted indices with exponentially increasing sizes, such that new microblogs are (i) first inserted into the smallest index and (ii) later moved into the larger indices in a batch manner. The batch insertion mechanism leads to a small amortized update cost for each new microblog, without significantly degrading query performance.
We present a comprehensive study on LSII, exploring various design options to strike a good balance between query and update performance. In addition, we propose extensions of LSII to support personalized search and to exploit multi-threading for performance improvement. Extensive experiments on real data demonstrate the efficiency of LSII. Massive amounts of text data are being generated by a huge number of web users at an unprecedented scale. These data cover a wide range of topics. Users are interested in receiving a few up-to-date representative documents (e.g., tweets) that can provide them with a wide coverage of different aspects of their query topics. To address the problem, we consider the Diversity-Aware Top-k Subscription (DAS) query. Given a DAS query, we continuously maintain an up-to-date result set that contains the k most recently returned documents over a text stream for the query. The DAS query takes into account text relevance, document recency, and result diversity. We propose a novel solution to efficiently process a large number of DAS queries over a stream of documents. We demonstrate the efficiency of our approach on a real-world dataset, and the experimental results show that our solution is able to achieve a reduction of the processing time by 60-75% compared with two baselines. We also study the effectiveness of the DAS query.
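The log-structured design described for LSII above is easy to picture with a small sketch. The following toy Python index is not the authors' implementation; the class name, capacity, and growth parameters are illustrative. It keeps a chain of inverted indices whose capacities grow geometrically, inserts new documents into the smallest one, and merges full levels into the next larger level in batches, which is what yields the small amortized update cost per document.

    from collections import defaultdict

    class LogStructuredIndex:
        """Toy log-structured inverted index: a chain of indices with
        geometrically growing capacities; new documents enter the
        smallest index and full levels are merged into larger ones."""

        def __init__(self, base_capacity=4, growth=2):
            self.base_capacity = base_capacity
            self.growth = growth
            self.levels = [defaultdict(list)]   # term -> list of doc ids
            self.sizes = [0]

        def _capacity(self, level):
            return self.base_capacity * (self.growth ** level)

        def insert(self, doc_id, terms):
            for t in set(terms):
                self.levels[0][t].append(doc_id)
            self.sizes[0] += 1
            # Cascade batch merges while a level is full (amortized O(1) per doc).
            level = 0
            while self.sizes[level] >= self._capacity(level):
                if level + 1 == len(self.levels):
                    self.levels.append(defaultdict(list))
                    self.sizes.append(0)
                for t, postings in self.levels[level].items():
                    self.levels[level + 1][t].extend(postings)
                self.sizes[level + 1] += self.sizes[level]
                self.levels[level] = defaultdict(list)
                self.sizes[level] = 0
                level += 1

        def search(self, term):
            # Query every level, newest (smallest) first.
            result = []
            for lvl in self.levels:
                result.extend(lvl.get(term, []))
            return result

A query touches every level, so the number of levels (logarithmic in the data size) is the price paid for cheap inserts; this trade-off is the point the LSII study explores.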
Abstract of query paper
Cite abstracts
971
970
Massive volumes of data continuously generated on social platforms have become an important information source for users. A primary method to obtain fresh and valuable information from social streams is social search. Although there have been extensive studies on social search, existing methods only focus on the relevance of query results but ignore their representativeness. In this paper, we propose a novel Semantic and Influence aware k-Representative (k-SIR) query for social streams based on topic modeling. Specifically, we consider that both user queries and elements are represented as vectors in the topic space. A k-SIR query retrieves a set of k elements with the maximum representativeness over the sliding window at query time w.r.t. the query vector. The representativeness of an element set comprises both semantic and influence scores computed by the topic model. Subsequently, we design two approximation algorithms, namely MTTS and MTTD, to process k-SIR queries in real time. Both algorithms leverage the ranked lists maintained on each topic for k-SIR processing with theoretical guarantees. Extensive experiments on real-world datasets demonstrate the effectiveness of the k-SIR query compared with existing methods as well as the efficiency and scalability of our proposed algorithms for k-SIR processing.
Social media advertising is a multi-billion dollar market and has become the major revenue source for Facebook and Twitter. To deliver ads to potentially interested users, these social network platforms learn a prediction model for each user based on their personal interests. However, as user interests often evolve slowly, the user may end up receiving repetitive ads. In this paper, we propose a context-aware advertising framework that takes into account the relatively static personal interests as well as the dynamic news feed from friends to drive growth in the ad click-through rate. To meet the real-time requirement, we first propose an online retrieval strategy that finds the k most relevant ads matching the dynamic context when a read operation is triggered. To avoid frequent retrieval when the context varies little, we propose a safe region method to quickly determine whether the top-k ads of a user have changed. Finally, we propose a hybrid model to combine the merits of both methods by analyzing the dynamism of the news feed to determine an appropriate retrieval strategy. Extensive experiments conducted on multiple real social networks and ad datasets verified the efficiency and robustness of our hybrid model. Many real applications in real-time news stream advertising call for efficient processing of long queries against short text. In such applications, dynamic news feeds are regarded as queries to match against an advertisement (ad) database for retrieving the k most relevant ads. The existing approaches to keyword retrieval cannot work well in this search scenario when queries are triggered at a very high frequency. To address the problem, we introduce new techniques to significantly improve search performance. First, we devise a two-level partitioning for tight upper bound estimation and a lazy evaluation scheme to delay full evaluation of unpromising candidates, which can bring a three to four times performance boost in a database with 7 million ads. Second, we propose a novel rank-aware block-oriented inverted index to further improve performance. In this index scheme, each entry in an inverted list is assigned a rank according to its importance in the ad. Then, we introduce a block-at-a-time search strategy based on the index scheme to support a much tighter upper bound estimation and very early termination. We have conducted experiments with real datasets, and the results show that the rank-aware method can further improve performance by an order of magnitude.
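The upper-bound estimation and lazy-evaluation ideas above reduce to a simple pattern: score candidates with a cheap overestimate first, and only run the expensive exact scoring while the result can still change the top-k. A minimal sketch under stated assumptions; the upper_bound and score callables are placeholders, not the papers' actual functions, and candidate ids are assumed to be plain comparable values such as integers.

    import heapq

    def topk_with_upper_bounds(query_terms, index, upper_bound, score, k):
        """index: term -> list of candidate ids.
        upper_bound(cand, query_terms) must never underestimate
        score(cand, query_terms); early termination relies on that."""
        candidates = {c for t in query_terms for c in index.get(t, [])}
        # Cheap pass: order candidates by their upper bound, best first.
        ranked = sorted(candidates,
                        key=lambda c: upper_bound(c, query_terms),
                        reverse=True)
        heap = []   # min-heap of (score, cand) holding the current top-k
        for cand in ranked:
            if len(heap) == k and upper_bound(cand, query_terms) <= heap[0][0]:
                break   # no remaining candidate can enter the top-k
            s = score(cand, query_terms)        # full (expensive) evaluation
            if len(heap) < k:
                heapq.heappush(heap, (s, cand))
            elif s > heap[0][0]:
                heapq.heapreplace(heap, (s, cand))
        return sorted(heap, reverse=True)       # (score, cand) pairs, best first

Because candidates are visited in decreasing upper-bound order, the first candidate whose bound cannot beat the current k-th score certifies that no later candidate can either, which is the "very early termination" the rank-aware index is built to enable.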
Abstract of query paper
Cite abstracts
972
971
Massive volumes of data continuously generated on social platforms have become an important information source for users. A primary method to obtain fresh and valuable information from social streams is social search. Although there have been extensive studies on social search, existing methods only focus on the relevance of query results but ignore their representativeness. In this paper, we propose a novel Semantic and Influence aware k-Representative (k-SIR) query for social streams based on topic modeling. Specifically, we consider that both user queries and elements are represented as vectors in the topic space. A k-SIR query retrieves a set of k elements with the maximum representativeness over the sliding window at query time w.r.t. the query vector. The representativeness of an element set comprises both semantic and influence scores computed by the topic model. Subsequently, we design two approximation algorithms, namely MTTS and MTTD, to process k-SIR queries in real time. Both algorithms leverage the ranked lists maintained on each topic for k-SIR processing with theoretical guarantees. Extensive experiments on real-world datasets demonstrate the effectiveness of the k-SIR query compared with existing methods as well as the efficiency and scalability of our proposed algorithms for k-SIR processing.
With the explosive growth of microblogging services, short-text messages (also known as tweets) are being created and shared at an unprecedented rate. Tweets in their raw form can be incredibly informative, but also overwhelming. For both end-users and data analysts it is a nightmare to plow through millions of tweets which contain enormous noise and redundancy. In this paper, we study continuous tweet summarization as a solution to address this problem. While traditional document summarization methods focus on static and small-scale data, we aim to deal with dynamic, quickly arriving, and large-scale tweet streams. We propose a novel prototype called Sumblr (SUMmarization By stream cLusteRing) for tweet streams. We first propose an online tweet stream clustering algorithm to cluster tweets and maintain distilled statistics called Tweet Cluster Vectors. Then we develop a TCV-Rank summarization technique for generating online summaries and historical summaries of arbitrary time durations. Finally, we describe a topic evolvement detection method, which consumes online and historical summaries to produce timelines automatically from tweet streams. Our experiments on large-scale real tweets demonstrate the efficiency and effectiveness of our approach. Short-text messages such as tweets are being created and shared at an unprecedented rate. Tweets, in their raw form, while being informative, can also be overwhelming. For both end-users and data analysts, it is a nightmare to plow through millions of tweets which contain an enormous amount of noise and redundancy. In this paper, we propose a novel continuous summarization framework called Sumblr to alleviate the problem. In contrast to the traditional document summarization methods which focus on static and small-scale data sets, Sumblr is designed to deal with dynamic, fast-arriving, and large-scale tweet streams. Our proposed framework consists of three major components. First, we propose an online tweet stream clustering algorithm to cluster tweets and maintain distilled statistics in a data structure called tweet cluster vector (TCV). Second, we develop a TCV-Rank summarization technique for generating online summaries and historical summaries of arbitrary time durations. Third, we design an effective topic evolution detection method, which monitors summary-based and volume-based variations to produce timelines automatically from tweet streams. Our experiments on large-scale real tweets demonstrate the efficiency and effectiveness of our framework. Today's social platforms, such as Twitter and Facebook, continuously generate massive volumes of data. The resulting data streams exceed any reasonable limit for permanent storage, especially since data is often redundant, overlapping, sparse, and generally of low value. This calls for means to retain solely a small fraction of the data in an online manner. In this paper, we propose techniques to effectively decide which data to retain, such that the induced loss of information, the regret of neglecting certain data, is minimized. These techniques enable not only efficient processing of massive streaming data, but are also adaptive and address the dynamic nature of social media. Experiments on large-scale real-world datasets illustrate the feasibility of our approach in terms of both runtime and information quality.
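A minimal sketch of the online clustering step that Sumblr-style systems rely on: each cluster is summarized by an additive term vector (a stand-in for a tweet cluster vector), and an arriving tweet either joins the most similar cluster or opens a new one. The threshold value and data structures here are illustrative, not taken from the papers.

    import math
    from collections import Counter

    def cosine(a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    class StreamClusterer:
        """Toy single-pass clustering: clusters keep summed term vectors,
        so adding a tweet is O(vocabulary of the tweet)."""

        def __init__(self, threshold=0.3):
            self.threshold = threshold
            self.clusters = []    # list of Counter term vectors
            self.members = []     # list of tweet-id lists

        def add(self, tweet_id, terms):
            vec = Counter(terms)
            best, best_sim = None, 0.0
            for i, centre in enumerate(self.clusters):
                sim = cosine(vec, centre)
                if sim > best_sim:
                    best, best_sim = i, sim
            if best is not None and best_sim >= self.threshold:
                self.clusters[best].update(vec)      # merge into closest cluster
                self.members[best].append(tweet_id)
            else:
                self.clusters.append(vec)            # open a new cluster
                self.members.append([tweet_id])

Keeping only these additive statistics, rather than the raw tweets, is what lets such systems answer summary requests over arbitrary time ranges without storing the full stream.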
We focus on the problem of selecting meaningful tweets given a user's interests; the dynamic nature of user interests, the sheer volume, and the sparseness of individual messages make this a challenging problem. Specifically, we consider the task of time-aware tweets summarization, based on a user's history and collaborative social influences from "social circles." We propose a time-aware user behavior model, the Tweet Propagation Model (TPM), in which we infer dynamic probabilistic distributions over interests and topics. We then explicitly consider novelty, coverage, and diversity to arrive at an iterative optimization algorithm for selecting tweets. Experimental results validate the effectiveness of our personalized time-aware tweets summarization method based on TPM. Microblogging services have revolutionized the way people exchange information. Confronted with the ever-increasing numbers of social events and the corresponding microblogs with multimedia contents, it is desirable to provide visualized summaries to help users to quickly grasp the essence of these social events for better understanding. While existing approaches mostly focus only on text-based summaries, microblog summarization with multiple media types (e.g., text, image, and video) is scarcely explored. In this paper, we propose a multimedia social event summarization framework to automatically generate visualized summaries from the microblog stream of multiple media types. Specifically, the proposed framework comprises three stages, as follows. 1) A noise removal approach is first devised to eliminate potentially noisy images. An effective spectral filtering model is exploited to estimate the probability that an image is relevant to a given event. 2) A novel cross-media probabilistic model, termed Cross-Media-LDA (CMLDA), is proposed to jointly discover subevents from microblogs of multiple media types. The intrinsic correlations among these different media types are well explored and exploited for reinforcing the cross-media subevent discovery process. 3) Finally, based on the cross-media knowledge of all the discovered subevents, a multimedia microblog summary generation process is designed to jointly identify both representative textual and visual samples, which are further aggregated to form a holistic visualized summary. We conduct extensive experiments on two real-world microblog datasets to demonstrate the superiority of the proposed framework as compared to the state-of-the-art approaches. The large amounts of data generated on microblogging services are making summarization challenging. Previous research has mostly focused on working in batches or with filtered streams. Input data has to be saved and analyzed several times, in order to detect underlying events and then summarize them. We improve the efficiency of this process by designing an online abstractive algorithm. Processing is done in a single pass, removing the need to save any input data and improving the running time. An online approach is also able to generate the summaries in real time, using the latest information. The algorithm we propose uses a word graph, along with optimization techniques such as decaying windows and pruning. It outperforms the baseline in terms of summary quality, as well as time and memory efficiency. With the dramatic growth of social media users, microblogs are created and shared at an unprecedented rate.
The high velocity and large volumes of short text posts (microblogs) bring redundancies and noise, making it hard for users and analysts to elicit useful information. In this paper, we formalize the problem from a summarization angle – Continuous Summarization over Microblog Threads (CSMT), which considers three facets: information gain of the microblog dialogue, diversity, and temporal information. This summarization problem is different from the classic ones in two aspects: (i) it is considered over large-scale, dynamic data with a high updating frequency; (ii) the context between microblogs is taken into account. We first prove that the CSMT problem is NP-hard. Then we propose a greedy algorithm with a (1 - 1/e) performance guarantee. Finally, we extend the greedy algorithm to the sliding window to continuously summarize microblogs for threads. Our experimental results on large-scale datasets show that our method is superior to the other two baselines in terms of summary diversity and information gain, with a time cost close to that of the best-performing baseline. A viewpoint is a triple consisting of an entity, a topic related to this entity and sentiment towards this topic. In time-aware multi-viewpoint summarization one monitors viewpoints for a running topic and selects a small set of informative documents. In this paper, we focus on time-aware multi-viewpoint summarization of multilingual social text streams. Viewpoint drift, ambiguous entities and multilingual text make this a challenging task. Our approach includes three core ingredients: dynamic viewpoint modeling, cross-language viewpoint alignment, and, finally, multi-viewpoint summarization. Specifically, we propose a dynamic latent factor model to explicitly characterize a set of viewpoints through which entities, topics and sentiment labels during a time interval are derived jointly; we connect viewpoints in different languages by using an entity-based semantic similarity measure; and we employ an update viewpoint summarization strategy to generate a time-aware summary to reflect viewpoints. Experiments conducted on a real-world dataset demonstrate the effectiveness of our proposed method for time-aware multi-viewpoint summarization of multilingual social text streams.
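The (1 - 1/e) guarantee cited for CSMT comes from the classical greedy rule for maximizing a monotone submodular objective under a cardinality constraint: repeatedly pick the element with the largest marginal gain. A generic sketch, using plain set coverage as a stand-in objective rather than the paper's actual summarization score:

    def greedy_max_coverage(candidates, k):
        """candidates: id -> set of covered items (e.g., covered aspects).
        Greedy selection gives a (1 - 1/e) approximation for monotone
        submodular objectives such as coverage (Nemhauser et al.)."""
        covered, selected = set(), []
        for _ in range(k):
            best, best_gain = None, 0
            for cid, items in candidates.items():
                if cid in selected:
                    continue
                gain = len(items - covered)     # marginal gain of adding cid
                if gain > best_gain:
                    best, best_gain = cid, gain
            if best is None:                    # nothing adds new coverage
                break
            selected.append(best)
            covered |= candidates[best]
        return selected, covered

The sliding-window variants discussed above keep this greedy core but restrict the candidate pool to the elements currently inside the window.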
Abstract of query paper
Cite abstracts
973
972
Massive volumes of data continuously generated on social platforms have become an important information source for users. A primary method to obtain fresh and valuable information from social streams is social search. Although there have been extensive studies on social search, existing methods only focus on the relevance of query results but ignore their representativeness. In this paper, we propose a novel Semantic and Influence aware k-Representative (k-SIR) query for social streams based on topic modeling. Specifically, we consider that both user queries and elements are represented as vectors in the topic space. A k-SIR query retrieves a set of k elements with the maximum representativeness over the sliding window at query time w.r.t. the query vector. The representativeness of an element set comprises both semantic and influence scores computed by the topic model. Subsequently, we design two approximation algorithms, namely MTTS and MTTD, to process k-SIR queries in real time. Both algorithms leverage the ranked lists maintained on each topic for k-SIR processing with theoretical guarantees. Extensive experiments on real-world datasets demonstrate the effectiveness of the k-SIR query compared with existing methods as well as the efficiency and scalability of our proposed algorithms for k-SIR processing.
Given a water distribution network, where should we place sensors to quickly detect contaminants? Or, which blogs should we read to avoid missing important stories? These seemingly different problems share common structure: outbreak detection can be modeled as selecting nodes (sensor locations, blogs) in a network, in order to detect the spreading of a virus or information as quickly as possible. We present a general methodology for near-optimal sensor placement in these and related problems. We demonstrate that many realistic outbreak detection objectives (e.g., detection likelihood, population affected) exhibit the property of "submodularity". We exploit submodularity to develop an efficient algorithm that scales to large problems, achieving near-optimal placements, while being 700 times faster than a simple greedy algorithm. We also derive online bounds on the quality of the placements obtained by any algorithm. Our algorithms and bounds also handle cases where nodes (sensor locations, blogs) have different costs. We evaluate our approach on several large real-world problems, including a model of a water distribution network from the EPA, and real blog data. The obtained sensor placements are provably near optimal, providing a constant fraction of the optimal solution. We show that the approach scales, achieving speedups and savings in storage of several orders of magnitude. We also show how the approach leads to deeper insights in both applications, answering multicriteria trade-off, cost-sensitivity and generalization questions. Let N be a finite set and z be a real-valued function defined on the set of subsets of N that satisfies z(S) + z(T) ≥ z(S ∪ T) + z(S ∩ T) for all S, T ⊆ N. Such a function is called submodular. We consider the problem max_{S ⊆ N} { z(S) : |S| ≤ K, z submodular }. Several hard combinatorial optimization problems can be posed in this framework. For example, the problem of finding a maximum weight independent set in a matroid, when the elements of the matroid are colored and the elements of the independent set can have no more than K colors, is in this class. The uncapacitated location problem is a special case of this matroid optimization problem. We analyze greedy and local improvement heuristics and a linear programming relaxation for this problem. Our results are worst-case bounds on the quality of the approximations. For example, when z(S) is nondecreasing and z(∅) = 0, we show that a "greedy" heuristic always produces a solution whose value is at least 1 - [(K - 1)/K]^K times the optimal value. This bound can be achieved for each K and has a limiting value of (e - 1)/e, where e is the base of the natural logarithm. There has been much progress recently on improved approximations for problems involving submodular objective functions, and many interesting techniques have been developed. However, the resulting algorithms are often slow and impractical. In this paper we develop algorithms that match the best known approximation guarantees, but with significantly improved running times, for maximizing a monotone submodular function f : 2^[n] → R+ subject to various constraints. As in previous work, we measure the number of oracle calls to the objective function, which is the dominating term in the running time.
Our first result is a simple algorithm that gives a (1 - 1/e - ε)-approximation for a cardinality constraint using O((n/ε) log(n/ε)) queries, and a 1/(p + 2ℓ + 1 + ε)-approximation for the intersection of a p-system and ℓ knapsack (linear) constraints using O((n/ε²) log²(n/ε)) queries. This is the first approximation for a p-system combined with linear constraints. (We also show that the factor of p cannot be improved for maximizing over a p-system.) The main idea behind these algorithms serves as a building block in our more sophisticated algorithms. Our main result is a new variant of the continuous greedy algorithm, which interpolates between the classical greedy algorithm and a truly continuous algorithm. We show how this algorithm can be implemented for matroid and knapsack constraints using O(n²) oracle calls to the objective function. (Previous variants and alternative techniques were known to use at least O(n⁴) oracle calls.) This leads to an O((n²/ε⁴) log²(n/ε))-time (1 - 1/e - ε)-approximation for a matroid constraint. For a knapsack constraint, we develop a more involved (1 - 1/e - ε)-approximation algorithm that runs in time O(n²((1/ε) log n)^poly(1/ε)). Greedy algorithms are practitioners' best friends - they are intuitive, simple to implement, and often lead to very good solutions. However, implementing greedy algorithms in a distributed setting is challenging since the greedy choice is inherently sequential, and it is not clear how to take advantage of the extra processing power. Our main result is a powerful sampling technique that aids in parallelization of sequential algorithms. We then show how to use this primitive to adapt a broad class of greedy algorithms to the MapReduce paradigm; this class includes maximum cover and submodular maximization subject to p-system constraints. Our method yields efficient algorithms that run in a logarithmic number of rounds, while obtaining solutions that are arbitrarily close to those produced by the standard sequential greedy algorithm. We begin with algorithms for modular maximization subject to a matroid constraint, and then extend this approach to obtain approximation algorithms for submodular maximization subject to knapsack or p-system constraints. Finally, we empirically validate our algorithms, and show that they achieve the same quality of the solution as standard greedy algorithms but run in a substantially smaller number of rounds. Representative subset selection (RSS) is an important tool for users to draw insights from massive datasets. Existing literature models RSS as the submodular maximization problem to capture the "diminishing returns" property of the representativeness of selected subsets, but often only has a single constraint (e.g., cardinality), which limits its applications in many real-world problems. To capture the data recency issue and support different types of constraints, we formulate dynamic RSS in data streams as maximizing submodular functions subject to general d-knapsack constraints (SMDK) over sliding windows. We propose a KnapWindow framework (KW) for SMDK. KW utilizes the KnapStream algorithm (KS) for SMDK in append-only streams as a subroutine. It maintains a sequence of checkpoints and KS instances over the sliding window. Theoretically, KW is (1 - ε)/(1 + d)-approximate for SMDK. Furthermore, we propose a KnapWindowPlus framework (KW+) to improve upon KW. KW+ builds an index SubKnapChk to manage the checkpoints and KS instances.
SubKnapChk deletes a checkpoint whenever it can be approximated by its successors. By keeping far fewer checkpoints, KW+ achieves higher efficiency than KW while still guaranteeing a (1 - ε')/(2 + 2d)-approximate solution for SMDK. Finally, we evaluate the efficiency and solution quality of KW and KW+ on real-world datasets. The experimental results demonstrate that KW achieves more than two orders of magnitude speedups over the batch baseline and preserves high-quality solutions for SMDK over sliding windows. KW+ further runs 5-10 times faster than KW while providing solutions with equivalent or even better utilities. How can one summarize a massive data set "on the fly", i.e., without even having seen it in its entirety? In this paper, we address the problem of extracting representative elements from a large stream of data. That is, we would like to select a subset of, say, k data points from the stream that are most representative according to some objective function. Many natural notions of "representativeness" satisfy submodularity, an intuitive notion of diminishing returns. Thus, such problems can be reduced to maximizing a submodular set function subject to a cardinality constraint. Classical approaches to submodular maximization require full access to the data set. We develop the first efficient streaming algorithm with a constant-factor (1/2 - ε) approximation guarantee to the optimum solution, requiring only a single pass through the data, and memory independent of data size. In our experiments, we extensively evaluate the effectiveness of our approach on several applications, including training large-scale kernel methods and exemplar-based clustering, on millions of data points. We observe that our streaming method, while achieving practically the same utility value, runs about 100 times faster than previous work.
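The single-pass (1/2 - ε) streaming guarantee mentioned above can be sketched as follows. This is a simplified SIEVE-STREAMING-style implementation under two assumptions that are mine, not the paper's: the stream is a list that can be scanned once up front to find the largest singleton value, and f is a nonnegative monotone submodular set function with f(empty set) = 0; the published algorithm estimates the largest singleton value on the fly instead.

    import math

    def sieve_streaming(items, f, k, eps=0.1):
        """Single-pass (1/2 - eps) approximation for maximizing a monotone
        submodular set function f under a cardinality constraint k."""
        m = max(f({e}) for e in items)          # largest singleton value
        # Geometric grid of guesses for OPT, which lies in [m, k * m].
        lo = int(math.floor(math.log(m, 1 + eps)))
        hi = int(math.ceil(math.log(k * m, 1 + eps)))
        sieves = {(1 + eps) ** i: set() for i in range(lo, hi + 1)}
        for e in items:
            for v, S in sieves.items():
                if len(S) < k:
                    gain = f(S | {e}) - f(S)
                    # Keep e if it contributes its "fair share" toward v/2.
                    if gain >= (v / 2 - f(S)) / (k - len(S)):
                        S.add(e)
        return max(sieves.values(), key=f)

    # Example with a coverage objective over sets of items.
    universe_of = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}, 3: {1, 2, 3, 4, 5}}
    coverage = lambda S: len(set().union(*(universe_of[e] for e in S))) if S else 0
    print(sieve_streaming(list(universe_of), coverage, k=2))

Each candidate is examined once against every threshold, so memory depends on k and ε but not on the stream length, which is exactly the property the sliding-window frameworks above build on.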
Abstract of query paper
Cite abstracts
974
973
We study the learnability of a class of compact operators known as Schatten–von Neumann operators. These operators between infinite-dimensional function spaces play a central role in a variety of applications in learning theory and inverse problems. We address the question of the sample complexity of learning Schatten–von Neumann operators and provide an upper bound on the number of measurements required for the empirical risk minimizer to generalize with arbitrary precision and probability, as a function of the class parameter @math . Our results give generalization guarantees for regression of infinite-dimensional signals from infinite-dimensional data. Next, we adapt the representer theorem of Abernethy et al. to show that empirical risk minimization over an a priori infinite-dimensional, non-compact set can be converted to a convex finite-dimensional optimization problem over a compact set. In summary, the class of @math -Schatten–von Neumann operators is probably approximately correct (PAC)-learnable via a practical convex program for any @math .
We present a general approach for collaborative filtering (CF) using spectral regularization to learn linear operators mapping a set of "users" to a set of possibly desired "objects". In particular, several recent low-rank type matrix-completion methods for CF are shown to be special cases of our proposed framework. Unlike existing regularization-based CF, our approach can be used to incorporate additional information such as attributes of the users/objects---a feature currently lacking in existing regularization-based CF approaches---using popular and well-known kernel methods. We provide novel representer theorems that we use to develop new estimation methods. We then provide learning algorithms based on low-rank decompositions and test them on a standard CF data set. The experiments indicate the advantages of generalizing the existing regularization-based CF methods to incorporate related information about users and objects. Finally, we show that certain multi-task learning methods can also be seen as special cases of our proposed approach.
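As a concrete picture of the "learning algorithms based on low-rank decompositions" mentioned above, here is a hedged sketch of regularized alternating least squares for a partially observed rating matrix. It is a generic low-rank collaborative-filtering baseline, not the spectral-regularization estimator from the paper; the rank, regularization weight, and iteration count are illustrative.

    import numpy as np

    def als_low_rank(R, mask, rank=10, reg=0.1, iters=20):
        """R: (n_users, n_items) ratings; mask: boolean array of observed
        entries. Fits R ≈ U @ V.T by alternating ridge-regularized
        least-squares updates of the user and item factors."""
        n_users, n_items = R.shape
        rng = np.random.default_rng(0)
        U = rng.normal(scale=0.1, size=(n_users, rank))
        V = rng.normal(scale=0.1, size=(n_items, rank))
        I = reg * np.eye(rank)
        for _ in range(iters):
            for u in range(n_users):
                idx = mask[u]
                if idx.any():
                    Vu = V[idx]
                    U[u] = np.linalg.solve(Vu.T @ Vu + I, Vu.T @ R[u, idx])
            for i in range(n_items):
                idx = mask[:, i]
                if idx.any():
                    Ui = U[idx]
                    V[i] = np.linalg.solve(Ui.T @ Ui + I, Ui.T @ R[idx, i])
        return U, V   # predicted ratings: U @ V.T

The framework in the abstract generalizes this kind of factorization by replacing the plain squared-norm penalty with spectral penalties and by letting user/object attributes enter through kernels.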
Abstract of query paper
Cite abstracts
975
974
Deep generative models have been successfully applied to many applications. However, existing works experience limitations when generating large images (the literature usually generates small images, e.g., 32 × 32 or 128 × 128). In this paper, we propose a novel scheme, called deep tensor adversarial generative nets (TGAN), that generates large high-quality images by exploring tensor structures. Essentially, the adversarial process of TGAN takes place in a tensor space. First, we impose tensor structures for concise image representation, which is superior to the vectorization preprocessing in existing works at capturing pixel proximity information and the spatial patterns of elementary objects in images. Secondly, we propose TGAN, which integrates deep convolutional generative adversarial networks and tensor super-resolution in a cascading manner, to generate high-quality images from random distributions. More specifically, we design a tensor super-resolution process that consists of tensor dictionary learning and tensor coefficient learning. Finally, on three datasets, the proposed TGAN generates images with more realistic textures, compared with state-of-the-art adversarial autoencoders. The size of the generated images is increased by over 8.5 times, namely 374 × 374 on PASCAL2.
Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256 × 256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-art methods on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions. In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.
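All of the generator-discriminator schemes above (DCGAN, StackGAN, and the query paper's TGAN) build on the standard adversarial objective, in which the discriminator D and the generator G play the minimax game

    \min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)]
                  + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

This is the generic GAN formulation rather than a formula quoted from either abstract; conditional variants such as StackGAN additionally feed a text embedding to both D and G, and TGAN replaces the vectorized image representation inside this game with a tensor-structured one.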
Abstract of query paper
Cite abstracts
976
975
Deep generative models have been successfully applied to many applications. However, existing works experience limitations when generating large images (the literature usually generates small images, e.g., 32 × 32 or 128 × 128). In this paper, we propose a novel scheme, called deep tensor adversarial generative nets (TGAN), that generates large high-quality images by exploring tensor structures. Essentially, the adversarial process of TGAN takes place in a tensor space. First, we impose tensor structures for concise image representation, which is superior to the vectorization preprocessing in existing works at capturing pixel proximity information and the spatial patterns of elementary objects in images. Secondly, we propose TGAN, which integrates deep convolutional generative adversarial networks and tensor super-resolution in a cascading manner, to generate high-quality images from random distributions. More specifically, we design a tensor super-resolution process that consists of tensor dictionary learning and tensor coefficient learning. Finally, on three datasets, the proposed TGAN generates images with more realistic textures, compared with state-of-the-art adversarial autoencoders. The size of the generated images is increased by over 8.5 times, namely 374 × 374 on PASCAL2.
In dynamic computed tomography (CT) reconstruction, the data acquisition speed limits the spatio-temporal resolution. Recently, compressed sensing theory has been instrumental in improving CT reconstruction from far fewer projection views. In this paper, we present an adaptive method to train a tensor-based spatio-temporal dictionary for sparse representation of an image sequence during the reconstruction process. The correlations among atoms and across phases are considered to capture the characteristics of an object. The reconstruction problem is solved by the alternating direction method of multipliers. To recover fine or sharp structures such as edges, the nonlocal total variation is incorporated into the algorithmic framework. Preclinical examples including a sheep lung perfusion study and a dynamic mouse cardiac imaging study demonstrate that the proposed approach outperforms the vectorized dictionary-based CT reconstruction in the case of few-view reconstruction. This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
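Both the tensor dictionary used in the dynamic-CT work and the decompositions surveyed above rest on a few basic multilinear operations: mode-n unfolding and mode-n products. The sketch below computes a truncated Tucker decomposition via the higher-order SVD (HOSVD) in plain NumPy; it is the textbook construction rather than code from either paper, and the function names and ranks are illustrative.

    import numpy as np

    def unfold(T, mode):
        """Mode-n unfolding: shape (I_mode, prod of the other dimensions)."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def mode_product(T, M, mode):
        """Multiply tensor T by matrix M (shape (J, I_mode)) along `mode`."""
        out = np.tensordot(M, T, axes=(1, mode))   # contracted axis goes first
        return np.moveaxis(out, 0, mode)

    def hosvd(T, ranks):
        """Truncated Tucker decomposition T ≈ core x_1 U_1 x_2 U_2 ... """
        factors = []
        for mode, r in enumerate(ranks):
            U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
            factors.append(U[:, :r])
        core = T
        for mode, U in enumerate(factors):
            core = mode_product(core, U.T, mode)
        return core, factors

    # Usage: compress a random 3-way array and reconstruct it.
    X = np.random.rand(8, 9, 10)
    core, Us = hosvd(X, ranks=(4, 4, 4))
    X_hat = core
    for mode, U in enumerate(Us):
        X_hat = mode_product(X_hat, U, mode)
    print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))

CP-style dictionary methods, such as the tensor dictionary learning step in TGAN's super-resolution stage, replace the orthogonal Tucker factors with sums of rank-one terms but reuse the same unfolding machinery.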
Abstract of query paper
Cite abstracts
977
976
We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possibly continuous sensitive attributes. We extend the framework of fair empirical risk minimization to this general scenario, covering in this way the whole standard supervised learning setting. Our generalized fairness measure reduces to well-known notions of fairness available in the literature. We derive learning guarantees for our method that imply, in particular, its statistical consistency, both in terms of the risk and the fairness measure. We then specialize our approach to kernel methods and propose a convex fair estimator in that setting. We test the estimator on a commonly used benchmark dataset (Communities and Crime) and on a new dataset collected at the University of Genova, containing information on the academic careers of five thousand students. The latter dataset provides a challenging real-world scenario of unfair behaviour by standard regression methods that benefits from our methodology. The experimental results show that our estimator is effective at mitigating the trade-off between accuracy and fairness requirements.
Automated data-driven decision making systems are increasingly being used to assist, or even replace, humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real-world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy. We address the problem of algorithmic fairness: ensuring that sensitive variables do not unfairly influence the outcome of a classifier. We present an approach based on empirical risk minimization, which incorporates a fairness constraint into the learning problem. It encourages the conditional risk of the learned classifier to be approximately constant with respect to the sensitive variable. We derive both risk and fairness bounds that support the statistical consistency of our approach. We specify our approach to kernel methods and observe that the fairness requirement implies an orthogonality constraint which can be easily added to these methods. We further observe that for linear models the constraint translates into a simple data preprocessing step. Experiments indicate that the method is empirically effective and performs favorably against state-of-the-art approaches. We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. We encourage readers to consult the more complete manuscript on arXiv.
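For linear models, the "simple data preprocessing step" mentioned above can be pictured as projecting the features onto the orthogonal complement of the difference between group-conditional feature means, so that any linear predictor trained afterwards has approximately equal average output across the two groups. This is an illustrative reading of that remark, under assumptions of my own (a single binary sensitive attribute, equal-mean-prediction fairness), not the papers' exact procedure.

    import numpy as np

    def fair_preprocess(X, groups):
        """X: (n, d) feature matrix; groups: (n,) array of 0/1 group labels.
        Removes the feature direction along which the two groups' means
        differ, so w.T @ x carries no average group signal for any w."""
        u = X[groups == 1].mean(axis=0) - X[groups == 0].mean(axis=0)
        norm = np.linalg.norm(u)
        if norm == 0:
            return X                      # groups already indistinguishable on average
        u = u / norm
        return X - np.outer(X @ u, u)     # project out the group-difference direction

Any downstream linear regression or classification is then run on the projected features; the nonlinear kernelized version enforces the analogous orthogonality constraint in feature space.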
Abstract of query paper
Cite abstracts
978
977
We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possibly continuous sensitive attributes. We extend the framework of fair empirical risk minimization to this general scenario, covering in this way the whole standard supervised learning setting. Our generalized fairness measure reduces to well-known notions of fairness available in the literature. We derive learning guarantees for our method that imply, in particular, its statistical consistency, both in terms of the risk and the fairness measure. We then specialize our approach to kernel methods and propose a convex fair estimator in that setting. We test the estimator on a commonly used benchmark dataset (Communities and Crime) and on a new dataset collected at the University of Genova, containing information on the academic careers of five thousand students. The latter dataset provides a challenging real-world scenario of unfair behaviour by standard regression methods that benefits from our methodology. The experimental results show that our estimator is effective at mitigating the trade-off between accuracy and fairness requirements.
Algorithmic decision making processes now affect many aspects of our lives. Standard tools for machine learning, such as classification and regression, are subject to bias in the data, and thus direct application of such off-the-shelf tools could lead to a specific group being unfairly discriminated against. Removing sensitive attributes from the data does not solve this problem, because a disparate impact can arise when non-sensitive attributes and sensitive attributes are correlated. Here, we study a fair machine learning algorithm that avoids such a disparate impact when making a decision. Inspired by the two-stage least squares method that is widely used in the field of economics, we propose a two-stage algorithm that removes bias in the training data. The proposed algorithm is conceptually simple. Unlike most existing fair algorithms, which are designed for classification tasks, the proposed method is able to (i) deal with regression tasks, (ii) combine explanatory attributes to remove reverse discrimination, and (iii) deal with numerical sensitive attributes. The performance and fairness of the proposed algorithm are evaluated in simulations with synthetic and real-world datasets.
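A minimal sketch of the two-stage idea described above, under the assumption that both stages are ordinary least squares: the first stage regresses every feature on the sensitive attribute and keeps only the residuals, and the second stage fits the target on the residualized features, so the final predictor cannot exploit variation in the features that is linearly explained by the sensitive attribute. Variable names and the lack of explanatory attributes are simplifications of mine.

    import numpy as np

    def two_stage_fair_regression(X, s, y):
        """X: (n, d) features; s: (n,) sensitive attribute (may be numerical);
        y: (n,) target. Returns second-stage weights and first-stage coefficients."""
        # Stage 1: regress each feature on [1, s] and keep the residuals.
        S = np.column_stack([np.ones(len(s)), s])
        B, *_ = np.linalg.lstsq(S, X, rcond=None)      # (2, d) coefficients
        X_res = X - S @ B                               # part of X uncorrelated with s
        # Stage 2: ordinary least squares of y on the residualized features.
        Xr = np.column_stack([np.ones(len(y)), X_res])
        w, *_ = np.linalg.lstsq(Xr, y, rcond=None)
        return w, B

At prediction time a new feature vector is residualized with the stored first-stage coefficients before the second-stage weights are applied, which is what keeps the sensitive attribute's linear footprint out of the decision.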
Abstract of query paper
Cite abstracts
979
978
We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possibly continuous sensitive attributes. We extend the framework of fair empirical risk minimization to this general scenario, covering in this way the whole standard supervised learning setting. Our generalized fairness measure reduces to well-known notions of fairness available in the literature. We derive learning guarantees for our method that imply, in particular, its statistical consistency, both in terms of the risk and the fairness measure. We then specialize our approach to kernel methods and propose a convex fair estimator in that setting. We test the estimator on a commonly used benchmark dataset (Communities and Crime) and on a new dataset collected at the University of Genova, containing information on the academic careers of five thousand students. The latter dataset provides a challenging real-world scenario of unfair behaviour by standard regression methods that benefits from our methodology. The experimental results show that our estimator is effective at mitigating the trade-off between accuracy and fairness requirements.
Fairness, through its many forms and definitions, has become an important issue facing the machine learning community. In this work, we consider how to incorporate group fairness constraints in kernel regression methods. More specifically, we focus on examining the incorporation of these constraints in decision tree regression when cast as a form of kernel regression, with direct applications to random forests and boosted trees amongst other widespread popular inference techniques. We show that the order of complexity of memory and computation is preserved for such models and bound the expected perturbation to the model in terms of the number of leaves of the trees. Importantly, the approach works on trained models and hence can be easily applied to models in current use. The potential lack of fairness in the outputs of machine learning algorithms has recently gained attention both within the research community as well as in society more broadly. Surprisingly, there is no prior work developing tree-induction algorithms for building fair decision trees or fair random forests. These methods have widespread popularity as they are one of the few to be simultaneously interpretable, non-linear, and easy-to-use. In this paper we develop, to our knowledge, the first technique for the induction of fair decision trees. We show that our "Fair Forest" retains the benefits of the tree-based approach, while providing both greater accuracy and fairness than other alternatives, for both "group fairness" and "individual fairness." We also introduce new measures for fairness which are able to handle multinomial and continuous attributes as well as regression problems, as opposed to binary attributes and labels only. Finally, we demonstrate a new, more robust evaluation procedure for algorithms that considers the dataset in its entirety rather than only a specific protected attribute. We introduce a flexible family of fairness regularizers for (linear and logistic) regression problems. These regularizers all enjoy convexity, permitting fast optimization, and they span the range from notions of group fairness to strong individual fairness. By varying the weight on the fairness regularizer, we can compute the efficient frontier of the accuracy-fairness trade-off on any given dataset, and we measure the severity of this trade-off via a numerical quantity we call the Price of Fairness (PoF). The centerpiece of our results is an extensive comparative study of the PoF across six different datasets in which fairness is a primary consideration. We study fairness in classification, where individuals are classified, e.g., admitted to a university, and the goal is to prevent discrimination against individuals based on their membership in some group, while maintaining utility for the classifier (the university). The main conceptual contribution of this paper is a framework for fair classification comprising (1) a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand; (2) an algorithm for maximizing utility subject to the fairness constraint, that similar individuals are treated similarly. We also present an adaptation of our approach to achieve the complementary goal of "fair affirmative action," which guarantees statistical parity (i.e., the demographics of the set of individuals receiving any classification are the same as the demographics of the underlying population), while treating similar individuals as similarly as possible.
Finally, we discuss the relationship of fairness to privacy: when fairness implies privacy, and how tools developed in the context of differential privacy may be applied to fairness. In data mining we often have to learn from biased data, because, for instance, data comes from different batches or there was a gender or racial bias in the collection of social data. In some applications it may be necessary to explicitly control this bias in the models we learn from the data. This paper is the first to study learning linear regression models under constraints that control the biasing effect of a given attribute such as gender or batch number. We show how propensity modeling can be used for factoring out the part of the bias that can be justified by externally provided explanatory attributes. Then we analytically derive linear models that minimize squared error while controlling the bias by imposing constraints on the mean outcome or residuals of the models. Experiments with discrimination-aware crime prediction and batch effect normalization tasks show that the proposed techniques are successful in controlling attribute effects in linear regression models.
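A hedged sketch of a convex fairness regularizer for linear regression in the spirit of the papers above: add to ridge regression a penalty on the squared covariance between the predictions and the sensitive attribute. The problem stays quadratic and has a closed-form solution; the particular penalty, the weights, and the single-attribute setting are illustrative choices of mine rather than the exact regularizers proposed in the cited work.

    import numpy as np

    def fair_ridge(X, y, s, lam=1.0, gamma=10.0):
        """Minimizes ||X w - y||^2 + lam ||w||^2 + gamma * cov(X w, s)^2.
        X: (n, d); y: (n,); s: (n,) sensitive attribute. Since
        cov(X w, s) = w @ c with c = X.T @ (s - s.mean()) / n, the
        objective is quadratic in w and solvable in one linear system."""
        n, d = X.shape
        c = X.T @ (s - s.mean()) / n
        A = X.T @ X + lam * np.eye(d) + gamma * np.outer(c, c)
        return np.linalg.solve(A, X.T @ y)

Sweeping gamma traces out the accuracy-fairness frontier discussed above: gamma = 0 recovers ordinary ridge regression, while large gamma forces the predictions to be nearly uncorrelated with the sensitive attribute.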
Abstract of query paper
Cite abstracts
980
979
We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possibly continuous sensitive attributes. We extend the framework of fair empirical risk minimization to this general scenario, covering in this way the whole standard supervised learning setting. Our generalized fairness measure reduces to well-known notions of fairness available in the literature. We derive learning guarantees for our method that imply, in particular, its statistical consistency, both in terms of the risk and the fairness measure. We then specialize our approach to kernel methods and propose a convex fair estimator in that setting. We test the estimator on a commonly used benchmark dataset (Communities and Crime) and on a new dataset collected at the University of Genova, containing information on the academic careers of five thousand students. The latter dataset provides a challenging real-world scenario of unfair behaviour by standard regression methods that benefits from our methodology. The experimental results show that our estimator is effective at mitigating the trade-off between accuracy and fairness requirements.
Algorithmic decision making processes now affect many aspects of our lives. Standard tools for machine learning, such as classification and regression, are subject to bias in the data, and thus direct application of such off-the-shelf tools could lead to a specific group being unfairly discriminated against. Removing sensitive attributes from the data does not solve this problem, because a disparate impact can arise when non-sensitive attributes and sensitive attributes are correlated. Here, we study a fair machine learning algorithm that avoids such a disparate impact when making a decision. Inspired by the two-stage least squares method that is widely used in the field of economics, we propose a two-stage algorithm that removes bias in the training data. The proposed algorithm is conceptually simple. Unlike most existing fair algorithms, which are designed for classification tasks, the proposed method is able to (i) deal with regression tasks, (ii) combine explanatory attributes to remove reverse discrimination, and (iii) deal with numerical sensitive attributes. The performance and fairness of the proposed algorithm are evaluated in simulations with synthetic and real-world datasets. Automated data-driven decision making systems are increasingly being used to assist, or even replace, humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real-world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy. New social and economic activities massively exploit big data and machine learning algorithms to do inference on people's lives. Applications include automatic curricula evaluation, wage determination, and risk assessment for credits and loans. Recently, many governments and institutions have raised concerns about the lack of fairness, equity and ethics in machine learning to treat these problems. It has been shown that not including sensitive features that bias fairness, such as gender or race, is not enough to mitigate the discrimination when other related features are included. Instead, including fairness in the objective function has been shown to be more efficient.
Abstract of query paper
Cite abstracts
981
980
Recovering class inheritance from C++ binaries has several security applications, including decompilation and program hardening. Thanks to the optimization guidelines prescribed by the C++ standard, commercial C++ binaries tend to be optimized. While state-of-the-art class inheritance inference solutions are effective in dealing with unoptimized code, their efficacy is impeded by optimization. In particular, constructor inlining--or, worse, exclusion--due to optimization renders class inheritance recovery challenging. Further, while modern solutions such as MARX can successfully group classes within an inheritance sub-tree, they fail to establish the directionality of inheritance, which is crucial for security-related applications (e.g., decompilation). We implemented a prototype of DeClassifier using the Binary Analysis Platform (BAP) and evaluated DeClassifier against 16 binaries compiled using gcc under multiple optimization settings. We show that (1) DeClassifier can recover 94.5% and 71.4% of the directed edges in the class hierarchy tree as true positives under O0 and O2 optimization respectively, and (2) a combination of ctor+dtor analysis provides much better inference than ctor-only analysis.
Object-oriented programming complicates the already difficult task of reverse engineering software, and is being used increasingly by malware authors. Unlike traditional procedural-style code, reverse engineers must understand the complex interactions between object-oriented methods and the shared data structures on which they operate, a tedious manual process. In this paper, we present a static approach that uses symbolic execution and inter-procedural data flow analysis to discover object instances, data members, and methods of a common class. The key idea behind our work is to track the propagation and usage of a unique object instance reference, called a this pointer. Our goal is to help malware reverse engineers to understand how classes are laid out and to identify their methods. We have implemented our approach in a tool called ObjDigger, which produced encouraging results when validated on real-world malware samples.
Abstract of query paper
Cite abstracts
982
981
Recovering class inheritance from C++ binaries has several security benefits for problems such as decompilation and program hardening. Thanks to the optimization guidelines prescribed by the C++ standard, commercial C++ binaries tend to be optimized. While state-of-the-art class inheritance inference solutions are effective in dealing with unoptimized code, their efficacy is impeded by optimization. In particular, constructor inlining--or worse, exclusion--due to optimization renders class inheritance recovery challenging. Further, while modern solutions such as MARX can successfully group classes within an inheritance sub-tree, they fail to establish directionality of inheritance, which is crucial for security-related applications (e.g. decompilation). We implemented a prototype of our approach, DeClassifier, using the Binary Analysis Platform (BAP) and evaluated DeClassifier against 16 binaries compiled using gcc under multiple optimization settings. We show that (1) DeClassifier can recover 94.5% and 71.4% true positive directed edges in the class hierarchy tree under O0 and O2 optimizations respectively, and (2) a combination of ctor+dtor analysis provides much better inference than ctor-only analysis.
This paper presents a method for automatic reconstruction of polymorphic class hierarchies from the assembly code obtained by compiling a C++ program. If the program is compiled with run-time type information (RTTI), the class hierarchy is reconstructed via analysis of RTTI structures. If RTTI structures are missing from the assembly, a technique based on the analysis of virtual function tables, constructors and destructors is used. A tool for automatic reconstruction of polymorphic class hierarchies that implements the described technique is presented. This tool is implemented as a plug-in for the IDA Pro Interactive Disassembler. An experimental study of the tool is provided.
Abstract of query paper
Cite abstracts
983
982
Recovering class inheritance from C++ binaries has several security benefits for problems such as decompilation and program hardening. Thanks to the optimization guidelines prescribed by the C++ standard, commercial C++ binaries tend to be optimized. While state-of-the-art class inheritance inference solutions are effective in dealing with unoptimized code, their efficacy is impeded by optimization. In particular, constructor inlining--or worse, exclusion--due to optimization renders class inheritance recovery challenging. Further, while modern solutions such as MARX can successfully group classes within an inheritance sub-tree, they fail to establish directionality of inheritance, which is crucial for security-related applications (e.g. decompilation). We implemented a prototype of our approach, DeClassifier, using the Binary Analysis Platform (BAP) and evaluated DeClassifier against 16 binaries compiled using gcc under multiple optimization settings. We show that (1) DeClassifier can recover 94.5% and 71.4% true positive directed edges in the class hierarchy tree under O0 and O2 optimizations respectively, and (2) a combination of ctor+dtor analysis provides much better inference than ctor-only analysis.
With only the binary executable of a program, it is useful to discover the program's data structures and infer their syntactic and semantic definitions. Such knowledge is highly valuable in a variety of security and forensic applications. Although there exist efforts in program data structure inference, the existing solutions are not suitable for our targeted application scenarios. In this paper, we propose a reverse engineering technique to automatically reveal program data structures from binaries. Our technique, called REWARDS, is based on dynamic analysis. More specifically, each memory location accessed by the program is tagged with a timestamped type attribute. Following the program's runtime data flow, this attribute is propagated to other memory locations and registers that share the same type. During the propagation, a variable's type gets resolved if it is involved in a type-revealing execution point or type sink. More importantly, besides the forward type propagation, REWARDS involves a backward type resolution procedure where the types of some previously accessed variables get recursively resolved starting from a type sink. This procedure is constrained by the timestamps of relevant memory locations to disambiguate variables re-using the same memory location. In addition, REWARDS is able to reconstruct in-memory data structure layout based on the type information derived. We demonstrate that REWARDS provides unique benefits to two applications: memory image forensics and binary fuzzing for vulnerability discovery. A recurring problem in security is reverse engineering binary code to recover high-level language data abstractions and types. High-level programming languages have data abstractions such as buffers, structures, and local variables that all help programmers and program analyses reason about programs in a scalable manner. During compilation, these abstractions are removed as code is translated down to operations on registers and one globally addressed memory region. Reverse engineering consists of “undoing” the compilation to recover high-level information so that programmers, security professionals, and analyses can all more easily reason about the binary code. In this paper we develop novel techniques for reverse engineering data type abstractions from binary programs. At the heart of our approach is a novel type reconstruction system based upon binary code analysis. Our techniques and system can be applied as part of both static or dynamic analysis, thus are extensible to a large number of security settings. Our results on 87 programs show that TIE is both more accurate and more precise at recovering high-level types than existing mechanisms. Because writing computer programs is hard, computer programmers are taught to use encapsulation and modularity to hide complexity and reduce the potential for errors. Their programs will have a high-level, hierarchical structure that reflects their choice of internal abstractions. We designed and forged a system, Laika, that detects this structure in memory using Bayesian unsupervised learning. Because almost all programs use data structures, their memory images consist of many copies of a relatively small number of templates. Given a memory image, Laika can find both the data structures and their instantiations. We then used Laika to detect three common polymorphic botnets by comparing their data structures. Because it avoids their code polymorphism entirely, Laika is extremely accurate. 
Finally, we argue that writing a data structure polymorphic virus is likely to be considerably harder than writing a code polymorphic virus.
Abstract of query paper
Cite abstracts
984
983
Recovering class inheritance from C++ binaries has several security benefits for problems such as decompilation and program hardening. Thanks to the optimization guidelines prescribed by the C++ standard, commercial C++ binaries tend to be optimized. While state-of-the-art class inheritance inference solutions are effective in dealing with unoptimized code, their efficacy is impeded by optimization. In particular, constructor inlining--or worse, exclusion--due to optimization renders class inheritance recovery challenging. Further, while modern solutions such as MARX can successfully group classes within an inheritance sub-tree, they fail to establish directionality of inheritance, which is crucial for security-related applications (e.g. decompilation). We implemented a prototype of our approach, DeClassifier, using the Binary Analysis Platform (BAP) and evaluated DeClassifier against 16 binaries compiled using gcc under multiple optimization settings. We show that (1) DeClassifier can recover 94.5% and 71.4% true positive directed edges in the class hierarchy tree under O0 and O2 optimizations respectively, and (2) a combination of ctor+dtor analysis provides much better inference than ctor-only analysis.
High-level C++ source code abstractions such as classes and methods greatly assist human analysts and automated algorithms alike when analyzing C++ programs. Unfortunately, these abstractions are lost when compiling C++ source code, which impedes the understanding of C++ executables. In this paper, we propose a system, OOAnalyzer, that uses an innovative new design to statically recover detailed C++ abstractions from executables in a scalable manner. OOAnalyzer's design is motivated by the observation that many human analysts reason about C++ programs by recognizing simple patterns in binary code and then combining these findings using logical inference, domain knowledge, and intuition. We codify this approach by combining a lightweight symbolic analysis with a flexible Prolog-based reasoning system. Unlike most existing work, OOAnalyzer is able to recover both polymorphic and non-polymorphic C++ classes. We show in our evaluation that OOAnalyzer assigns over 78% of methods to the correct class on our test corpus, which includes both malware and real-world software such as Firefox and MySQL. These recovered abstractions can help analysts understand the behavior of C++ malware and cleanware, and can also improve the precision of program analyses on C++ executables.
Abstract of query paper
Cite abstracts
985
984
Differential privacy mechanisms that also make reconstruction of the data impossible come at a cost - a decrease in utility. In this paper, we tackle this problem by designing a private data release mechanism that makes reconstruction of the original data impossible and also preserves utility for a wide range of machine learning algorithms. We do so by combining the Johnson-Lindenstrauss (JL) transform with noise generated from a Laplace distribution. While the JL transform can itself provide privacy guarantees (Blocki et al., 2012) and make reconstruction impossible, we do not rely on its differential privacy properties and only utilize its ability to make reconstruction impossible. We present novel proofs to show that our mechanism is differentially private under single element changes as well as single row changes to any database. In order to show utility, we prove that our mechanism maintains pairwise distances between points in expectation and also show that its variance is proportional to the dimensionality of the subspace we project the data into. Finally, we experimentally show the utility of our mechanism by deploying it on the task of clustering.
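A minimal numpy sketch of the mechanism outlined above: project each record with a random Johnson-Lindenstrauss matrix, then add Laplace noise to every projected coordinate. The projection dimension and noise scale below are placeholders; the paper's calibration of the noise to its privacy guarantee is not reproduced here.

```python
# Minimal sketch (illustrative only): JL projection followed by Laplace noise.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 100, 20            # records, original dim, projected dim
X = rng.normal(size=(n, d))       # private data matrix (one row per record)

# Johnson-Lindenstrauss projection: entries ~ N(0, 1/k) preserve pairwise
# distances in expectation while making exact reconstruction of X impossible.
P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
Y = X @ P

# Add Laplace noise; the scale b would be calibrated to the desired privacy level.
b = 0.5                            # placeholder noise scale
Y_private = Y + rng.laplace(0.0, b, size=Y.shape)

# Utility check: pairwise distances are roughly preserved on average.
i, j = 0, 1
print(np.linalg.norm(X[i] - X[j]), np.linalg.norm(Y_private[i] - Y_private[j]))
```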
The problem of privacy-preserving data analysis has a long history spanning multiple disciplines. As electronic data about individuals becomes increasingly detailed, and as technology enables ever more powerful collection and curation of these data, the need increases for a robust, meaningful, and mathematically rigorous definition of privacy, together with a computationally rich class of algorithms that satisfy this definition. Differential Privacy is such a definition. After motivating and discussing the meaning of differential privacy, the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example. A key point is that, by rethinking the computational goal, one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation. Despite some astonishingly powerful computational results, there are still fundamental limitations — not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy. Virtually all the algorithms discussed herein maintain differential privacy against adversaries of arbitrary computational power. Certain algorithms are computationally intensive, others are efficient. Computational complexity for the adversary and the algorithm are both discussed. We then turn from fundamentals to applications other than query-release, discussing differentially private methods for mechanism design and machine learning. The vast majority of the literature on differentially private algorithms considers a single, static, database that is subject to many analyses. Differential privacy in other models, including distributed databases and computations on data streams is discussed. Finally, we note that this work is meant as a thorough introduction to the problems and techniques of differential privacy, but is not intended to be an exhaustive survey — there is by now a vast amount of work in differential privacy, and we can cover only a small portion of it. Suppose that party A collects private information about its users, where each user's data is represented as a bit vector. Suppose that party B has a proprietary data mining algorithm that requires estimating the distance between users, such as clustering or nearest neighbors. We ask if it is possible for party A to publish some information about each user so that B can estimate the distance between users without being able to infer any private bit of a user. Our method involves projecting each user's representation into a random, lower-dimensional space via a sparse Johnson-Lindenstrauss transform and then adding Gaussian noise to each entry of the lower-dimensional representation. We show that the method preserves differential privacy---where the more privacy is desired, the larger the variance of the Gaussian noise. Further, we show how to approximate the true distances between users via only the lower-dimensional, perturbed data. Finally, we consider other perturbation methods such as randomized response and draw comparisons to sketch-based methods. While the goal of releasing user-specific data to third parties is more broad than preserving distances, this work shows that distance computations with privacy is an achievable goal.
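The "calibrate noise to the sensitivity of the function" idea from the cited work is worth writing down concretely. For a counting query, which changes by at most 1 when a single record changes, adding Laplace noise with scale 1/epsilon to the true answer yields epsilon-differential privacy. The query and the toy data below are invented for illustration.

```python
# Classic Laplace mechanism for a sensitivity-1 counting query (illustrative data).
import numpy as np

rng = np.random.default_rng(1)
ages = rng.integers(18, 90, size=10_000)      # toy database

def laplace_count(predicate_mask: np.ndarray, epsilon: float) -> float:
    """Release a noisy count: true count + Laplace(sensitivity / epsilon).

    A counting query changes by at most 1 when one record changes,
    so its sensitivity is 1."""
    true_count = int(predicate_mask.sum())
    noise = rng.laplace(0.0, 1.0 / epsilon)
    return true_count + noise

print(laplace_count(ages >= 65, epsilon=0.5))
```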
Abstract of query paper
Cite abstracts
986
985
Differential privacy mechanisms that also make reconstruction of the data impossible come at a cost - a decrease in utility. In this paper, we tackle this problem by designing a private data release mechanism that makes reconstruction of the original data impossible and also preserves utility for a wide range of machine learning algorithms. We do so by combining the Johnson-Lindenstrauss (JL) transform with noise generated from a Laplace distribution. While the JL transform can itself provide privacy guarantees (Blocki et al., 2012) and make reconstruction impossible, we do not rely on its differential privacy properties and only utilize its ability to make reconstruction impossible. We present novel proofs to show that our mechanism is differentially private under single element changes as well as single row changes to any database. In order to show utility, we prove that our mechanism maintains pairwise distances between points in expectation and also show that its variance is proportional to the dimensionality of the subspace we project the data into. Finally, we experimentally show the utility of our mechanism by deploying it on the task of clustering.
This work studies formal utility and privacy guarantees for a simple multiplicative database transformation, where the data are compressed by a random linear or affine transformation, reducing the number of data records substantially, while preserving the number of original input variables. We provide an analysis framework inspired by a recent concept known as differential privacy (Dwork 06). Our goal is to show that, despite the general difficulty of achieving the differential privacy guarantee, it is possible to publish synthetic data that are useful for a number of common statistical learning applications. This includes high dimensional sparse regression ( 07), principal component analysis (PCA), and other statistical measures ( 06) based on the covariance of the initial data. This paper proves that an "old dog", namely -- the classical Johnson-Lindenstrauss transform, "performs new tricks" -- it gives a novel way of preserving differential privacy. We show that if we take two databases, @math and @math , such that (i) @math is a rank-1 matrix of bounded norm and (ii) all singular values of @math and @math are sufficiently large, then multiplying either @math or @math with a vector of iid normal Gaussians yields two statistically close distributions in the sense of differential privacy. Furthermore, a small, deterministic and alteration of the input is enough to assert that all singular values of @math are large. We apply the Johnson-Lindenstrauss transform to the task of approximating cut-queries: the number of edges crossing a @math -cut in a graph. We show that the JL transform allows us to that preserves edge differential privacy (where two graphs are neighbors if they differ on a single edge) while adding only @math random noise to any given query (w.h.p). Comparing the additive noise of our algorithm to existing algorithms for answering cut-queries in a differentially private manner, we outperform all others on small cuts ( @math ). We also apply our technique to the task of estimating the variance of a given matrix in any given direction. The JL transform allows us to that preserves differential privacy w.r.t bounded changes (each row in the matrix can change by at most a norm-1 vector) while adding random noise of magnitude independent of the size of the matrix (w.h.p). In contrast, existing algorithms introduce an error which depends on the matrix dimensions.
Abstract of query paper
Cite abstracts
987
986
Rapid progress in genomics has enabled a thriving market for 'direct-to-consumer' genetic testing, whereby people have access to their genetic information without the involvement of a healthcare provider. Companies like 23andMe and AncestryDNA, which provide affordable health, genealogy, and ancestry reports, have already tested tens of millions of customers. At the same time, alas, far-right groups have also taken an interest in genetic testing, using them to attack minorities and prove their 'genetic purity.' However, the relation between genetic testing and online hate has not really been studied by the scientific community. To address this gap, we present a measurement study shedding light on how genetic testing is discussed on Web communities in Reddit and 4chan. We collect 1.3M comments posted over 27 months using a set of 280 keywords related to genetic testing. We then use Latent Dirichlet Allocation, Google's Perspective API, Perceptual Hashing, and word embeddings to identify trends, themes, and topics of discussion. Our analysis shows that genetic testing is discussed frequently on Reddit and 4chan, and often includes highly toxic language expressed through hateful, racist, and misogynistic comments. In particular, on 4chan's politically incorrect board (/pol/), content from genetic testing conversations involves several alt-right personalities and openly antisemitic memes. Finally, we find that genetic testing appears in a few unexpected contexts, and that users seem to build groups ranging from technology enthusiasts to communities using it to promote fringe political views.
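One component of the pipeline described above, topic extraction with Latent Dirichlet Allocation, can be sketched with scikit-learn. The handful of comments below are invented stand-ins for the Reddit/4chan corpus, and the preprocessing is far simpler than what a real measurement study would use.

```python
# Sketch of LDA topic extraction over a (tiny, invented) comment corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "got my ancestry results back, mostly northern european",
    "23andme says elevated risk for a heart condition",
    "dna test kits were on sale again this week",
    "my raw data upload shows interesting health markers",
]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-5:][::-1]           # five highest-weight terms
    print(f"topic {k}:", [terms[i] for i in top])
```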
Background Genetic testing for risk of hereditary cancer can help patients to make important decisions about prevention or early detection. US and UK studies show that people from ethnic minority groups are less likely to receive genetic testing. It is important to understand various groups’ awareness of genetic testing and its acceptability to avoid further disparities in health care. This review aims to identify and detail awareness, knowledge, perceptions, and attitudes towards genetic counselling testing for cancer risk prediction in ethnic minority groups. Background Aims: Direct-to-consumer (DTC) genetic tests have facilitated easy access to personal genetic information related to health and nutrition; however, con This study examines the way direct-to-consumer genetic testing (DTCGT) companies communicate privacy information and how consumers understand privacy implications of DTCGT. We first conducted an analysis of DTCGT websites to determine what information they provide regarding the treatment of consumer information and samples. 86 companies offered DTCGT services that could be purchased online from Canada. We then surveyed 415 consumers (180 had purchased, 235 considered but did not purchase DTCGT). While most websites had some privacy information, few provided sufficient information for consumers to make informed purchase decisions. Nearly half of participants reported reading the company’s privacy policy and many felt they received enough information about privacy implications, but their expectations were generally not consistent with company practices. The most common expectation was that the company would share results only with them and destroy their sample after testing. We discuss these issues regardin... Direct-to-consumer (DTC) genetic testing has attracted a great amount of attention from policy makers, the scientific community, professional groups, and the media. Although it is unclear what the public demand is for these services, there does appear to be public interest in personal genetic risk information. As a result, many commentators have raised a variety of social, ethical, and regulatory issues associated with this emerging industry, including privacy issues, ensuring that DTC companies provide accurate information about the risks and limitations of their services, the possible adverse impact of DTC genetic testing on healthcare systems, and concern about how individuals may interpret and react to genetic risk information. Concerns about genetic privacy affect individuals’ willingness to accept genetic testing in clinical care and to participate in genomics research. To learn what is already known about these views, we conducted a systematic review, which ultimately analyzed 53 studies involving the perspectives of 47,974 participants on real or hypothetical privacy issues related to human genetic data. Bibliographic databases included MEDLINE, Web of Knowledge, and Sociological Abstracts. Three investigators independently screened studies against predetermined criteria and assessed risk of bias. The picture of genetic privacy that emerges from this systematic literature review is complex and riddled with gaps. When asked specifically “are you worried about genetic privacy,” the general public, patients, and professionals frequently said yes. In many cases, however, that question was posed poorly or only in the most general terms. 
While many participants expressed concern that genomic and medical information would be revealed to others, respondents frequently seemed to conflate privacy, confidentiality, control, and security. People varied widely in how much control they wanted over the use of data. They were more concerned about use by employers, insurers, and the government than they were about researchers and commercial entities. In addition, people are often willing to give up some privacy to obtain other goods. Importantly, little attention was paid to understanding the factors–sociocultural, relational, and media—that influence people’s opinions and decisions. Future investigations should explore in greater depth which concerns about genetic privacy are most salient to people and the social forces and contexts that influence those perceptions. It is also critical to identify the social practices that will make the collection and use of these data more trustworthy for participants as well as to identify the circumstances that lead people to set aside worries and decide to participate in research. To describe consumers' perceptions of genetic counseling services in the context of direct-to-consumer personal genomic testing is the purpose of this research. Utilizing data from the Scripps Genomic Health Initiative, we assessed direct-to-consumer genomic test consumers' utilization and perceptions of genetic counseling services. At long-term follow-up, approximately 14 months post-testing, participants were asked to respond to several items gauging their interactions, if any, with a Navigenics genetic counselor, and their perceptions of those interactions. Out of 1325 individuals who completed long-term follow-up, 187 (14.1 ) indicated that they had spoken with a genetic counselor. The most commonly given reason for not utilizing the counseling service was a lack of need due to the perception of already understanding one's results (55.6 ). The most common reasons for utilizing the service included wanting to take advantage of a free service (43.9 ) and wanting more information on risk calculations (42.2 ). Among those who utilized the service, a large fraction reported that counseling improved their understanding of their results (54.5 ) and genetics in general (43.9 ). A relatively small proportion of participants utilized genetic counseling after direct-to-consumer personal genomic testing. Among those individuals who did utilize the service, however, a large fraction perceived it to be informative, and thus presumably beneficial.
Abstract of query paper
Cite abstracts
988
987
We propose Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, that provides sharper privacy guarantees in difficult scenarios, such as deep learning. We also derive a general privacy accounting method for iterative learning algorithms under Bayesian differential privacy and show that it is a generalisation of the well-known moments accountant. Our experiments demonstrate significant advantage over the state-of-the-art differential privacy bounds for deep learning models on classic supervised learning tasks, bringing the privacy budget from 8 down to 0.5 in some cases. Lower amounts of injected noise also benefit the model accuracy and the speed of learning. Additionally, we demonstrate applicability of Bayesian differential privacy to variational inference and achieve the state-of-the-art privacy-accuracy trade-off.
Machine-learning (ML) algorithms are increasingly utilized in privacy-sensitive applications such as predicting lifestyle choices, making medical diagnoses, and facial recognition. In a model inversion attack, recently introduced in a case study of linear classifiers in personalized medicine by , adversarial access to an ML model is abused to learn sensitive genomic information about individuals. Whether model inversion attacks apply to settings outside theirs, however, is unknown. We develop a new class of model inversion attack that exploits confidence values revealed along with predictions. Our new attacks are applicable in a variety of settings, and we explore two in depth: decision trees for lifestyle surveys as used on machine-learning-as-a-service systems and neural networks for facial recognition. In both cases confidence values are revealed to those with the ability to make prediction queries to models. We experimentally show attacks that are able to estimate whether a respondent in a lifestyle survey admitted to cheating on their significant other and, in the other context, show how to recover recognizable images of people's faces given only their name and access to the ML model. We also initiate experimental exploration of natural countermeasures, investigating a privacy-aware decision tree training algorithm that is a simple variant of CART learning, as well as revealing only rounded confidence values. The lesson that emerges is that one can avoid these kinds of MI attacks with negligible degradation to utility. We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.
Abstract of query paper
Cite abstracts
989
988
We propose Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, that provides sharper privacy guarantees in difficult scenarios, such as deep learning. We also derive a general privacy accounting method for iterative learning algorithms under Bayesian differential privacy and show that it is a generalisation of the well-known moments accountant. Our experiments demonstrate significant advantage over the state-of-the-art differential privacy bounds for deep learning models on classic supervised learning tasks, bringing the privacy budget from 8 down to 0.5 in some cases. Lower amounts of injected noise also benefit the model accuracy and the speed of learning. Additionally, we demonstrate applicability of Bayesian differential privacy to variational inference and achieve the state-of-the-art privacy-accuracy trade-off.
We examine the information-theoretic foundations of the increasingly popular notion of differential privacy. We establish a connection between differential private mechanisms and the rate-distortion framework. Additionally, we also show how differentially private distributions arise out of the application of the Maximum Entropy Principle. This helps us locate differential privacy within the wider framework of information-theory and helps formalize some intuitive aspects of our understanding of differential privacy. We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ i g(x i ), where x i denotes the ith row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive. We propose truncated concentrated differential privacy (tCDP), a refinement of differential privacy and of concentrated differential privacy. This new definition provides robust and efficient composition guarantees, supports powerful algorithmic techniques such as privacy amplification via sub-sampling, and enables more accurate statistical analyses. In particular, we show a central task for which the new definition enables exponential accuracy improvement. "Concentrated differential privacy" was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Renyi divergence between the distributions obtained by running an algorithm on neighboring inputs. With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions. We also unify this approach with approximate differential privacy by giving an appropriate definition of "approximate concentrated differential privacy". This paper investigates the relation between three different notions of privacy: identifiability, differential privacy, and mutual-information privacy. Under a unified privacy-distortion framework, where the distortion is defined to be the expected Hamming distance between the input and output databases, we establish some fundamental connections between these three privacy notions. 
Given a maximum allowable distortion @math , we define the privacy-distortion functions @math , @math , and @math to be the smallest (most private best) identifiability level, differential privacy level, and mutual information between the input and the output, respectively. We characterize @math and @math , and prove that @math for @math within certain range, where @math is a constant determined by the prior distribution of the original database @math , and diminishes to zero when @math is uniformly distributed. Furthermore, we show that @math and @math can be achieved by the same mechanism for @math within certain range, i.e., there is a mechanism that simultaneously minimizes the identifiability level and achieves the best mutual-information privacy. Based on these two connections, we prove that this mutual-information optimal mechanism satisfies @math -differential privacy with @math . The results in this paper reveal some consistency between two worst case notions of privacy, namely, identifiability and differential privacy, and an average notion of privacy, mutual-information privacy. We introduce Concentrated Differential Privacy, a relaxation of Differential Privacy enjoying better accuracy than both pure differential privacy and its popular "(epsilon,delta)" relaxation without compromising on cumulative privacy loss over multiple computations.
Abstract of query paper
Cite abstracts
990
989
We propose Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, that provides sharper privacy guarantees in difficult scenarios, such as deep learning. We also derive a general privacy accounting method for iterative learning algorithms under Bayesian differential privacy and show that it is a generalisation of the well-known moments accountant. Our experiments demonstrate significant advantage over the state-of-the-art differential privacy bounds for deep learning models on classic supervised learning tasks, bringing the privacy budget from 8 down to 0.5 in some cases. Lower amounts of injected noise also benefit the model accuracy and the speed of learning. Additionally, we demonstrate applicability of Bayesian differential privacy to variational inference and achieve the state-of-the-art privacy-accuracy trade-off.
We examine the information-theoretic foundations of the increasingly popular notion of differential privacy. We establish a connection between differential private mechanisms and the rate-distortion framework. Additionally, we also show how differentially private distributions arise out of the application of the Maximum Entropy Principle. This helps us locate differential privacy within the wider framework of information-theory and helps formalize some intuitive aspects of our understanding of differential privacy.
Abstract of query paper
Cite abstracts
991
990
We propose Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, that provides sharper privacy guarantees in difficult scenarios, such as deep learning. We also derive a general privacy accounting method for iterative learning algorithms under Bayesian differential privacy and show that it is a generalisation of the well-known moments accountant. Our experiments demonstrate significant advantage over the state-of-the-art differential privacy bounds for deep learning models on classic supervised learning tasks, bringing the privacy budget from 8 down to 0.5 in some cases. Lower amounts of injected noise also benefit the model accuracy and the speed of learning. Additionally, we demonstrate applicability of Bayesian differential privacy to variational inference and achieve the state-of-the-art privacy-accuracy trade-off.
With the newly proposed privacy definition of Renyi Differential Privacy (RDP) in (Mironov, 2017), we re-examine the inherent privacy of releasing a single sample from a posterior distribution. We exploit the impact of the prior distribution in mitigating the influence of individual data points. In particular, we focus on sampling from an exponential family and specific generalized linear models, such as logistic regression. We propose novel RDP mechanisms as well as offering a new RDP analysis for an existing method in order to add value to the RDP framework. Each method is capable of achieving arbitrary RDP privacy guarantees, and we offer experimental results of their efficacy.
Abstract of query paper
Cite abstracts
992
991
We propose Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, that provides sharper privacy guarantees in difficult scenarios, such as deep learning. We also derive a general privacy accounting method for iterative learning algorithms under Bayesian differential privacy and show that it is a generalisation of the well-known moments accountant. Our experiments demonstrate significant advantage over the state-of-the-art differential privacy bounds for deep learning models on classic supervised learning tasks, bringing the privacy budget from 8 down to 0.5 in some cases. Lower amounts of injected noise also benefit the model accuracy and the speed of learning. Additionally, we demonstrate applicability of Bayesian differential privacy to variational inference and achieve the state-of-the-art privacy-accuracy trade-off.
We consider the problem of Bayesian learning on sensitive datasets and present two simple but somewhat surprising results that connect Bayesian learning to "differential privacy", a cryptographic approach to protect individual-level privacy while permitting database-level utility. Specifically, we show that under standard assumptions, getting one sample from a posterior distribution is differentially private "for free"; and this sample as a statistical estimator is often consistent, near optimal, and computationally tractable. Similarly but separately, we show that a recent line of work that uses stochastic gradient for Hybrid Monte Carlo (HMC) sampling also preserves differential privacy with minor or no modifications of the algorithmic procedure at all; these observations lead to an "anytime" algorithm for Bayesian learning under a privacy constraint. We demonstrate that it performs much better than the state-of-the-art differentially private methods on synthetic and real datasets. We examine the robustness and privacy of Bayesian inference, under assumptions on the prior, and with no modifications to the Bayesian framework. First, we generalise the concept of differential privacy to arbitrary dataset distances, outcome spaces and distribution families. We then prove bounds on the robustness of the posterior, introduce a posterior sampling mechanism, show that it is differentially private and provide finite sample bounds for distinguishability-based privacy under a strong adversarial model. Finally, we give examples satisfying our assumptions. The rapid adoption of machine learning has increased concerns about the privacy implications of machine learning models trained on sensitive data, such as medical records or other personal information. To address those concerns, one promising approach is Private Aggregation of Teacher Ensembles, or PATE, which transfers to a "student" model the knowledge of an ensemble of "teacher" models, with intuitive privacy provided by training teachers on disjoint data and strong privacy guaranteed by noisy aggregation of teachers' answers. However, PATE has so far been evaluated only on simple classification tasks like MNIST, leaving unclear its utility when applied to larger-scale learning tasks and real-world datasets. In this work, we show how PATE can scale to learning tasks with large numbers of output classes and uncurated, imbalanced training data with errors. For this, we introduce new noisy aggregation mechanisms for teacher ensembles that are more selective and add less noise, and prove their tighter differential-privacy guarantees. Our new mechanisms build on two insights: the chance of teacher consensus is increased by using more concentrated noise and, lacking consensus, no answer need be given to a student. The consensus answers used are more likely to be correct, offer better intuitive privacy, and incur lower differential-privacy cost. Our evaluation shows our mechanisms improve on the original PATE on all measures, and scale to larger tasks with both high utility and very strong privacy ( @math < 1.0). Many machine learning applications are based on data collected from people, such as their tastes and behaviour as well as biological traits and genetic data. Regardless of how important the application might be, one has to make sure individuals' identities or the privacy of the data are not compromised in the analysis.
Differential privacy constitutes a powerful framework that prevents breaching of data subject privacy from the output of a computation. Differentially private versions of many important Bayesian inference methods have been proposed, but there is a lack of an efficient unified approach applicable to arbitrary models. In this contribution, we propose a differentially private variational inference method with a very wide applicability. It is built on top of doubly stochastic variational inference, a recent advance which provides a variational solution to a large class of models. We add differential privacy into doubly stochastic variational inference by clipping and perturbing the gradients. The algorithm is made more efficient through privacy amplification from subsampling. We demonstrate the method can reach an accuracy close to non-private level under reasonably strong privacy guarantees, clearly improving over previous sampling-based alternatives especially in the strong privacy regime. Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality. Differential privacy formalises privacy-preserving mechanisms that provide access to a database. Can Bayesian inference be used directly to provide private access to data? The answer is yes: under certain conditions on the prior, sampling from the posterior distribution can lead to a desired level of privacy and utility. For a uniform treatment, we define differential privacy over arbitrary data set metrics, outcome spaces and distribution families. This allows us to also deal with non-i.i.d or non-tabular data sets. We then prove bounds on the sensitivity of the posterior to the data, which delivers a measure of robustness. We also show how to use posterior sampling to provide differentially private responses to queries, within a decision-theoretic framework. Finally, we provide bounds on the utility of answers to queries and on the ability of an adversary to distinguish between data sets. The latter are complemented by a novel use of Le Cam's method to obtain lower bounds on distinguishability. Our results hold for arbitrary metrics, including those for the common definition of differential privacy. For specific choices of the metric, we give a number of examples satisfying our assumptions. Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. 
Because they rely directly on sensitive data, these models are not published, but instead used as "teachers" for a "student" model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.
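Several of the methods cited above share the same gradient-perturbation recipe: clip each per-example gradient to a norm bound C, average, and add Gaussian noise before taking a step. The sketch below shows a single such update on an invented least-squares problem; the constants are placeholders and the privacy accounting (moments accountant or otherwise) is not included.

```python
# Sketch of one DP-SGD-style update: per-example clipping + Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
n, d = 64, 10
X, y = rng.normal(size=(n, d)), rng.normal(size=n)
w = np.zeros(d)

C, sigma, lr = 1.0, 1.0, 0.1            # clip norm, noise multiplier, step size

# Per-example gradients of squared error: g_i = 2 * (x_i.w - y_i) * x_i
per_example_grads = 2.0 * (X @ w - y)[:, None] * X

# Clip each gradient to norm at most C.
norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
clipped = per_example_grads / np.maximum(1.0, norms / C)

# Average, add Gaussian noise scaled to the clip bound, and step.
noisy_mean = clipped.mean(axis=0) + rng.normal(0.0, sigma * C / n, size=d)
w -= lr * noisy_mean
print(w)
```

The same clip-and-noise step appears, with different accounting, in DP-SGD, the differentially private variational inference described above, and the Bayesian differential privacy framework of the query paper.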
Abstract of query paper
Cite abstracts
993
992
We study the problem of identifying different behaviors occurring in different parts of a large heterogeneous network. We zoom in to the network using lenses of different sizes to capture the local structure of the network. These network signatures are then weighted to provide a set of predicted labels for every node. We achieve a peak accuracy of @math (random= @math ) on two networks with @math and @math nodes each. Further, we perform better than random even when the given node is connected to up to 5 different types of networks. Finally, we perform this analysis on homogeneous networks and show that highly structured networks have high homogeneity.
We propose a novel subgraph image representation for classification of network fragments with the target being their parent networks. The graph image representation is based on 2D image embeddings of adjacency matrices. We use this image representation in two modes. First, as the input to a machine learning algorithm. Second, as the input to a pure transfer learner. Our conclusions from multiple datasets are that 1. deep learning using structured image features performs the best compared to graph kernel and classical feature-based methods; and 2. pure transfer learning works effectively with minimum interference from the user and is robust against small data. We study a natural problem: Given a small piece of a large parent network, is it possible to identify the parent network? We approach this problem from two perspectives. First, using several "sophisticated" or "classical" network features that have been developed over decades of social network study. These features measure aggregate properties of the network and have been found to take on distinctive values for different types of network, at the large scale. By using these classical features within a standard machine learning framework, we show that one can identify large parent networks from small (even 8-node) subgraphs. Second, we present a novel adjacency matrix embedding technique which converts the small piece of the network into an image and, within a deep learning framework, we are able to obtain prediction accuracies upward of 80%, which is comparable to or slightly better than the performance from classical features. Our approach provides a new tool for topology-based prediction which may be of interest in other network settings. Our approach is plug and play, and can be used by non-domain experts. It is an appealing alternative to the often arduous task of creating domain-specific features using domain expertise. Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU ( @math 2.5 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.
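The "adjacency matrix as image" idea can be illustrated in a few lines with networkx: sample a small fragment of a parent network, order its nodes (here simply by degree, one of several plausible orderings), and treat the reordered adjacency matrix as a one-channel image for a downstream classifier. The graph model, fragment size, and ordering below are assumptions made for the sketch, not the cited papers' exact procedure.

```python
# Sketch: turn a sampled subgraph into a small grayscale "image" (adjacency matrix).
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(1000, 3, seed=0)        # stand-in parent network

# Sample an 8-node connected fragment via breadth-first search from a random node.
rng = np.random.default_rng(0)
start = int(rng.integers(G.number_of_nodes()))
nodes = list(nx.bfs_tree(G, start))[:8]
sub = G.subgraph(nodes)

# Order nodes (here: by degree) so the matrix forms a roughly canonical image.
order = sorted(sub.nodes(), key=lambda v: sub.degree(v), reverse=True)
img = nx.to_numpy_array(sub, nodelist=order)          # 8x8 binary image

print(img)   # this array would be fed to a CNN as a one-channel image
```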
Abstract of query paper
Cite abstracts
994
993
Sensors are routinely mounted on robots to acquire various forms of measurements in spatiotemporal fields. Locating features within these fields and reconstruction (mapping) of the dense fields can be challenging in resource-constrained situations, such as when trying to locate the source of a gas leak from a small number of measurements. In such cases, a model of the underlying complex dynamics can be exploited to discover informative paths within the field. We use a fluid simulator as a model to guide inference for the location of a gas leak. We perform localization via minimization of the discrepancy between observed measurements and gas concentrations predicted by the simulator. Our method is able to account for dynamically varying parameters of wind flow (e.g., direction and strength) and its effects on the observed distribution of gas. We develop algorithms for offline inference as well as for online path discovery via active sensing. We demonstrate the efficiency, accuracy, and versatility of our algorithm using experiments with a physical robot conducted in outdoor environments. We deploy an unmanned air vehicle mounted with a CO @math sensor to automatically seek out a gas cylinder emitting CO @math via a nozzle. We evaluate the accuracy of our algorithm by measuring the error in the inferred location of the nozzle, based on which we show that our proposed approach is competitive with respect to state-of-the-art baselines.
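The discrepancy-minimization idea in the abstract above can be sketched with a toy stand-in for the fluid simulator, here an isotropic Gaussian plume: score each candidate source location by the squared error between observed and predicted concentrations and keep the minimizer. The plume model, grid, and noise level are invented; the actual system uses a full fluid simulation with wind parameters and active path planning.

```python
# Toy sketch of source localization by discrepancy minimization.
# A Gaussian "plume" stands in for the fluid simulator used in the paper.
import numpy as np

def simulate(source: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Predicted concentration at each measurement point for a candidate source."""
    d2 = ((points - source) ** 2).sum(axis=1)
    return np.exp(-d2 / 2.0)

rng = np.random.default_rng(0)
true_source = np.array([3.2, 1.5])
points = rng.uniform(0, 5, size=(30, 2))                      # robot measurement locations
observed = simulate(true_source, points) + rng.normal(0, 0.02, 30)

# Grid search over candidate source locations, minimizing squared discrepancy.
xs = ys = np.linspace(0, 5, 101)
best, best_err = None, np.inf
for x in xs:
    for y in ys:
        cand = np.array([x, y])
        err = ((simulate(cand, points) - observed) ** 2).sum()
        if err < best_err:
            best, best_err = cand, err

print("estimated source:", best, "true source:", true_source)
```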
In this paper we introduce a statistical method to build two-dimensional gas distribution maps (Kernel DM+V/W algorithm). In addition to gas sensor measurements, the proposed method also takes into account wind information by modeling the information content of the gas sensor measurements as a bivariate Gaussian kernel whose shape depends on the measured wind vector. We evaluate the method based on real measurements in an outdoor environment obtained with a mobile robot that was equipped with gas sensors and an ultrasonic anemometer for wind measurements. As a measure of the model quality we compute how well unseen measurements are predicted in terms of the data likelihood. The initial results are encouraging and show a clear improvement of the proposed method compared to the case where wind is not considered. A Bakman Technologies PB7220-2000-T frequency-domain THz spectrometer is modified to allow it to operate autonomously while attached to a DJI-S1000 consumer drone. The THz spectrum of atmospheric water vapor is recorded during an 8 min flight 10 m above the ground at Paramount Ranch Park, Agoura Hills, CA, USA. The spectrum is then compared to theory and the relative humidity determined. In this paper, we consider the problem of learning a two dimensional spatial model of a gas distribution with a mobile robot. Building maps that can be used to accurately predict the gas concentration at query locations is a challenging task due to the chaotic nature of gas dispersal. We present an approach that formulates this task as a regression problem. To deal with the specific properties of typical gas distributions, we propose a sparse Gaussian process mixture model. This allows us to accurately represent the smooth background signal as well as areas of high concentration. We integrate the sparsification of the training data into an EM procedure used for learning the mixture components and the gating function. Our approach has been implemented and tested using datasets recorded with a real mobile robot equipped with an electronic nose. We demonstrate that our models are well suited for predicting gas concentrations at new query locations and that they outperform alternative methods used in robotics to carry out this task. Understanding atmospheric transport and dispersal events has an important role in a range of scenarios. Of particular importance is aiding in emergency response after an intentional or accidental chemical, biological or radiological (CBR) release. In the event of a CBR release, it is desirable to know the current and future spatial extent of the contaminant as well as its location in order to aid decision makers in emergency response. Many dispersion phenomena may be opaque or clear, thus monitoring them using visual methods will be difficult or impossible. In these scenarios, relevant concentration sensors are required to detect the substance where they can form a static network on the ground or be placed upon mobile platforms. This paper presents a review of techniques used to gain information about atmospheric dispersion events using static or mobile sensors. The review is concluded with a discussion on the current limitations of the state of the art and recommendations for future research.
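The wind-aware kernel averaging described in the first cited abstract can be sketched as follows: each measurement contributes to nearby grid cells through a bivariate Gaussian kernel stretched along the locally measured wind vector, and the map is the kernel-weighted average of concentrations. The kernel widths, grid, and data below are placeholders, and the full Kernel DM+V/W normalization and variance map are omitted.

```python
# Sketch of wind-stretched kernel averaging for a gas distribution map.
import numpy as np

def wind_kernel(offsets: np.ndarray, wind: np.ndarray,
                sigma_along: float = 1.0, sigma_across: float = 0.3) -> np.ndarray:
    """Bivariate Gaussian weight, elongated along the wind direction."""
    u = wind / (np.linalg.norm(wind) + 1e-9)          # unit vector along wind
    v = np.array([-u[1], u[0]])                        # perpendicular unit vector
    a = offsets @ u                                    # component along wind
    b = offsets @ v                                    # component across wind
    return np.exp(-0.5 * ((a / sigma_along) ** 2 + (b / sigma_across) ** 2))

rng = np.random.default_rng(0)
meas_xy = rng.uniform(0, 10, size=(50, 2))             # measurement locations
meas_c = rng.uniform(0, 1, size=50)                    # gas concentrations
meas_wind = np.tile(np.array([1.0, 0.2]), (50, 1))     # measured wind vectors

# Weighted-average map on a grid: sum_i w_i * c_i / sum_i w_i per cell.
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
cells = np.column_stack([gx.ravel(), gy.ravel()])
num = np.zeros(len(cells))
den = np.zeros(len(cells))
for p, c, w in zip(meas_xy, meas_c, meas_wind):
    wts = wind_kernel(cells - p, w)
    num += wts * c
    den += wts
gas_map = (num / np.maximum(den, 1e-9)).reshape(gx.shape)
print(gas_map.shape)
```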
Abstract of query paper
Cite abstracts
995
994
Sensors are routinely mounted on robots to acquire various forms of measurements in spatiotemporal fields. Locating features within these fields and reconstruction (mapping) of the dense fields can be challenging in resource-constrained situations, such as when trying to locate the source of a gas leak from a small number of measurements. In such cases, a model of the underlying complex dynamics can be exploited to discover informative paths within the field. We use a fluid simulator as a model to guide inference for the location of a gas leak. We perform localization via minimization of the discrepancy between observed measurements and gas concentrations predicted by the simulator. Our method is able to account for dynamically varying parameters of wind flow (e.g., direction and strength) and its effects on the observed distribution of gas. We develop algorithms for offline inference as well as for online path discovery via active sensing. We demonstrate the efficiency, accuracy, and versatility of our algorithm using experiments with a physical robot conducted in outdoor environments. We deploy an unmanned air vehicle mounted with a CO @math sensor to automatically seek out a gas cylinder emitting CO @math via a nozzle. We evaluate the accuracy of our algorithm by measuring the error in the inferred location of the nozzle, based on which we show that our proposed approach is competitive with respect to state-of-the-art baselines.
Autonomous AI systems need complex computational techniques for planning and performing actions. Planning and acting require significant deliberation because an intelligent system must coordinate and integrate these activities in order to act effectively in the real world. This book presents a comprehensive paradigm of planning and acting using the most recent and advanced automated-planning techniques. It explains the computational deliberation capabilities that allow an actor, whether physical or virtual, to reason about its actions, choose them, organize them purposefully, and act deliberately to achieve an objective. Useful for students, practitioners, and researchers, this book covers state-of-the-art planning techniques, acting techniques, and their integration which will allow readers to design intelligent systems that are able to act effectively in the real world. Many modern robotics applications require robots to function autonomously in dynamic environments including other decision making agents, such as people or other robots. This calls for fast and scalable interactive motion planning. This requires models that take into consideration the other agent's intended actions in one's own planning. We present a real-time motion planning framework that brings together a few key components including intention inference by reasoning counterfactually about potential motion of the other agents as they work towards different goals. By using a light-weight motion model, we achieve efficient iterative planning for fluid motion when avoiding pedestrians, in parallel with goal inference for longer range movement prediction. This inference framework is coupled with a novel distributed visual tracking method that provides reliable and robust models for the current belief-state of the monitored environment. This combined approach represents a computationally efficient alternative to previously studied policy learning methods that often require significant offline training or calibration and do not yet scale to densely populated environments. We validate this framework with experiments involving multi-robot and human-robot navigation. We further validate the tracker component separately on much larger scale unconstrained pedestrian data sets. Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. 
In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
Abstract of query paper
Cite abstracts
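The query abstract in the record above frames leak localization as minimizing the discrepancy between observed concentrations and a simulator's predictions. As a rough illustration of that idea only, the minimal Python sketch below grid-searches candidate source locations against a toy isotropic forward model; the forward model, sensor layout, and parameters are hypothetical stand-ins, not the fluid simulator or the UAV setup of the cited work.

    # Hypothetical discrepancy-minimization sketch; forward_model is a toy
    # stand-in (isotropic decay with distance), not the paper's fluid simulator.
    import numpy as np

    def forward_model(source_xy, sensor_xy, strength=1.0, spread=5.0):
        # Toy concentration prediction at each sensor for a given source location.
        d2 = np.sum((sensor_xy - source_xy) ** 2, axis=1)
        return strength * np.exp(-d2 / (2.0 * spread ** 2))

    def localize(observations, sensor_xy, candidates):
        # Pick the candidate source that minimizes squared discrepancy to the data.
        errors = [np.sum((forward_model(c, sensor_xy) - observations) ** 2) for c in candidates]
        return candidates[int(np.argmin(errors))]

    # Usage: synthesize readings from a "true" source at (12, 7), then recover it.
    sensors = np.random.default_rng(0).uniform(0, 20, size=(30, 2))
    obs = forward_model(np.array([12.0, 7.0]), sensors)
    grid = np.array([[x, y] for x in np.linspace(0, 20, 41) for y in np.linspace(0, 20, 41)])
    print(localize(obs, sensors, grid))  # roughly [12. 7.]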
996
995
Sensors are routinely mounted on robots to acquire various forms of measurements in spatiotemporal fields. Locating features within these fields and reconstruction (mapping) of the dense fields can be challenging in resource-constrained situations, such as when trying to locate the source of a gas leak from a small number of measurements. In such cases, a model of the underlying complex dynamics can be exploited to discover informative paths within the field. We use a fluid simulator as a model to guide inference for the location of a gas leak. We perform localization via minimization of the discrepancy between observed measurements and gas concentrations predicted by the simulator. Our method is able to account for dynamically varying parameters of wind flow (e.g., direction and strength) and its effects on the observed distribution of gas. We develop algorithms for offline inference as well as for online path discovery via active sensing. We demonstrate the efficiency, accuracy, and versatility of our algorithm using experiments with a physical robot conducted in outdoor environments. We deploy an unmanned air vehicle mounted with a CO @math sensor to automatically seek out a gas cylinder emitting CO @math via a nozzle. We evaluate the accuracy of our algorithm by measuring the error in the inferred location of the nozzle, based on which we show that our proposed approach is competitive with respect to state-of-the-art baselines.
In fluid simulation, enforcing incompressibility is crucial for realism; it is also computationally expensive. Recent work has improved efficiency, but still requires time-steps that are impractical for real-time applications. In this work we present an iterative density solver integrated into the Position Based Dynamics framework (PBD). By formulating and solving a set of positional constraints that enforce constant density, our method allows similar incompressibility and convergence to modern smoothed particle hydro-dynamic (SPH) solvers, but inherits the stability of the geometric, position based dynamics method, allowing large time steps suitable for real-time applications. We incorporate an artificial pressure term that improves particle distribution, creates surface tension, and lowers the neighborhood requirements of traditional SPH. Finally, we address the issue of energy loss by applying vorticity confinement as a velocity post process. We present a unified dynamics framework for real-time visual effects. Using particles connected by constraints as our fundamental building block allows us to treat contact and collisions in a unified manner, and we show how this representation is flexible enough to model gases, liquids, deformable solids, rigid bodies and cloth with two-way interactions. We address some common problems with traditional particle-based methods and describe a parallel constraint solver based on position-based dynamics that is efficient enough for real-time applications. This work presents a simulation framework developed under the widely used Robot Operating System (ROS) to enable the validation of robotics systems and gas sensing algorithms under realistic enviro ...
Abstract of query paper
Cite abstracts
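The first cited abstract in this record enforces incompressibility by solving per-particle density constraints in a position-based way. The sketch below shows one Jacobi-style iteration of that constraint projection in 2-D; kernel normalization constants, the rest density, and the all-pairs neighbor search are deliberately simplified, so treat it as an illustration of the structure (density constraint, Lagrange multiplier, position correction), not as the authors' implementation.

    # One simplified iteration of a position-based density constraint (PBF-style).
    # Kernels are unnormalized and an all-pairs neighbor search is used for brevity.
    import numpy as np

    def W(r, h):
        # Poly6-style kernel value (normalization constant omitted).
        q = np.linalg.norm(r)
        return max(h * h - q * q, 0.0) ** 3

    def gradW(r, h):
        # Spiky-style kernel gradient (normalization constant omitted).
        q = np.linalg.norm(r)
        if q <= 1e-9 or q >= h:
            return np.zeros_like(r)
        return -3.0 * (h - q) ** 2 * r / q

    def density_constraint_step(p, h=1.0, rho0=1.0, eps=1e-4):
        # rho0 is arbitrary here; a real solver computes it from a rest configuration.
        n = len(p)
        rho = np.array([sum(W(p[i] - p[j], h) for j in range(n)) for i in range(n)])
        C = rho / rho0 - 1.0                      # constraint: relative density deviation
        lam = np.zeros(n)
        for i in range(n):
            grads = [gradW(p[i] - p[j], h) / rho0 for j in range(n) if j != i]
            grad_i = sum(grads)
            denom = grad_i @ grad_i + sum(g @ g for g in grads) + eps
            lam[i] = -C[i] / denom                # per-particle Lagrange multiplier
        dp = np.zeros_like(p)
        for i in range(n):
            for j in range(n):
                if i != j:
                    dp[i] += (lam[i] + lam[j]) * gradW(p[i] - p[j], h) / rho0
        return p + dp                             # corrected positions

    # Usage: a small blob of particles relaxes away from an over-dense configuration.
    pts = np.random.default_rng(0).uniform(0.0, 0.5, size=(16, 2))
    pts = density_constraint_step(pts)

In a full position-based dynamics loop this step would be applied a few times per frame between position prediction and the velocity update.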
997
996
Sensors are routinely mounted on robots to acquire various forms of measurements in spatiotemporal fields. Locating features within these fields and reconstruction (mapping) of the dense fields can be challenging in resource-constrained situations, such as when trying to locate the source of a gas leak from a small number of measurements. In such cases, a model of the underlying complex dynamics can be exploited to discover informative paths within the field. We use a fluid simulator as a model to guide inference for the location of a gas leak. We perform localization via minimization of the discrepancy between observed measurements and gas concentrations predicted by the simulator. Our method is able to account for dynamically varying parameters of wind flow (e.g., direction and strength) and its effects on the observed distribution of gas. We develop algorithms for offline inference as well as for online path discovery via active sensing. We demonstrate the efficiency, accuracy, and versatility of our algorithm using experiments with a physical robot conducted in outdoor environments. We deploy an unmanned air vehicle mounted with a CO @math sensor to automatically seek out a gas cylinder emitting CO @math via a nozzle. We evaluate the accuracy of our algorithm by measuring the error in the inferred location of the nozzle, based on which we show that our proposed approach is competitive with respect to state-of-the-art baselines.
In this paper, a fuzzy model is suggested for the prediction of wind speed and the produced electrical power at a wind park. The model is trained using a genetic algorithm-based learning scheme. The training set includes wind speed and direction data, measured at neighboring sites up to 30 km away from the wind turbine clusters. Extensive simulation results are shown for two application cases, providing wind speed forecasts from 30 min to 2 h ahead. It is demonstrated that the suggested model achieves an adequate understanding of the problem while it exhibits significant improvement compared to the persistent method. In this paper we present a simple and rapid implementation of a fluid dynamics solver for game engines. Our tools can greatly enhance games by providing realistic fluid-like effects such as swirling smoke past a moving character. The potential applications are endless. Our algorithms are based on the physical equations of fluid flow, namely the Navier-Stokes equations. These equations are notoriously hard to solve when strict physical accuracy is of prime importance. Our solvers on the other hand are geared towards visual quality. Our emphasis is on stability and speed, which means that our simulations can be advanced with arbitrary time steps. We also demonstrate that our solvers are easy to code by providing a complete C code implementation in this paper. Our algorithms run in real-time for reasonable grid sizes in both two and three dimensions on standard PC hardware, as demonstrated during the presentation of this paper at the conference.
Abstract of query paper
Cite abstracts
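The second cited abstract in this record (the stable-fluids-for-games solver) provides a complete C implementation; below is a minimal Python transcription of just its implicit diffusion step. It is vectorized, so the relaxation is Jacobi-like rather than the original in-place Gauss-Seidel loop, and boundary handling, advection, and pressure projection are omitted.

    # Implicit diffusion by iterative relaxation on an (N+2)x(N+2) grid with ghost
    # cells; boundary conditions, advection, and projection are not shown.
    import numpy as np

    def diffuse(x, x0, diff, dt, iters=20):
        N = x.shape[0] - 2
        a = dt * diff * N * N
        for _ in range(iters):
            x[1:-1, 1:-1] = (x0[1:-1, 1:-1]
                             + a * (x[:-2, 1:-1] + x[2:, 1:-1]
                                    + x[1:-1, :-2] + x[1:-1, 2:])) / (1 + 4 * a)
        return x

    # Usage: diffuse a single hot cell of "smoke" density for one time step.
    N = 64
    dens0 = np.zeros((N + 2, N + 2)); dens0[N // 2, N // 2] = 1.0
    dens = diffuse(np.zeros_like(dens0), dens0, diff=0.0001, dt=0.1)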
998
997
Sensors are routinely mounted on robots to acquire various forms of measurements in spatiotemporal fields. Locating features within these fields and reconstruction (mapping) of the dense fields can be challenging in resource-constrained situations, such as when trying to locate the source of a gas leak from a small number of measurements. In such cases, a model of the underlying complex dynamics can be exploited to discover informative paths within the field. We use a fluid simulator as a model to guide inference for the location of a gas leak. We perform localization via minimization of the discrepancy between observed measurements and gas concentrations predicted by the simulator. Our method is able to account for dynamically varying parameters of wind flow (e.g., direction and strength) and its effects on the observed distribution of gas. We develop algorithms for offline inference as well as for online path discovery via active sensing. We demonstrate the efficiency, accuracy, and versatility of our algorithm using experiments with a physical robot conducted in outdoor environments. We deploy an unmanned air vehicle mounted with a CO @math sensor to automatically seek out a gas cylinder emitting CO @math via a nozzle. We evaluate the accuracy of our algorithm by measuring the error in the inferred location of the nozzle, based on which we show that our proposed approach is competitive with respect to state-of-the-art baselines.
Gaining information about an unknown gas source is a task of great importance with applications in several areas including: responding to gas leaks or suspicious smells, quantifying sources of emissions, or in an emergency response to an industrial accident or act of terrorism. In this paper, a method to estimate the source term of a gaseous release using measurements of concentration obtained from an unmanned aerial vehicle (UAV) is described. The source term parameters estimated include the three dimensional location of the release, its emission rate, and other important variables needed to forecast the spread of the gas using an atmospheric transport and dispersion model. The parameters of the source are estimated by fusing concentration observations from a gas detector on-board the aircraft, with meteorological data and an appropriate model of dispersion. Two models are compared in this paper, both derived from analytical solutions to the advection diffusion equation. Bayes’ theorem, implemented using a sequential Monte Carlo algorithm, is used to estimate the source parameters in order to take into account the large uncertainties in the observations and formulated models. The system is verified with novel, outdoor, fully automated experiments, where observations from the UAV are used to estimate the parameters of a diffusive source. The estimation performance of the algorithm is assessed subject to various flight path configurations and wind speeds. Observations and lessons learned during these unique experiments are discussed and areas for future research are identified. Finding the location and strength of an unknown hazardous release is of paramount importance in emergency response and environmental monitoring, thus it has been an active research area for several years known as source term estimation. This paper presents a joint Bayesian estimation and planning algorithm to guide a mobile robot to collect informative measurements, allowing the source parameters to be estimated quickly and accurately. The estimation is performed recursively using Bayes’ theorem, where uncertainties in the meteorological and dispersion parameters are considered and the intermittent readings from a low-cost gas sensor are addressed by a novel likelihood function. The planning strategy is designed to maximize the expected utility function based on the estimated information gain of the source parameters. Subsequently, this paper presents the first experimental result of such a system in turbulent, diffusive conditions, in which a ground robot equipped with a low-cost gas sensor responds to the hazardous source stimulated by incense sticks. The experimental results demonstrate the effectiveness of the proposed estimation and search algorithm for source term estimation based on a mobile robot and a low-cost sensor. The design of multiple experiments is commonly undertaken via suboptimal strategies, such as batch (open-loop) design that omits feedback or greedy (myopic) design that does not account for future effects. This paper introduces new strategies for the optimal design of sequential experiments. First, we rigorously formulate the general sequential optimal experimental design (sOED) problem as a dynamic program. Batch and greedy designs are shown to result from special cases of this formulation. We then focus on sOED for parameter inference, adopting a Bayesian formulation with an information theoretic design objective. 
To make the problem tractable, we develop new numerical approaches for nonlinear design with continuous parameter, design, and observation spaces. We approximate the optimal policy by using backward induction with regression to construct and refine value function approximations in the dynamic program. The proposed algorithm iteratively generates trajectories via exploration and exploitation to improve approximation accuracy in frequently visited regions of the state space. Numerical results are verified against analytical solutions in a linear-Gaussian setting. Advantages over batch and greedy design are then demonstrated on a nonlinear source inversion problem where we seek an optimal policy for sequential sensing.
Abstract of query paper
Cite abstracts
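The cited abstracts in this record estimate source parameters (location, emission rate) with Bayes' theorem implemented via Monte Carlo. A stripped-down illustration of that recipe is sketched below using plain importance sampling and a toy isotropic plume likelihood; the real systems use atmospheric advection-diffusion models, sequential updating, and informative path planning, none of which is reproduced here.

    # Hypothetical Bayesian source-term estimation by importance sampling.
    # The plume() forward model is a crude isotropic stand-in, not the
    # advection-diffusion solutions used in the cited work.
    import numpy as np

    rng = np.random.default_rng(1)

    def plume(src_xy, rate, sensor_xy, spread=4.0):
        d2 = np.sum((sensor_xy - src_xy) ** 2, axis=1)
        return rate * np.exp(-d2 / (2.0 * spread ** 2))

    def estimate_source(obs, sensor_xy, n_particles=5000, noise_sd=0.05):
        # Priors: uniform location over [0, 20]^2, uniform emission rate in [0.1, 2].
        loc = rng.uniform(0, 20, size=(n_particles, 2))
        rate = rng.uniform(0.1, 2.0, size=n_particles)
        logw = np.array([-0.5 * np.sum((obs - plume(loc[i], rate[i], sensor_xy)) ** 2) / noise_sd ** 2
                         for i in range(n_particles)])
        w = np.exp(logw - logw.max()); w /= w.sum()              # normalized weights
        return (w[:, None] * loc).sum(axis=0), float(w @ rate)   # posterior means

    # Usage: noisy readings from a source at (5, 14) with rate 1.2, then recover it.
    sensors = rng.uniform(0, 20, size=(25, 2))
    obs = plume(np.array([5.0, 14.0]), 1.2, sensors) + rng.normal(0.0, 0.05, 25)
    print(estimate_source(obs, sensors))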
999
998
While the success of deep neural networks (DNNs) is well-established across a variety of domains, our ability to explain and interpret these methods is limited. Unlike previously proposed local methods which try to explain particular classification decisions, we focus on global interpretability and ask a universally applicable question: given a trained model, which features are the most important? In the context of neural networks, a feature is rarely important on its own, so our strategy is specifically designed to leverage partial covariance structures and incorporate variable dependence into feature ranking. Our methodological contributions in this paper are two-fold. First, we propose an effect size analogue for DNNs that is appropriate for applications with highly collinear predictors (ubiquitous in computer vision). Second, we extend the recently proposed "RelATive cEntrality" (RATE) measure (Crawford et al., 2019) to the Bayesian deep learning setting. RATE applies an information theoretic criterion to the posterior distribution of effect sizes to assess feature significance. We apply our framework to three broad application areas: computer vision, natural language processing, and social science.
Falling rule lists are classification models consisting of an ordered list of if-then rules, where (i) the order of rules determines which example should be classified by each rule, and (ii) the estimated probability of success decreases monotonically down the list. These kinds of rule lists are inspired by healthcare applications where patients would be stratified into risk sets and the highest at-risk patients should be considered first. We provide a Bayesian framework for learning falling rule lists that does not rely on traditional greedy decision tree learning methods. Deep neural networks have proved to be a very effective way to perform classification tasks. They excel when the input data is high dimensional, the relationship between the input and the output is complicated, and the number of labeled training examples is large. But it is hard to explain why a learned network makes a particular classification decision on a particular test case. This is due to their reliance on distributed hierarchical representations. If we could take the knowledge acquired by the neural net and express the same knowledge in a model that relies on hierarchical decisions instead, explaining a particular decision would be much easier. We describe a way of using a trained neural net to create a type of soft decision tree that generalizes better than one learned directly from the training data. Multiple genes, gene-by-gene interactions, and gene-by-environment interactions are believed to underlie most complex diseases. However, such interactions are difficult to identify. Although there have been recent successes in identifying genetic variants for complex diseases, it still remains difficult to identify gene–gene and gene–environment interactions. To overcome this difficulty, we propose a forest-based approach and a concept of variable importance. The proposed approach is demonstrated by simulation study for its validity and illustrated by a real data analysis for its use. Analyses of both real data and simulated data based on published genetic models show the effectiveness of our approach. For example, our analysis of a published data set on age-related macular degeneration (AMD) not only confirmed a known genetic variant (P value = 2E-6) for AMD, but also revealed an unreported haplotype surrounding single-nucleotide polymorphism (SNP) rs10272438 on chromosome 7 that was significantly associated with AMD (P value = 0.0024). These significance levels are obtained after the consideration for a large number of SNPs. Thus, the importance of this work is twofold: it proposes a powerful and flexible method to identify high-risk haplotypes and their interactions and reveals a potentially protective variant for AMD.
Abstract of query paper
Cite abstracts
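As a point of contrast with the RATE-based effect sizes in the query abstract of this record and the forest-based importance in the last cited abstract, a much simpler (and much cruder) baseline for the "which features matter" question is permutation importance on held-out data. The scikit-learn sketch below is such a baseline on synthetic data; it is not the RATE measure, and unlike RATE it does not account for the collinear-predictor problem the query abstract emphasizes.

    # Permutation-importance baseline for a trained network on synthetic data.
    # This is NOT RATE or the cited forest-based approach; it only illustrates
    # a simple way of ranking features by their effect on held-out accuracy.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=1000, n_features=10, n_informative=3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_tr, y_tr)
    result = permutation_importance(net, X_te, y_te, n_repeats=20, random_state=0)
    print("features ranked by importance:", np.argsort(result.importances_mean)[::-1])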
1000
999
Knowledge graphs are structured representations of real world facts. However, they typically contain only a small subset of all possible facts. Link prediction is a task of inferring missing facts based on existing ones. We propose TuckER, a relatively simple but powerful linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples. TuckER outperforms all previous state-of-the-art models across standard link prediction datasets. We prove that TuckER is a fully expressive model, deriving the bound on its entity and relation embedding dimensionality for full expressiveness which is several orders of magnitude smaller than the bound of previous state-of-the-art models ComplEx and SimplE. We further show that several previously introduced linear models can be viewed as special cases of TuckER.
Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.
Abstract of query paper
Cite abstracts
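To make the two scoring functions in this record concrete, the sketch below evaluates a RESCAL-style bilinear score and a TuckER-style Tucker-product score for a single (subject, relation, object) triple; embedding sizes and the random initialization are purely illustrative, and no training loop is shown.

    # Toy evaluation of the two link-prediction scores discussed above:
    #   RESCAL:  score(s, r, o) = e_s^T R_r e_o               (per-relation matrix R_r)
    #   TuckER:  score(s, r, o) = W x_1 e_s x_2 w_r x_3 e_o   (shared core tensor W)
    import numpy as np

    rng = np.random.default_rng(0)
    d_e, d_r = 8, 4                          # entity / relation embedding sizes (illustrative)
    e_s, e_o = rng.normal(size=d_e), rng.normal(size=d_e)
    R_r = rng.normal(size=(d_e, d_e))        # RESCAL relation matrix
    w_r = rng.normal(size=d_r)               # TuckER relation embedding
    W = rng.normal(size=(d_e, d_r, d_e))     # TuckER core tensor

    rescal_score = e_s @ R_r @ e_o
    tucker_score = np.einsum('ird,i,r,d->', W, e_s, w_r, e_o)
    print(rescal_score, tucker_score)        # scores are typically passed through a sigmoid for training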