1907.09189
2963689651
This paper is concerned with evaluating different multiagent learning (MAL) algorithms in problems where individual agents may be heterogeneous, in the sense of utilizing different learning strategies, without the opportunity for prior agreements or information regarding coordination. Such a situation arises in ad hoc team problems, a model of many practical multiagent systems applications. Prior work in multiagent learning has often been focussed on homogeneous groups of agents, meaning that all agents were identical and a priori aware of this fact. Also, those algorithms that are specifically designed for ad hoc team problems are typically evaluated in teams of agents with fixed behaviours, as opposed to agents which are adapting their behaviours. In this work, we empirically evaluate five MAL algorithms, representing major approaches to multiagent learning but originally developed with the homogeneous setting in mind, to understand their behaviour in a set of ad hoc team problems. All teams consist of agents which are continuously adapting their behaviours. The algorithms are evaluated with respect to a comprehensive characterisation of repeated matrix games, using performance criteria that include considerations such as attainment of equilibrium, social welfare and fairness. Our main conclusion is that there is no clear winner. However, the comparative evaluation also highlights the relative strengths of different algorithms with respect to the type of performance criteria, e.g., social welfare vs. attainment of equilibrium.
The problem of incomplete information in multiagent learning, in the form of the ad hoc team problem, was addressed by @cite_20. They propose a procedure to evaluate two ad hoc agents for a given set of potential team members and tasks. We used a modified version of this procedure for our own experiments (see ).
{ "cite_N": [ "@cite_20" ], "mid": [ "1606056663" ], "abstract": [ "As autonomous agents proliferate in the real world, both in software and robotic settings, they will increasingly need to band together for cooperative activities with previously unfamiliar teammates. In such ad hoc team settings, team strategies cannot be developed a priori. Rather, an agent must be prepared to cooperate with many types of teammates: it must collaborate without pre-coordination. This paper challenges the AI community to develop theory and to implement prototypes of ad hoc team agents. It defines the concept of ad hoc team agents, specifies an evaluation paradigm, and provides examples of possible theoretical and empirical approaches to challenge. The goal is to encourage progress towards this ambitious, newly realistic, and increasingly important research goal." ] }
In earlier work, Stone and Kraus @cite_14 define optimal strategies for an ad hoc agent collaborating with a fixed-behaviour teammate in an environment modelled as a k-armed bandit. @cite_32 present an algorithm that leads a fixed, greedy agent towards an optimal joint action in a simple repeated game in which both agents have identical payoff functions.
{ "cite_N": [ "@cite_14", "@cite_32" ], "mid": [ "1548012162", "2141088582" ], "abstract": [ "In typical multiagent teamwork settings, the teammates are either programmed together, or are otherwise provided with standard communication languages and coordination protocols. In contrast, this paper presents an ad hoc team setting in which the teammates are not pre-coordinated, yet still must work together in order to achieve their common goal(s). We represent a specific instance of this scenario, in which a teammate has limited action capabilities and a fixed and known behavior, as a finite-horizon, cooperative k-armed bandit. In addition to motivating and studying this novel ad hoc teamwork scenario, the paper contributes to the k-armed bandits literature by characterizing the conditions under which certain actions are potentially optimal, and by presenting a polynomial dynamic programming algorithm that solves for the optimal action when the arm payoffs come from a discrete distribution.", "Teams of agents may not always be developed in a planned, coordi- nated fashion. Rather, as deployed agents become more common in e-commerce and other settings, there are increasing opportunities for previously unacquainted agents to cooperate in ad hoc team settings. In such scenarios, it is useful for indi- vidual agents to be able to collaborate with a wide variety of possible teammates under the philosophy that not all agents are fully rational. This paper considers an agent that is to interact repeatedly with a teammate that will adapt to this in- teraction in a particular suboptimal, but natural way. We formalize this setting in game-theoretic terms, provide and analyze a fully-implemented algorithm for finding optimal action sequences, prove some theoretical results pertaining to the lengths of these action sequences, and provide empirical results pertaining to the prevalence of our problem of interest in random interaction settings." ] }
These assumptions are relaxed in a recent empirical study by @cite_28. They use an ad hoc agent that tries to identify its teammates by observing their behaviour and comparing it with a database of known behaviours. In addition, the agent learns a new model of the observed behaviour using a tree classifier. It combines the database and the learned model in a Bayesian fashion to anticipate the behaviour of its teammates. Experiments showed that the ad hoc agent performed quite well, and in general better than agents that simply mimic their teammates.
{ "cite_N": [ "@cite_28" ], "mid": [ "2139993574" ], "abstract": [ "The concept of creating autonomous agents capable of exhibiting ad hoc teamwork was recently introduced as a challenge to the AI, and specifically to the multiagent systems community. An agent capable of ad hoc teamwork is one that can effectively cooperate with multiple potential teammates on a set of collaborative tasks. Previous research has investigated theoretically optimal ad hoc teamwork strategies in restrictive settings. This paper presents the first empirical study of ad hoc teamwork in a more open, complex teamwork domain. Specifically, we evaluate a range of effective algorithms for on-line behavior generation on the part of a single ad hoc team agent that must collaborate with a range of possible teammates in the pursuit domain." ] }
@cite_13 proposed an online planning algorithm for ad hoc teams (OPAT). For each encountered state, the algorithm estimates the values of all joint actions using Monte-Carlo Tree Search. These values are used to generate a stage game (i.e., a single round of a repeated game), based on which the algorithm decides which action to take. The decision process considers the past @math plays of the current stage game to approximate the strategies of the other agents. OPAT was shown to be effective in a series of multiagent domains.
{ "cite_N": [ "@cite_13" ], "mid": [ "160961942" ], "abstract": [ "We propose a novel online planning algorithm for ad hoc team settings--challenging situations in which an agent must collaborate with unknown teammates without prior coordination. Our approach is based on constructing and solving a series of stage games, and then using biased adaptive play to choose actions. The utility function in each stage game is estimated via Monte-Carlo tree search using the UCT algorithm. We establish analytically the convergence of the algorithm and show that it performs well in a variety of ad hoc team domains." ] }
1907.09211
2963774932
Network slicing appears as a key enabler for the future 5G networks. Mobile Network Operators create various slices for Service Providers (SP) to accommodate customized services. As network slices are operated on a common network infrastructure owned by some Infrastructure Provider (InP), sharing the resources across a set of network slices is important for future deployment. Moreover, in many situations, slices have to be deployed over some geographical area: coverage as well as minimum per-user rate constraints have then to be taken into account. Usually, the various Service Function Chains (SFCs) belonging to a slice are deployed on a best-effort basis. Nothing ensures that the InP will be able to allocate enough resources to cope with the increasing demands of some SP. This paper takes the InP perspective and proposes a slice resource provisioning approach to cope with multiple slice demands in terms of computing, storage, coverage, and rate constraints. The resource requirements of the various Service Function Chains to be deployed within a slice are aggregated within a graph of Slice Resource Demands (SRD). Coverage and rate constraints are also taken into account in the SRD. Infrastructure nodes and links have then to be provisioned so as to satisfy all types of resource demands. This problem leads to a Mixed Integer Linear Programming formulation. A two-step deployment approach is considered, with several variants, depending on whether the constraints of each slice to be deployed are taken into account sequentially or jointly. Once provisioning has been performed, any slice deployment strategy may be considered on the reduced-size infrastructure graph representing the nodes and links on which resources have been provisioned. Simulation results demonstrate the effectiveness of the proposed approach compared to a more classical direct slice embedding approach.
Early results on assigning infrastructure network resources to virtual network components may be found, e.g., in @cite_22 @cite_17. Owing to its ability to share network resources efficiently in 5G networks, network virtualization has gained renewed attention in the literature @cite_13 @cite_15 @cite_21 @cite_39 through the concept of network slicing.
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_39", "@cite_15", "@cite_13", "@cite_17" ], "mid": [ "2142547489", "2588061367", "2793519446", "2612074600", "2605961225", "" ], "abstract": [ "Recent proposals for network virtualization provide a promising way to overcome the Internet ossification. The key idea of network virtualization is to build a diversified Internet to support a variety of network services and architectures through a shared substrate. A major challenge in network virtualization is the assigning of substrate resources to virtual networks (VN) efficiently and on-demand. This paper focuses on two versions of the VN assignment problem: VN assignment without reconfiguration (VNA-I) and VN assignment with reconfiguration (VNAII). For the VNA-I problem, we develop a basic scheme as a building block for all other advanced algorithms. Subdividing heuristics and adaptive optimization strategies are then presented to further improve the performance. For the VNA-II problem, we develop a selective VN reconfiguration scheme that prioritizes the reconfiguration of the most critical VNs. Extensive simulation experiments demonstrate that the proposed algorithms can achieve good performance under a wide range of network conditions.", "", "Network slicing has been identified as the backbone of the rapidly evolving 5G technology. However, as its consolidation and standardization progress, there are no literatures that comprehensively discuss its key principles, enablers, and research challenges. This paper elaborates network slicing from an end-to-end perspective detailing its historical heritage, principal concepts, enabling technologies and solutions as well as the current standardization efforts. In particular, it overviews the diverse use cases and network requirements of network slicing, the pre-slicing era, considering RAN sharing as well as the end-to-end orchestration and management, encompassing the radio access, transport network and the core network. This paper also provides details of specific slicing solutions for each part of the 5G system. Finally, this paper identifies a number of open research challenges and provides recommendations toward potential solutions.", "5G is envisioned to be a multi-service network supporting a wide range of verticals with a diverse set of performance and service requirements. Slicing a single physical network into multiple isolated logical networks has emerged as a key to realizing this vision. This article is meant to act as a survey, the first to the authors� knowledge, on this topic of prime interest. We begin by reviewing the state of the art in 5G network slicing and present a framework for bringing together and discussing existing work in a holistic manner. Using this framework, we evaluate the maturity of current proposals and identify a number of open research questions.", "We argue for network slicing as an efficient solution that addresses the diverse requirements of 5G mobile networks, thus providing the necessary flexibility and scalability associated with future network implementations. We elaborate on the challenges that emerge when designing 5G networks based on network slicing. We focus on the architectural aspects associated with the coexistence of dedicated as well as shared slices in the network. In particular, we analyze the realization options of a flexible radio access network with focus on network slicing and their impact on the design of 5G mobile networks. 
In addition to the technical study, this article provides an investigation of the revenue potential of network slicing, where the applications that originate from this concept and the profit capabilities from the network operator�s perspective are put forward.", "" ] }
Network slice resource allocation is a complex problem. When a slice instance is seen as a collection of SFCs, slice embedding needs to deploy the SFCs on a shared infrastructure while satisfying various constraints. Most prior work related to SFC and VNF deployment does not account for coverage constraints. For example, in @cite_14 @cite_0, computing, storage, and aggregated wireless resource demands of SFCs are considered. The minimization of the SFC embedding cost is formulated either as an Integer Linear Programming (ILP) problem @cite_0 @cite_2 @cite_3 or as a Mixed Integer Linear Programming (MILP) problem @cite_17 @cite_28, both of which are known to be NP-hard @cite_27. In @cite_24, the VNF placement problem is expressed as an Integer Quadratic Programming (IQP) problem with a set of energy consumption constraints, and is then transformed into a solvable linear form.
{ "cite_N": [ "@cite_14", "@cite_28", "@cite_3", "@cite_0", "@cite_24", "@cite_27", "@cite_2", "@cite_17" ], "mid": [ "2334600287", "2964125711", "2474498303", "2739867585", "2962948535", "2132238781", "1578960134", "" ], "abstract": [ "Network function virtualization (NFV) sits firmly on the networking evolutionary path. By migrating network functions from dedicated devices to general purpose computing platforms, NFV can help reduce the cost to deploy and operate large IT infrastructures. In particular, NFV is expected to play a pivotal role in mobile networks where significant cost reductions can be obtained by dynamically deploying and scaling virtual network functions (VNFs) in the core network. However, in order to achieve its full potential, NFV needs to extend its reach also to the radio access segment. Here, mobile virtual network operators shall be allowed to request radio access VNFs with custom resource allocation solutions. Such a requirement raises several challenges in terms of performance isolation and resource provisioning. In this work, we formalize the wireless VNF placement problem in the radio access network as an integer linear programming problem and we propose a VNF placement heuristic, named wireless network embedding (WiNE), to solve the problem. Moreover, we present a proof-of-concept implementation of an NFV management and orchestration framework for enterprise WLANs. The proposed architecture builds on a programmable network fabric where pure forwarding nodes are mixed with radio and packet processing capable nodes.", "Network function virtualization enables the “softwarization” of network functions, which are implemented on virtual machines hosted on commercial off-the-shelf servers. Both the composition of the virtual network functions into a forwarding graph (FG) at the logical layer and the embedding of the FG on the servers need to consider the less-than-carrier-grade reliability of COTS components. This letter investigates the tradeoff between end-to-end reliability and computational load per server via the joint design of VNF chain composition (CC) and FG embedding (FGE) under the assumption of a bipartite FG that consists of a controller and regular VNFs. Evaluating the reliability criterion within a probabilistic model, analytical insights are first provided for a simplified disconnected FG. Then, a block coordinate descent method based on mixed-integer linear programming is proposed to tackle the joint optimization of CC and FGE. Via simulation results, it is observed that a joint design of CC and FGE leads to substantial performance gains compared with separate optimization approaches.", "Network Functions Visualization is focused on migrating traditional hardware-based network functions to software-based appliances running on standard high volume severs. There are a variety of challenges facing early adopters of Network Function Virtualizations; key among them are resource and service mapping, to support virtual network function orchestration. Service providers need efficient and effective mapping capabilities to optimally deploy network services. This paper describes TeNOR, a micro-service based network function virtualisation orchestrator capable of effectively addressing resource and network service mapping. The functional architecture and data models of TeNOR are described, as well as two proposed approaches to address the resource mapping problem. 
Key evaluation results are discussed and an assessment of the mapping approaches is performed in terms of the service acceptance ratio and scalability of the proposed approaches.", "With Network Function Virtualization (NFV), network functions are deployed as modular software components on the commodity hardware, and can be further chained to provide services, offering much greater flexibility and lower cost of the service deployment for the network operators. At the same time, replacing the network functions implemented in purpose built hardware with software modules poses a great challenge for the operator to maintain the same level of performance. The grade of service promised to the end users is formalized in the Service Level Agreement (SLA) that typically contains the QoS parameters, such as minimum guaranteed data rate, maximum end to end latency, port availability and packet loss. State of the art solutions can guarantee only data rate and latency requirements, while service availability, which is an important service differentiator is mostly neglected. This paper focuses on the placement of virtualized network functions, aiming to support service differentiation between the users, while minimizing the associated service deployment cost for the operator. Two QoS-aware placement strategies are presented, an optimal solution based on the Integer Linear Programming (ILP) problem formulation and an efficient heuristic to obtain near optimal solution. Considering a national core network case study, we show the cost overhead of availability-awareness, as well as the risk of SLA violation when availability constraint is neglected. We also compare the proposed function placement heuristic to the optimal solution in terms of cost efficiency and execution time, and demonstrate that it can provide a good estimation of the deployment cost in much shorter time.", "Service function chaining (SFC) allows the forwarding of traffic flows along a chain of virtual network functions (VNFs). Software defined networking (SDN) solutions can be used to support SFC to reduce both the management complexity and the operational costs. One of the most critical issues for the service and network providers is the reduction of energy consumption, which should be achieved without impacting the Quality of Service. In this paper, we propose a novel resource allocation architecture which enables energy-aware SFC for SDN-based networks, considering also constraints on delay, link utilization, server utilization. To this end, we formulate the problems of VNF placement, allocation of VNFs to flows, and flow routing as integer linear programming (ILP) optimization problems. Since the formulated problems cannot be solved (using ILP solvers) in acceptable timescales for realistic problem dimensions, we design a set of heuristic to find near-optimal solutions in timescales suitable for practical applications. We numerically evaluate the performance of the proposed algorithms over a real-world topology under various network traffic patterns. Our results confirm that the proposed heuristic algorithms provide near-optimal solutions (at most 14 optimality-gap) while their execution time makes them usable for real-life networks.", "Network virtualization is recognized as an enabling technology for the future Internet. It aims to overcome the resistance of the current Internet to architectural change. 
Application of this technology relies on algorithms that can instantiate virtualized networks on a substrate infrastructure, optimizing the layout for service-relevant metrics. This class of algorithms is commonly known as \"Virtual Network Embedding (VNE)\" algorithms. This paper presents a survey of current research in the VNE area. Based upon a novel classification scheme for VNE algorithms a taxonomy of current approaches to the VNE problem is provided and opportunities for further research are discussed.", "Network Function Virtualization (NFV) is a new networking paradigm where network functions are executed on commodity servers located in small cloud nodes distributed across the network, and where software defined mechanisms are used to control the network flows. This paradigm is a major turning point in the evolution of networking, as it introduces high expectations for enhanced economical network services, as well as major technical challenges. In this paper, we address one of the main technical challenges in this domain: the actual placement of the virtual functions within the physical network. This placement has a critical impact on the performance of the network, as well as on its reliability and operation cost. We perform a thorough study of the NFV location problem, show that it introduces a new type of optimization problems, and provide near optimal approximation algorithms guaranteeing a placement with theoretically proven performance. The performance of the solution is evaluated with respect to two measures: the distance cost between the clients and the virtual functions by which they are served, as well as the setup costs of these functions. We provide bi-criteria solutions reaching constant approximation factors with respect to the overall performance, and adhering to the capacity constraints of the networking infrastructure by a constant factor as well. Finally, using extensive simulations, we show that the proposed algorithms perform well in many realistic scenarios.", "" ] }
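As an illustration of this family of ILP formulations, the sketch below places the VNFs of a toy chain at minimum cost under node CPU capacities using PuLP; the demands, capacities, and costs are invented, and link/routing constraints are omitted for brevity.

# Sketch: minimum-cost VNF placement as a small ILP (PuLP/CBC). All data are assumptions.
import pulp

vnfs = {"fw": 2, "nat": 1, "dpi": 3}            # VNF -> CPU demand
nodes = {"n1": 4, "n2": 3, "n3": 6}             # node -> CPU capacity
cost = {("fw", "n1"): 3, ("fw", "n2"): 2, ("fw", "n3"): 5,
        ("nat", "n1"): 1, ("nat", "n2"): 2, ("nat", "n3"): 4,
        ("dpi", "n1"): 6, ("dpi", "n2"): 3, ("dpi", "n3"): 2}

prob = pulp.LpProblem("vnf_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(v, n) for v in vnfs for n in nodes], cat="Binary")

# Objective: total deployment cost.
prob += pulp.lpSum(cost[v, n] * x[v, n] for v in vnfs for n in nodes)
# Each VNF is placed on exactly one node.
for v in vnfs:
    prob += pulp.lpSum(x[v, n] for n in nodes) == 1
# Node CPU capacities are not exceeded.
for n in nodes:
    prob += pulp.lpSum(vnfs[v] * x[v, n] for v in vnfs) <= nodes[n]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
placement = {v: n for v in vnfs for n in nodes if x[v, n].value() > 0.5}
print(pulp.LpStatus[prob.status], placement)

A MILP, as used for the provisioning problem in this paper, additionally allows continuous variables (e.g., allocated rate or bandwidth) alongside such binary placement variables.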
To address the high computational complexity resulting from these ILP or MILP formulations, various heuristics have been proposed, see, e.g., @cite_14 @cite_0 @cite_2. For example, @cite_14 introduced a heuristic based on a shortest-path search to sequentially embed the SFCs. In @cite_0, the candidate infrastructure nodes are sorted to find the best node, in terms of deployment cost, to host a given VNF. Its neighbors are then considered as candidates for hosting the next VNF.
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_2" ], "mid": [ "2739867585", "2334600287", "1578960134" ], "abstract": [ "With Network Function Virtualization (NFV), network functions are deployed as modular software components on the commodity hardware, and can be further chained to provide services, offering much greater flexibility and lower cost of the service deployment for the network operators. At the same time, replacing the network functions implemented in purpose built hardware with software modules poses a great challenge for the operator to maintain the same level of performance. The grade of service promised to the end users is formalized in the Service Level Agreement (SLA) that typically contains the QoS parameters, such as minimum guaranteed data rate, maximum end to end latency, port availability and packet loss. State of the art solutions can guarantee only data rate and latency requirements, while service availability, which is an important service differentiator is mostly neglected. This paper focuses on the placement of virtualized network functions, aiming to support service differentiation between the users, while minimizing the associated service deployment cost for the operator. Two QoS-aware placement strategies are presented, an optimal solution based on the Integer Linear Programming (ILP) problem formulation and an efficient heuristic to obtain near optimal solution. Considering a national core network case study, we show the cost overhead of availability-awareness, as well as the risk of SLA violation when availability constraint is neglected. We also compare the proposed function placement heuristic to the optimal solution in terms of cost efficiency and execution time, and demonstrate that it can provide a good estimation of the deployment cost in much shorter time.", "Network function virtualization (NFV) sits firmly on the networking evolutionary path. By migrating network functions from dedicated devices to general purpose computing platforms, NFV can help reduce the cost to deploy and operate large IT infrastructures. In particular, NFV is expected to play a pivotal role in mobile networks where significant cost reductions can be obtained by dynamically deploying and scaling virtual network functions (VNFs) in the core network. However, in order to achieve its full potential, NFV needs to extend its reach also to the radio access segment. Here, mobile virtual network operators shall be allowed to request radio access VNFs with custom resource allocation solutions. Such a requirement raises several challenges in terms of performance isolation and resource provisioning. In this work, we formalize the wireless VNF placement problem in the radio access network as an integer linear programming problem and we propose a VNF placement heuristic, named wireless network embedding (WiNE), to solve the problem. Moreover, we present a proof-of-concept implementation of an NFV management and orchestration framework for enterprise WLANs. The proposed architecture builds on a programmable network fabric where pure forwarding nodes are mixed with radio and packet processing capable nodes.", "Network Function Virtualization (NFV) is a new networking paradigm where network functions are executed on commodity servers located in small cloud nodes distributed across the network, and where software defined mechanisms are used to control the network flows. 
This paradigm is a major turning point in the evolution of networking, as it introduces high expectations for enhanced economical network services, as well as major technical challenges. In this paper, we address one of the main technical challenges in this domain: the actual placement of the virtual functions within the physical network. This placement has a critical impact on the performance of the network, as well as on its reliability and operation cost. We perform a thorough study of the NFV location problem, show that it introduces a new type of optimization problems, and provide near optimal approximation algorithms guaranteeing a placement with theoretically proven performance. The performance of the solution is evaluated with respect to two measures: the distance cost between the clients and the virtual functions by which they are served, as well as the setup costs of these functions. We provide bi-criteria solutions reaching constant approximation factors with respect to the overall performance, and adhering to the capacity constraints of the networking infrastructure by a constant factor as well. Finally, using extensive simulations, we show that the proposed algorithms perform well in many realistic scenarios." ] }
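The following sketch conveys the flavour of such greedy, shortest-path-based embedding heuristics on a made-up topology; it is not the exact procedure of @cite_14 or @cite_0.

# Sketch: embed a chain VNF by VNF, hosting each next VNF on the feasible node
# reachable at the lowest shortest-path cost from the previous host. Data are assumptions.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 2), ("a", "c", 4), ("c", "d", 1)])
cpu = {"a": 2, "b": 4, "c": 3, "d": 4}           # remaining CPU per node
chain = [("vnf1", 2), ("vnf2", 3), ("vnf3", 2)]  # (VNF, CPU demand)

def embed_chain(graph, capacities, sfc):
    placement, prev, caps = {}, None, dict(capacities)
    for vnf, demand in sfc:
        candidates = [n for n in graph if caps[n] >= demand]
        if not candidates:
            raise RuntimeError(f"no node can host {vnf}")
        if prev is None:
            host = max(candidates, key=lambda n: caps[n])  # start on the roomiest node
        else:
            host = min(candidates,                          # closest feasible node
                       key=lambda n: nx.shortest_path_length(graph, prev, n, weight="weight"))
        placement[vnf] = host
        caps[host] -= demand
        prev = host
    return placement

print(embed_chain(G, cpu, chain))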
In @cite_35, the joint VNF and virtual link placement is formulated as a weighted graph matching problem (WGMP), where the SFC graph and the infrastructure graph are modelled as weighted graphs in which each node and each link carries a weight corresponding to its required resources (for the SFC graph) or its available resources (for the infrastructure graph). An eigendecomposition-based method is then proposed to solve the WGMP, whose aim is to find, with significantly reduced complexity, the optimal matching between the SFC graph and the infrastructure graph. In @cite_10, @cite_23, and @cite_35, a single type of resource is considered at infrastructure nodes (processing) and at links (bandwidth); radio resources are not considered.
{ "cite_N": [ "@cite_35", "@cite_10", "@cite_23" ], "mid": [ "2490169957", "2725479901", "2623646697" ], "abstract": [ "Network function virtualization (NFV) decouples software implementations of network functions from their hosts (or hardware). NFV exposes a new set of entities, the virtualized network functions (VNFs). The VNFs can be chained with other VNFs and physical network functions to realize network services. This flexibility introduced by NFV allows service providers to respond in an agile manner to variable service demands and changing business goals. In this context, the efficient establishment of service chains and their placement becomes essential to reduce capital and operational expenses and gain in service agility. This paper addresses the placement aspect of these service chains by finding the best locations and hosts for the VNFs and to steer traffic across these functions while respecting user requirements and maximizing provider revenue. We propose a novel eigendecomposition-based approach for the placement of virtual and physical network function chains in networks and cloud environments. A heuristic based on a custom greedy algorithm is also presented to compare performance and assess the capability of the eigendecomposition approach. The performance of both algorithms is compared to a multi-stage-based method from the state of the art that also addresses the chaining of network services. Performance evaluation results show that our matrix-based method, eigendecomposition of adjacency matrices, has reduced complexity and convergence times that essentially depend only on the physical graph sizes. Our proposal also outperforms the related work in provider’s revenue and acceptance rate.", "Software-Defined Networking is a new approach to the design and management of networks. It decouples the software-based control plane from the hardware-based data plane while abstracting the underlying network infrastructure and moving the network intelligence to a centralized software-based controller where network services are deployed. The challenge is then to efficiently provision the service chain requests, while finding the best compromise between the bandwidth requirements, the number of locations for hosting Virtual Network Functions (VNFs), and the number of chain occurrences. We propose two ILP (Integer Linear Programming) models for routing service chain requests, one of them with a decomposition modeling. We conduct extensive numerical experiments, and show we can solve exactly the routing of service chain requests in a few minutes for networks with up to 50 nodes, and traffic requests between all pairs of nodes. We investigate the best compromise between the bandwidth requirements and the number of VNF nodes.", "Network function virtualization (NFV) is a promising technology to decouple the network functions from dedicated hardware elements, leading to the significant cost reduction in network service provisioning. As more and more users are trying to access their services wherever and whenever, we expect the NFV-related service function chains (SFCs) to be dynamic and adaptive, i.e., they can be readjusted to adapt to the service requests’ dynamics for better user experience. In this paper, we study how to optimize SFC deployment and readjustment in the dynamic situation. 
Specifically, we try to jointly optimize the deployment of new users’ SFCs and the readjustment of in-service users’ SFCs while considering the trade-off between resource consumption and operational overhead. We first formulate an integer linear programming (ILP) model to solve the problem exactly. Then, to reduce the time complexity, we design a column generation (CG) model for the optimization. Simulation results show that the proposed CG-based algorithm can approximate the performance of the ILP and outperform an existing benchmark in terms of the profit from service provisioning." ] }
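A toy illustration of eigendecomposition-based graph matching is given below, in the Umeyama style on two equal-size weighted graphs; the cited approach addresses the general SFC-to-infrastructure case, so this is only indicative of the spectral idea.

# Sketch: spectral matching of two weighted graphs via eigenvectors of their
# adjacency matrices and a maximum-weight assignment. Matrices are assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

A = np.array([[0, 2, 0],        # "demand" graph (requested resources as weights)
              [2, 0, 1],
              [0, 1, 0]], dtype=float)
B = np.array([[0, 1, 2],        # "infrastructure" graph (available resources)
              [1, 0, 0],
              [2, 0, 0]], dtype=float)

_, Ua = np.linalg.eigh(A)       # eigenvectors of the symmetric adjacency matrices
_, Ub = np.linalg.eigh(B)

S = np.abs(Ua) @ np.abs(Ub).T   # node-to-node similarity scores
rows, cols = linear_sum_assignment(-S)   # Hungarian algorithm, maximizing similarity
print({int(i): int(j) for i, j in zip(rows, cols)})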
The design of efficient allocation mechanisms for virtualized radio resources has recently been addressed in @cite_4. This paper aims at minimizing the leasing cost of BSs so as to meet SP demands, while providing, with a given probability, a minimum data rate to any user located in their coverage area. The rate constraint is expressed as a linear function of the BS load (the number of users served by the BS), of the distance from the user to the nearest BS, and of the downlink interference. This linear approximation, however, relies on several assumptions. For instance, a user of an SP is assumed to be served by the nearest BS among the set of BSs allocated to that SP, which somewhat limits the potential for optimally sharing the radio resources.
{ "cite_N": [ "@cite_4" ], "mid": [ "2963806969" ], "abstract": [ "Wireless network virtualization is emerging as an important technology for next-generation (5G) wireless networks. A key advantage of introducing virtualization in cellular networks is that service providers can robustly share virtualized network resources (e.g., infrastructure and spectrum) to extend coverage, increase capacity, and reduce costs. However, the inherent features of wireless networks, i.e., the uncertainty in user equipment (UE) locations and channel conditions impose significant challenges on virtualization and sharing of the network resources. In this context, we propose a stochastic optimization-based virtualization framework that enables robust sharing of network resources. Our proposed scheme aims at probabilistically guaranteeing UEs' Quality of Service (QoS) demand satisfaction, while minimizing the cost for service providers, with reasonable computational complexity and affordable network overhead." ] }
In @cite_36, a heterogeneous spatial user density is considered, and the joint BS selection and adaptive slicing are formulated as a two-stage stochastic optimization problem. The first stage defines the set of BSs to activate; the second stage allocates the wireless resources of those BSs to each point of the region to be covered by the SP. Several random realizations of user locations are generated to obtain a reduced-complexity deterministic optimization problem, which is then solved with a genetic algorithm.
{ "cite_N": [ "@cite_36" ], "mid": [ "2937641582" ], "abstract": [ "Wireless network virtualization is a promising avenue of research for next-generation 5G cellular networks. Virtualization focuses on the concept of active resource sharing and the building of a network designed for specific demands, decreasing operational expenditures, and improving demand satisfaction of cellular networks. This work investigates the problem of selecting base stations (BSs) to construct a virtual network that meets the the specific demands of a service provider, and adaptive slicing of the resources between the service provider’s demand points. A two-stage stochastic optimization framework is introduced to model the problem of joint BS selection and adaptive slicing. Two methods are presented for determining an approximation for the two-stage stochastic optimization model. The first method uses a sampling approach applied to the deterministic equivalent program of the stochastic model. The second method uses a genetic algorithm for BS selection and adaptive slicing via a single-stage linear optimization problem. For testing, a number of scenarios were generated using a log-normal model designed to emulate demand from real world cellular networks. Simulations indicate that the first approach can provide a reasonably good solution, but is constrained as the time expense grows exponentially with the number of parameters. The second approach provides a vast improvement in run time with the introduction of some error." ] }
In @cite_34, a network slicing framework for heterogeneous cloud radio access networks (H-CRANs) is introduced. The sharing of radio resources in terms of data rate is considered, with constraints related to the fronthaul capacity, the transmission power budget of the RRHs, and the tolerable interference threshold of an RRH on a sub-channel. Slicing is formulated as a weighted throughput maximization problem, which aims at maximizing the total rate obtained by users connected to given RRHs on given sub-channels. Nevertheless, the proposed framework does not consider the computing and memory resources associated with the processing within the BBUs; such resources are assumed to be properly scaled so as to support the required service rate. Moreover, the framework addresses only downlink data services.
{ "cite_N": [ "@cite_34" ], "mid": [ "2601022114" ], "abstract": [ "Research on network slicing for multi-tenant heterogeneous cloud radio access networks (H-CRANs) is still in its infancy. In this paper, we redefine network slicing and propose a new network slicing framework for multi-tenant H-CRANs. In particular, the network slicing process is formulated as a weighted throughput maximization problem that involves sharing of computational resources, fronthaul capacity, physical remote radio heads and radio resources. The problem is then jointly solved using a sub-optimal greedy approach and a dual decomposition method. Simulation results demonstrate that the framework can flexibly scale the throughput performance of multiple tenants according to the user priority weights associated with the tenants." ] }
The wireless network slicing problem is also addressed in @cite_26 , where a game-theory-based distributed algorithm is proposed to solve it. The proposed algorithm accounts for the limited availability of wireless resources and considers different aspects such as congestion, deployment costs, and the RRH-user distance. This work considers the coverage area of each RRH, but ignores the possible coverage constraints required by the slices.
{ "cite_N": [ "@cite_26" ], "mid": [ "2964112281" ], "abstract": [ "Radio access network (RAN) slicing is an effective methodology to dynamically allocate networking resources in 5G networks. One of the main challenges of RAN slicing is that it is provably an NP-Hard problem. For this reason, we design near-optimal low-complexity distributed RAN slicing algorithms. First, we model the slicing problem as a congestion game, and demonstrate that such game admits a unique Nash equilibrium (NE). Then, we evaluate the Price of Anarchy (PoA) of the NE, i.e., the efficiency of the NE as compared with the social optimum, and demonstrate that the PoA is upper-bounded by 3 2. Next, we propose two fully-distributed algorithms that provably converge to the unique NE without revealing privacy-sensitive parameters from the slice tenants. Moreover, we introduce an adaptive pricing mechanism of the wireless resources to improve the network owner’s profit. We evaluate the performance of our algorithms through simulations and an experimental testbed deployed on the Amazon EC2 cloud, both based on a real-world dataset of base stations from the OpenCellID project. Results conclude that our algorithms converge to the NE rapidly and achieve near-optimal performance, while our pricing mechanism effectively improves the profit of the network owner." ] }
Compared to previous works, we consider slice resource demands in terms of coverage and traffic requirements in the radio access part of the network as well as network, storage, and computing requirements from a cloud infrastructure of interconnected data centers for the rest of the network. This work borrows the slice resource provisioning approach introduced in @cite_20 , and adapts it to the joint radio and network infrastructure resource provisioning. Constraints related to the infrastructure network considered in @cite_14 @cite_0 @cite_20 @cite_32 are combined with coverage and radio resource constraints introduced in @cite_4 @cite_36 @cite_26 @cite_34 .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_26", "@cite_36", "@cite_32", "@cite_0", "@cite_34", "@cite_20" ], "mid": [ "2334600287", "2963806969", "2964112281", "2937641582", "2913775178", "2739867585", "2601022114", "2916661620" ], "abstract": [ "Network function virtualization (NFV) sits firmly on the networking evolutionary path. By migrating network functions from dedicated devices to general purpose computing platforms, NFV can help reduce the cost to deploy and operate large IT infrastructures. In particular, NFV is expected to play a pivotal role in mobile networks where significant cost reductions can be obtained by dynamically deploying and scaling virtual network functions (VNFs) in the core network. However, in order to achieve its full potential, NFV needs to extend its reach also to the radio access segment. Here, mobile virtual network operators shall be allowed to request radio access VNFs with custom resource allocation solutions. Such a requirement raises several challenges in terms of performance isolation and resource provisioning. In this work, we formalize the wireless VNF placement problem in the radio access network as an integer linear programming problem and we propose a VNF placement heuristic, named wireless network embedding (WiNE), to solve the problem. Moreover, we present a proof-of-concept implementation of an NFV management and orchestration framework for enterprise WLANs. The proposed architecture builds on a programmable network fabric where pure forwarding nodes are mixed with radio and packet processing capable nodes.", "Wireless network virtualization is emerging as an important technology for next-generation (5G) wireless networks. A key advantage of introducing virtualization in cellular networks is that service providers can robustly share virtualized network resources (e.g., infrastructure and spectrum) to extend coverage, increase capacity, and reduce costs. However, the inherent features of wireless networks, i.e., the uncertainty in user equipment (UE) locations and channel conditions impose significant challenges on virtualization and sharing of the network resources. In this context, we propose a stochastic optimization-based virtualization framework that enables robust sharing of network resources. Our proposed scheme aims at probabilistically guaranteeing UEs' Quality of Service (QoS) demand satisfaction, while minimizing the cost for service providers, with reasonable computational complexity and affordable network overhead.", "Radio access network (RAN) slicing is an effective methodology to dynamically allocate networking resources in 5G networks. One of the main challenges of RAN slicing is that it is provably an NP-Hard problem. For this reason, we design near-optimal low-complexity distributed RAN slicing algorithms. First, we model the slicing problem as a congestion game, and demonstrate that such game admits a unique Nash equilibrium (NE). Then, we evaluate the Price of Anarchy (PoA) of the NE, i.e., the efficiency of the NE as compared with the social optimum, and demonstrate that the PoA is upper-bounded by 3 2. Next, we propose two fully-distributed algorithms that provably converge to the unique NE without revealing privacy-sensitive parameters from the slice tenants. Moreover, we introduce an adaptive pricing mechanism of the wireless resources to improve the network owner’s profit. 
We evaluate the performance of our algorithms through simulations and an experimental testbed deployed on the Amazon EC2 cloud, both based on a real-world dataset of base stations from the OpenCellID project. Results conclude that our algorithms converge to the NE rapidly and achieve near-optimal performance, while our pricing mechanism effectively improves the profit of the network owner.", "Wireless network virtualization is a promising avenue of research for next-generation 5G cellular networks. Virtualization focuses on the concept of active resource sharing and the building of a network designed for specific demands, decreasing operational expenditures, and improving demand satisfaction of cellular networks. This work investigates the problem of selecting base stations (BSs) to construct a virtual network that meets the the specific demands of a service provider, and adaptive slicing of the resources between the service provider’s demand points. A two-stage stochastic optimization framework is introduced to model the problem of joint BS selection and adaptive slicing. Two methods are presented for determining an approximation for the two-stage stochastic optimization model. The first method uses a sampling approach applied to the deterministic equivalent program of the stochastic model. The second method uses a genetic algorithm for BS selection and adaptive slicing via a single-stage linear optimization problem. For testing, a number of scenarios were generated using a log-normal model designed to emulate demand from real world cellular networks. Simulations indicate that the first approach can provide a reasonably good solution, but is constrained as the time expense grows exponentially with the number of parameters. The second approach provides a vast improvement in run time with the introduction of some error.", "The concepts of network function virtualization and end-to-end network slicing are the two promising technologies empowering 5G networks for efficient and dynamic network service deployment and management. In this paper, we propose a resource allocation model for 5G virtualized networks in a heterogeneous cloud infrastructure. In our model, each network slice has a resource demand vector for each of its virtual network functions. We first consider a system of collaborative slices and formulate the resource allocation as a convex optimization problem, maximizing the overall system utility function. We further introduce a distributed solution for the resource allocation problem by forming a resource auction between the slices and the data centers. By using an example, we show how the selfish behavior of non-collaborative slices affects the fairness performance of the system. For a system with non-collaborative slices, we formulate a new resource allocation problem based on the notion of dominant resource fairness and propose a fully distributed scheme for solving the problem. Simulation results are provided to show the validity of the results, evaluate the convergence of the distributed solutions, show protection of collaborative slices against non-collaborative slices and compare the performance of the optimal schemes with the heuristic ones.", "With Network Function Virtualization (NFV), network functions are deployed as modular software components on the commodity hardware, and can be further chained to provide services, offering much greater flexibility and lower cost of the service deployment for the network operators. 
At the same time, replacing the network functions implemented in purpose built hardware with software modules poses a great challenge for the operator to maintain the same level of performance. The grade of service promised to the end users is formalized in the Service Level Agreement (SLA) that typically contains the QoS parameters, such as minimum guaranteed data rate, maximum end to end latency, port availability and packet loss. State of the art solutions can guarantee only data rate and latency requirements, while service availability, which is an important service differentiator is mostly neglected. This paper focuses on the placement of virtualized network functions, aiming to support service differentiation between the users, while minimizing the associated service deployment cost for the operator. Two QoS-aware placement strategies are presented, an optimal solution based on the Integer Linear Programming (ILP) problem formulation and an efficient heuristic to obtain near optimal solution. Considering a national core network case study, we show the cost overhead of availability-awareness, as well as the risk of SLA violation when availability constraint is neglected. We also compare the proposed function placement heuristic to the optimal solution in terms of cost efficiency and execution time, and demonstrate that it can provide a good estimation of the deployment cost in much shorter time.", "Research on network slicing for multi-tenant heterogeneous cloud radio access networks (H-CRANs) is still in its infancy. In this paper, we redefine network slicing and propose a new network slicing framework for multi-tenant H-CRANs. In particular, the network slicing process is formulated as a weighted throughput maximization problem that involves sharing of computational resources, fronthaul capacity, physical remote radio heads and radio resources. The problem is then jointly solved using a sub-optimal greedy approach and a dual decomposition method. Simulation results demonstrate that the framework can flexibly scale the throughput performance of multiple tenants according to the user priority weights associated with the tenants.", "Network slicing has recently appeared as a key enabler for the future 5G networks where Mobile Network Operators (MNO) create various slices for Service Providers (SP) to accommodate customized services. As network slices are operated on a common network infrastructure owned by some Infrastructure Provider (InP), sharing the resources across a set of network slices is highly important for future deployment. In this paper, taking the InP perspective, we propose an optimization framework for slice resource provisioning addressing multiple slice demands in terms of computing, storage, and wireless capacity. We assume that the aggregated resource requirements of the various Service Function Chains to be deployed within a slice may be represented by a graph of slice resource demands. Infrastructure nodes and links have then to be provisioned so as to satisfy these resource demands. A Mixed Integer Linear Programming formulation is considered to address this problem. A realistic use case of slices deployment over a mobile access network is then considered. Simulation results demonstrate the effectiveness of the proposed framework for network slice provisioning." ] }
In this work, we assume that the resource requirements for the various SFCs that will have to be deployed within a slice may be aggregated and represented by a Slice Resource Demand (SRD) graph that mimics the graph of SFCs. These SRDs are evaluated by the MNO to satisfy the QoS requirements imposed by the tenant. The InP has then to provision enough infrastructure resources to meet the SLA. Because the nodes and links of the graph of SRDs represent aggregate requirements, several infrastructure nodes may have to be gathered and parallel physical links may have to be considered to satisfy the various SRDs. This is the main difference with respect to the traditional service chain embedding approach considered for example in @cite_14 @cite_0 , where each VNF is deployed on a single node. In @cite_14 @cite_0 , virtual nodes and links are mapped onto the infrastructure network to allocate resources to VNFs and virtual links. In this paper, we provision a sufficient number of infrastructure nodes and links, so that the aggregated provisioned resources meet the slice demands represented by the graph of SRDs.
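To make the difference with single-node embedding concrete, the toy sketch below provisions several infrastructure nodes whose aggregated capacities jointly cover one aggregate SRD demand. It is only an illustration of the idea, not the paper's actual MILP: the node names, capacities, costs, and the use of PuLP are all made-up assumptions.

```python
# Toy sketch (not the paper's MILP): pick a minimum-cost set of infrastructure
# nodes whose *aggregated* CPU and storage cover one aggregate SRD node.
import pulp

infra = {            # node: (cpu, storage, provisioning cost) -- made-up values
    "n1": (8, 100, 3.0),
    "n2": (4, 200, 2.0),
    "n3": (16, 50, 5.0),
}
srd_cpu, srd_sto = 18, 220   # aggregate demand of a single SRD node

prob = pulp.LpProblem("srd_provisioning", pulp.LpMinimize)
x = {n: pulp.LpVariable(f"x_{n}", cat="Binary") for n in infra}    # provision node n?

prob += pulp.lpSum(infra[n][2] * x[n] for n in infra)              # total provisioning cost
prob += pulp.lpSum(infra[n][0] * x[n] for n in infra) >= srd_cpu   # aggregated CPU coverage
prob += pulp.lpSum(infra[n][1] * x[n] for n in infra) >= srd_sto   # aggregated storage coverage

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([n for n in infra if x[n].value() == 1])
```

With these made-up numbers no single node can satisfy the demand, so the solver gathers two nodes whose summed capacities do, which is exactly the aggregation behaviour described above.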
{ "cite_N": [ "@cite_0", "@cite_14" ], "mid": [ "2739867585", "2334600287" ], "abstract": [ "With Network Function Virtualization (NFV), network functions are deployed as modular software components on the commodity hardware, and can be further chained to provide services, offering much greater flexibility and lower cost of the service deployment for the network operators. At the same time, replacing the network functions implemented in purpose built hardware with software modules poses a great challenge for the operator to maintain the same level of performance. The grade of service promised to the end users is formalized in the Service Level Agreement (SLA) that typically contains the QoS parameters, such as minimum guaranteed data rate, maximum end to end latency, port availability and packet loss. State of the art solutions can guarantee only data rate and latency requirements, while service availability, which is an important service differentiator is mostly neglected. This paper focuses on the placement of virtualized network functions, aiming to support service differentiation between the users, while minimizing the associated service deployment cost for the operator. Two QoS-aware placement strategies are presented, an optimal solution based on the Integer Linear Programming (ILP) problem formulation and an efficient heuristic to obtain near optimal solution. Considering a national core network case study, we show the cost overhead of availability-awareness, as well as the risk of SLA violation when availability constraint is neglected. We also compare the proposed function placement heuristic to the optimal solution in terms of cost efficiency and execution time, and demonstrate that it can provide a good estimation of the deployment cost in much shorter time.", "Network function virtualization (NFV) sits firmly on the networking evolutionary path. By migrating network functions from dedicated devices to general purpose computing platforms, NFV can help reduce the cost to deploy and operate large IT infrastructures. In particular, NFV is expected to play a pivotal role in mobile networks where significant cost reductions can be obtained by dynamically deploying and scaling virtual network functions (VNFs) in the core network. However, in order to achieve its full potential, NFV needs to extend its reach also to the radio access segment. Here, mobile virtual network operators shall be allowed to request radio access VNFs with custom resource allocation solutions. Such a requirement raises several challenges in terms of performance isolation and resource provisioning. In this work, we formalize the wireless VNF placement problem in the radio access network as an integer linear programming problem and we propose a VNF placement heuristic, named wireless network embedding (WiNE), to solve the problem. Moreover, we present a proof-of-concept implementation of an NFV management and orchestration framework for enterprise WLANs. The proposed architecture builds on a programmable network fabric where pure forwarding nodes are mixed with radio and packet processing capable nodes." ] }
When provisioning slices, we consider coverage constraints: each slice is assumed to cover a specific region of the considered geographical area, as specified in the SLA with the tenant. We consider the special case of a Cloud-RAN architecture in which RRHs are the nodes equipped with radio resources. In our model, radio resource blocks are allocated and the channel between the RRH nodes and users is taken into account. Compared with @cite_4 , the selected BS is not necessarily the nearest one. Moreover, both downlink and uplink traffic are considered for the service rate model.
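For illustration only, a per-user rate constraint of the kind evoked here (a minimum rate guaranteed through resource-block allocation) is often written as below; the notation and the Shannon-type rate model are assumptions made for exposition, not necessarily the exact model of the paper.

```latex
% n_{u,b}: resource blocks of RRH b allocated to user u,  W: bandwidth of one block,
% \gamma_{u,b}: SINR between user u and RRH b,  N_b: resource-block budget of RRH b.
R_u \;=\; \sum_{b} n_{u,b}\, W \log_2\!\left(1+\gamma_{u,b}\right) \;\ge\; R_u^{\min},
\qquad
\sum_{u} n_{u,b} \;\le\; N_b .
```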
{ "cite_N": [ "@cite_4" ], "mid": [ "2963806969" ], "abstract": [ "Wireless network virtualization is emerging as an important technology for next-generation (5G) wireless networks. A key advantage of introducing virtualization in cellular networks is that service providers can robustly share virtualized network resources (e.g., infrastructure and spectrum) to extend coverage, increase capacity, and reduce costs. However, the inherent features of wireless networks, i.e., the uncertainty in user equipment (UE) locations and channel conditions impose significant challenges on virtualization and sharing of the network resources. In this context, we propose a stochastic optimization-based virtualization framework that enables robust sharing of network resources. Our proposed scheme aims at probabilistically guaranteeing UEs' Quality of Service (QoS) demand satisfaction, while minimizing the cost for service providers, with reasonable computational complexity and affordable network overhead." ] }
1907.09128
2963975574
In this work, we present a modified fuzzy decision forest for real-time 3D object pose estimation based on a typical template representation. We employ an extra preemptive background rejector node in the decision forest framework to terminate the examination of background locations as early as possible, resulting in a significant improvement in efficiency. Our approach is also scalable to large datasets, since the tree structure naturally provides logarithmic time complexity in the number of objects. Finally, we further reduce the cost of the validation stage with a fast breadth-first scheme. The results show that our approach outperforms the state of the art in efficiency while maintaining comparable accuracy.
To effectively measure the similarity between object views, a compact and discriminative description vector is required. @cite_12 present a novel image representation called LineMOD, a rigid template that uses colour gradients and surface normals as feature descriptors. The templates are synthetically rendered from 3D object mesh models under different scales and view angles. Similar to other traditional template matching approaches, each template is matched against all possible locations across the query image to produce a similarity score map. Despite this exhaustive search, it achieves real-time speed for single-object pose estimation.
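As a rough illustration of the "similarity score map" idea (not LineMOD itself, whose descriptors and scoring are considerably more elaborate), the sketch below slides a small feature template over a dense per-pixel feature map and records a normalised-correlation score at every location; the feature dimensions and the random inputs are arbitrary.

```python
# Toy similarity-map sketch: dense feature map (H, W, C) vs. template (h, w, C).
import numpy as np

def similarity_map(feat, tmpl):
    H, W, _ = feat.shape
    h, w, _ = tmpl.shape
    tn = tmpl / (np.linalg.norm(tmpl) + 1e-9)            # normalised template
    scores = np.zeros((H - h + 1, W - w + 1))
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = feat[y:y + h, x:x + w]
            scores[y, x] = (patch * tn).sum() / (np.linalg.norm(patch) + 1e-9)
    return scores                                         # peaks = likely locations of this view

score = similarity_map(np.random.rand(40, 60, 8), np.random.rand(10, 10, 8))
print(score.shape, score.max())
```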
{ "cite_N": [ "@cite_12" ], "mid": [ "1526868886" ], "abstract": [ "We propose a framework for automatic modeling, detection, and tracking of 3D objects with a Kinect. The detection part is mainly based on the recent template-based LINEMOD approach [1] for object detection. We show how to build the templates automatically from 3D models, and how to estimate the 6 degrees-of-freedom pose accurately and in real-time. The pose estimation and the color information allow us to check the detection hypotheses and improves the correct detection rate by 13 with respect to the original LINEMOD. These many improvements make our framework suitable for object manipulation in Robotics applications. Moreover we propose a new dataset made of 15 registered, 1100+ frame video sequences of 15 various objects for the evaluation of future competing methods." ] }
The hash table is a well-known data structure that allows a symbol lookup in @math complexity. In other words, the searching time is constant regardless of the database size. However, a hash table can only find exact matches, whereas in the approximate nearest neighbour (ANN) search problem we seek approximate matches. The most straightforward solution is to hash the whole quantised feature space into a single hash table so that every possible query point maps directly to its nearest data points. Unfortunately, this naive approach is not feasible for high-dimensional data. A recent work @cite_4 employs hashing techniques and explores different hashing-key learning strategies, achieving sublinear complexity in the number of templates and outperforming the state of the art in terms of runtime.
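The fragility of a single exact-match table can be seen in a few lines; this is only a didactic sketch with made-up descriptors, not the hashing-key learning strategy of the cited work.

```python
# Hashing quantised descriptors into a Python dict retrieves only *exact* matches,
# which is why one table over the full feature space breaks down for ANN search.
import numpy as np

def quantise(desc, levels=4):
    # map each dimension of a descriptor in [0, 1) to one of `levels` bins
    return tuple((np.asarray(desc) * levels).astype(int).tolist())

table = {}
templates = {"view_42": [0.10, 0.83, 0.55], "view_77": [0.12, 0.80, 0.52]}
for name, desc in templates.items():
    table.setdefault(quantise(desc), []).append(name)

query = [0.11, 0.74, 0.54]                 # slightly below one bin boundary
print(table.get(quantise(query), []))      # -> []  (misses every template)
# With D dimensions and L levels there are L**D possible keys, so a slightly
# perturbed query easily falls into an empty bucket -- hence the need for the
# learned hashing keys of the cited work.
```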
{ "cite_N": [ "@cite_4" ], "mid": [ "2963203908" ], "abstract": [ "We present a scalable method for detecting objects and estimating their 3D poses in RGB-D data. To this end, we rely on an efficient representation of object views and employ hashing techniques to match these views against the input frame in a scalable way. While a similar approach already exists for 2D detection, we show how to extend it to estimate the 3D pose of the detected objects. In particular, we explore different hashing strategies and identify the one which is more suitable to our problem. We show empirically that the complexity of our method is sublinear with the number of objects and we enable detection and pose estimation of many 3D objects with high accuracy while outperforming the state-of-the-art in terms of runtime." ] }
The methods falling into this category focus on better generalisation to slight variations in translation, local shape and viewpoint. The explicit background/foreground separation is learnt parametrically to deal with heavy background clutter. The results show that these approaches produce fewer false positives than nearest neighbour approaches. However, their efficacy depends on the quality of the negative training samples, and this benefit may not transfer across different domains. Tejani et al. @cite_7 propose to incorporate a one-class learning scheme into the Hough forest framework for 6-DoF problems. Rios-Cabrera and Tuytelaars @cite_13 extend LineMOD by learning the templates in a discriminative fashion and handle 10-30 3D objects at frame rates above 10fps using a single CPU core.
{ "cite_N": [ "@cite_13", "@cite_7" ], "mid": [ "2050966058", "1022526533" ], "abstract": [ "In this paper we propose a new method for detecting multiple specific 3D objects in real time. We start from the template-based approach based on the LINE2D LINEMOD representation introduced recently by , yet extend it in two ways. First, we propose to learn the templates in a discriminative fashion. We show that this can be done online during the collection of the example images, in just a few milliseconds, and has a big impact on the accuracy of the detector. Second, we propose a scheme based on cascades that speeds up detection. Since detection of an object is fast, new objects can be added with very low cost, making our approach scale well. In our experiments, we easily handle 10-30 3D objects at frame rates above 10fps using a single CPU core. We outperform the state-of-the-art both in terms of speed as well as in terms of accuracy, as validated on 3 different datasets. This holds both when using monocular color images (with LINE2D) and when using RGBD images (with LINEMOD). Moreover, we propose a challenging new dataset made of 12 objects, for future competing methods on monocular color images.", "In this paper we propose a novel framework, Latent-Class Hough Forests, for 3D object detection and pose estimation in heavily cluttered and occluded scenes. Firstly, we adapt the state-of-the-art template matching feature, LINEMOD [14], into a scale-invariant patch descriptor and integrate it into a regression forest using a novel template-based split function. In training, rather than explicitly collecting representative negative samples, our method is trained on positive samples only and we treat the class distributions at the leaf nodes as latent variables. During the inference process we iteratively update these distributions, providing accurate estimation of background clutter and foreground occlusions and thus a better detection rate. Furthermore, as a by-product, the latent class distributions can provide accurate occlusion aware segmentation masks, even in the multi-instance scenario. In addition to an existing public dataset, which contains only single-instance sequences with large amounts of clutter, we have collected a new, more challenging, dataset for multiple-instance detection containing heavy 2D and 3D clutter as well as foreground occlusions. We evaluate the Latent-Class Hough Forest on both of these datasets where we outperform state-of-the art methods." ] }
Registration-based methods attempt to fit a pose hypothesis to the observation by iteratively updating the hypothesis to minimise the discrepancy between the query sample and a sample rendered from the current pose hypothesis. A popular choice is the Iterative Closest Point (ICP) algorithm @cite_1 .
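A bare-bones, point-to-point variant of this idea is sketched below; it is illustrative only (the cited work actually minimises the registration error directly with Levenberg-Marquardt rather than classical ICP), and simply alternates nearest-neighbour correspondences with a closed-form rigid update.

```python
# Minimal point-to-point ICP sketch: correspondence step + Kabsch rigid update.
import numpy as np

def best_rigid_transform(src, dst):
    # least-squares rotation R and translation t mapping src onto dst
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=20):
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every point of the current estimate
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t           # update the hypothesis
    return cur
```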
{ "cite_N": [ "@cite_1" ], "mid": [ "2004312117" ], "abstract": [ "Abstract This paper introduces a new method of registering point sets. The registration error is directly minimized using general-purpose non-linear optimization (the Levenberg–Marquardt algorithm). The surprising conclusion of the paper is that this technique is comparable in speed to the special-purpose Iterated Closest Point algorithm, which is most commonly used for this task. Because the routine directly minimizes an energy function, it is easy to extend it to incorporate robust estimation via a Huber kernel, yielding a basin of convergence that is many times wider than existing techniques. Finally, we introduce a data structure for the minimization based on the chamfer distance transform, which yields an algorithm that is both faster and more robust than previously described methods." ] }
One feasible solution is to cluster the templates into a few sets, as proposed in a few recent works. Hashmod @cite_4 clusters the templates with a randomised forest and employs hashing techniques; Discriminatively Trained Templates (DTT) @cite_13 clusters the templates with a bottom-up clustering method and constructs strong classifiers using AdaBoost. The underlying reason is that the clustered subsets share common 'relevant' feature dimensions; that is to say, the templates in a subset can be well classified using far fewer coordinates.
{ "cite_N": [ "@cite_13", "@cite_4" ], "mid": [ "2050966058", "2963203908" ], "abstract": [ "In this paper we propose a new method for detecting multiple specific 3D objects in real time. We start from the template-based approach based on the LINE2D LINEMOD representation introduced recently by , yet extend it in two ways. First, we propose to learn the templates in a discriminative fashion. We show that this can be done online during the collection of the example images, in just a few milliseconds, and has a big impact on the accuracy of the detector. Second, we propose a scheme based on cascades that speeds up detection. Since detection of an object is fast, new objects can be added with very low cost, making our approach scale well. In our experiments, we easily handle 10-30 3D objects at frame rates above 10fps using a single CPU core. We outperform the state-of-the-art both in terms of speed as well as in terms of accuracy, as validated on 3 different datasets. This holds both when using monocular color images (with LINE2D) and when using RGBD images (with LINEMOD). Moreover, we propose a challenging new dataset made of 12 objects, for future competing methods on monocular color images.", "We present a scalable method for detecting objects and estimating their 3D poses in RGB-D data. To this end, we rely on an efficient representation of object views and employ hashing techniques to match these views against the input frame in a scalable way. While a similar approach already exists for 2D detection, we show how to extend it to estimate the 3D pose of the detected objects. In particular, we explore different hashing strategies and identify the one which is more suitable to our problem. We show empirically that the complexity of our method is sublinear with the number of objects and we enable detection and pose estimation of many 3D objects with high accuracy while outperforming the state-of-the-art in terms of runtime." ] }
1907.09081
2963906598
Detecting objects in a two-dimensional setting is often insufficient in the context of real-life applications where the surrounding environment needs to be accurately recognized and oriented in three dimensions (3D), such as in the case of autonomous driving vehicles. Therefore, accurately and efficiently detecting objects in the three-dimensional setting is becoming increasingly relevant to a wide range of industrial applications, and thus is progressively attracting the attention of researchers. Building systems to detect objects in 3D is a challenging task though, because it relies on the multi-modal fusion of data derived from different sources. In this paper, we study the effects of anchoring using the current state-of-the-art 3D object detector and propose a Class-specific Anchoring Proposal (CAP) strategy based on clustering anchors by object sizes and aspect ratios. The proposed anchoring strategy significantly increased detection accuracy by 7.19, 8.13 and 8.8 on the Easy, Moderate and Hard settings of the pedestrian class, by 2.19, 2.17 and 1.27 on the Easy, Moderate and Hard settings of the car class, and by 12.1 on the Easy setting of the cyclist class. We also show that the clustering in the anchoring process significantly enhances the ability of the region proposal network to propose regions of interest. Finally, we propose the best cluster numbers for each object class in the KITTI dataset, which significantly improve the performance of the detection model.
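A common way to realise this kind of class-specific anchoring (stated here as an assumption about the general recipe, not the authors' exact procedure) is to run k-means over the ground-truth box dimensions of each class and use the resulting centroids as that class's anchor sizes.

```python
# Illustrative anchor clustering: k-means over (length, width) of one class's boxes.
import numpy as np

def kmeans_anchors(sizes, k=3, iters=50, seed=0):
    # sizes: (N, 2) array of ground-truth box dimensions for a single class
    rng = np.random.default_rng(seed)
    centres = sizes[rng.choice(len(sizes), k, replace=False)]
    for _ in range(iters):
        d = ((sizes[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centres[j] = sizes[assign == j].mean(0)
    return centres            # use these centroids as per-class anchor sizes

boxes = np.abs(np.random.randn(200, 2)) + [[3.9, 1.6]]   # fake car-sized boxes
print(kmeans_anchors(boxes, k=3))
```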
The F-PointNet @cite_21 achieves notable performance for 3D object detection and bird's-eye-view detection on cars, pedestrians and cyclists on the KITTI benchmark suite. This method uses a 2D Faster RCNN object detector to find 2D boxes containing the object in the RGB camera image. Subsequently, the detected boxes are extruded to identify the point cloud falling into the frustum corresponding to the boxes. The discovered point cloud is classified in a binary fashion to separate the points belonging to the object of interest, and 3D regression is conducted on the separated points. The main drawback of this method comes from the fact that the accuracy of the model is highly dependent on the accuracy of the 2D object detector on the RGB image. For instance, if the 2D detector misses the object in the RGB image, the second network is not able to localize the missed object in 3D space. Furthermore, the sequential nature of this model (a 2D detector network followed by a 3D detector) increases the inference time, which is a noteworthy issue in the context of numerous applications, including autonomous vehicles.
{ "cite_N": [ "@cite_21" ], "mid": [ "2769205412" ], "abstract": [ "In this work, we study 3D object detection from RGB-D data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. Benefited from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability." ] }
1907.09150
2963415882
We consider a generic empirical composition optimization problem, where there are empirical averages present both outside and inside nonlinear loss functions. Such a problem is of interest in various machine learning applications, and cannot be directly solved by standard methods such as stochastic gradient descent (SGD). We take a novel approach to solving this problem by reformulating the original minimization objective into an equivalent min-max objective, which brings out all the empirical averages that are originally inside the nonlinear loss functions. We exploit the rich structures of the reformulated problem and develop a stochastic primal-dual algorithm, SVRPDA-I, to solve the problem efficiently. We carry out extensive theoretical analysis of the proposed algorithm, obtaining the convergence rate, the total computation complexity and the storage complexity. In particular, the algorithm is shown to converge at a linear rate when the problem is strongly convex. Moreover, we also develop an approximate version of the algorithm, SVRPDA-II, which further reduces the memory requirement. Finally, we evaluate the performance of our algorithms on several real-world benchmarks, and experimental results show that the proposed algorithms significantly outperform existing techniques.
Composition optimization has attracted significant attention in the optimization literature. The stochastic version of the problem, where the empirical averages are replaced by expectations, is studied in @cite_19 . The authors propose a two-timescale stochastic approximation algorithm known as SCGD, and establish convergence rates. In @cite_4 , the authors propose the ASC-PG algorithm by using a proximal gradient method to deal with nonsmooth regularizations. The works that are more closely related to our setting are @cite_11 and @cite_14 , which consider a finite-sum minimization problem (a special case of our general formulation). In @cite_11 , the authors propose the compositional-SVRG methods, which combine SCGD with the SVRG technique from @cite_1 and obtain convergence rates. In @cite_14 , the authors propose the ASCVRG algorithm, which extends to convex but non-smooth objectives.
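For concreteness, the generic empirical composition objective discussed in this line of work can be written as below; the notation is illustrative and the exact formulations and regularisers vary between the cited papers.

```latex
\min_{x}\;\; \frac{1}{n}\sum_{i=1}^{n} f_i\!\left(\frac{1}{m}\sum_{j=1}^{m} g_j(x)\right).
% Plain SGD is not directly applicable: evaluating \nabla f_i at a *sampled* inner
% average gives a biased gradient estimate, which is what SCGD-type methods address
% with an auxiliary variable tracking the inner expectation.
```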
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_1", "@cite_19", "@cite_11" ], "mid": [ "2807364911", "2765393021", "2107438106", "1494085563", "2963008041" ], "abstract": [ "We propose an accelerated stochastic compositional variance reduced gradient method for optimizing the sum of a composition function and a convex nonsmooth function. We provide an (IFO) complexity analysis for the proposed algorithm and show that it is provably faster than all the existing methods. Indeed, we show that our method achieves an asymptotic IFO complexity of @math where @math and @math are the number of inner outer component functions, improving the best-known results of @math and achieving for for convex composition problem. Experiment results on sparse mean-variance optimization with 21 real-world financial datasets confirm that our method outperforms other competing methods.", "Consider the stochastic composition optimization problem where the objective is a composition of two expected-value functions. We propose a new stochastic first-order method, namely the accelerated stochastic compositional proximal gradient (ASC-PG) method, which updates based on queries to the sampling oracle using two different timescales. The ASC-PG is the first proximal gradient method for the stochastic composition problem that can deal with nonsmooth regularization penalty. We show that the ASC-PG exhibits faster convergence than the best known algorithms, and that it achieves the optimal sample-error complexity in several important special cases. We further demonstrate the application of ASC-PG to reinforcement learning and conduct numerical experiments.", "Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.", "Classical stochastic gradient methods are well suited for minimizing expected-value objective functions. However, they do not apply to the minimization of a nonlinear function involving expected values or a composition of two expected-value functions, i.e., the problem @math minxEvfv(Ew[gw(x)]). In order to solve this stochastic composition problem, we propose a class of stochastic compositional gradient descent (SCGD) algorithms that can be viewed as stochastic versions of quasi-gradient method. SCGD update the solutions based on noisy sample gradients of @math fv,gw and use an auxiliary variable to track the unknown quantity @math Ewgw(x). We prove that the SCGD converge almost surely to an optimal solution for convex optimization problems, as long as such a solution exists. The convergence involves the interplay of two iterations with different time scales. For nonsmooth convex problems, the SCGD achieves a convergence rate of @math O(k-1 4) in the general case and @math O(k-2 3) in the strongly convex case, after taking k samples. 
For smooth convex problems, the SCGD can be accelerated to converge at a rate of @math O(k-2 7) in the general case and @math O(k-4 5) in the strongly convex case. For nonconvex problems, we prove that any limit point generated by SCGD is a stationary point, for which we also provide the convergence rate analysis. Indeed, the stochastic setting where one wants to optimize compositions of expected-value functions is very common in practice. The proposed SCGD methods find wide applications in learning, estimation, dynamic programming, etc.", "" ] }
Finally, our work is inspired by the stochastic variance reduction techniques in optimization @cite_0 @cite_1 @cite_23 @cite_3 @cite_6 , which consider the minimization of a cost that is a finite sum of many component functions. Different versions of variance-reduced stochastic gradients are constructed in these works to achieve a linear convergence rate. In particular, our variance-reduced stochastic estimators are constructed based on the idea of SVRG @cite_1 with a novel design of the control variates. Our work is also related to the SPDC algorithm @cite_6 , which also integrates dual coordinate ascent with a variance-reduced primal gradient. However, our work is different from SPDC in the following aspects. First, we consider a more general composition optimization problem while SPDC focuses on regularized empirical risk minimization with linear predictors, i.e., @math and @math is linear in @math . Second, because of the composition structures in the problem, our algorithms also need SVRG in the dual coordinate ascent update, while SPDC does not. Third, the primal update in SPDC is specifically designed for linear predictors. In contrast, our work is not restricted to linear predictors, since it relies on a novel variance-reduced gradient.
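For reference, the SVRG control variate from @cite_1 that this construction builds on is, in the plain finite-sum case (shown here only for context; the paper's estimators adapt the same idea to the composition and dual-update settings):

```latex
v_t \;=\; \nabla f_{i_t}(x_t) \;-\; \nabla f_{i_t}(\tilde{x}) \;+\; \frac{1}{n}\sum_{i=1}^{n} \nabla f_i(\tilde{x}),
% where \tilde{x} is a periodically refreshed snapshot iterate; v_t is unbiased and
% its variance shrinks as x_t and \tilde{x} approach the optimum, enabling linear rates.
```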
{ "cite_N": [ "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_23" ], "mid": [ "2107438106", "2963357609", "2758918273", "2105875671", "2135482703" ], "abstract": [ "Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.", "We consider convex-concave saddle-point problems where the objective functions may be split in many components, and extend recent stochastic variance reduction methods (such as SVRG or SAGA) to provide the first large-scale linearly convergent algorithms for this class of problems which are common in machine learning. While the algorithmic extension is straightforward, it comes with challenges and opportunities: (a) the convex minimization analysis does not apply and we use the notion of monotone operators to prove convergence, showing in particular that the same algorithm applies to a larger class of problems, such as variational inequalities, (b) there are two notions of splits, in terms of functions, or in terms of partial derivatives, (c) the split does need to be done with convex-concave terms, (d) non-uniform sampling is key to an efficient algorithm, both in theory and practice, and (e) these incremental algorithms can be easily accelerated using a simple extension of the \"catalyst\" framework, leading to an algorithm which is always superior to accelerated batch algorithms.", "We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variables. An extrapolation step on the primal variables is performed to obtain accelerated convergence rate. We also develop a mini-batch version of the SPDC method which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods.", "We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. 
In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in terms of optimizing the training error and reducing the test error quickly.", "In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method." ] }
1907.09369
2963317101
In recent years, emotion detection in text has become more popular due to its vast potential applications in marketing, political science, psychology, human-computer interaction, artificial intelligence, etc. In this work, we argue that current methods, which are based on conventional machine learning models, cannot grasp the intricacy of emotional language because they ignore the sequential nature of the text and its context. These methods, therefore, are not sufficient to create an applicable and generalizable emotion detection methodology. Understanding these limitations, we present a new network based on a bidirectional GRU model to show that capturing more meaningful information from text can significantly improve the performance of these models. The results show significant improvement, with an average increase of 26.8 points in F-measure on our test data and 38.6 on a completely new dataset.
A lot of work has been done on detecting emotion in speech or visual data @cite_1 @cite_0 @cite_16 @cite_3 , but detecting emotions in textual data is a relatively new area that demands more research. There have been many attempts to detect emotions in text using conventional machine learning techniques and handcrafted features, in which, given the dataset, the authors try to find the feature set that best represents the information in the text and then pass the converted text as feature vectors to a classifier for training @cite_7 @cite_31 @cite_15 @cite_21 @cite_36 @cite_19 @cite_39 @cite_10 @cite_24 @cite_35 @cite_20 @cite_4 @cite_13 . In these methods, some of the most important information in the text, such as the sequential nature of the data and the context, is lost during the creation of the feature set.
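By contrast, a recurrent model consumes the token sequence directly. The sketch below is a minimal bidirectional-GRU classifier of the general kind advocated here; the layer sizes, vocabulary, pooling choice, and seven-class output are assumptions, not the authors' exact network.

```python
# Minimal bidirectional-GRU text classifier sketch (illustrative architecture only).
import torch
import torch.nn as nn

class BiGRUEmotion(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=100, hidden=64, n_classes=7):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, token_ids):              # (batch, seq_len) int64 token ids
        h, _ = self.gru(self.emb(token_ids))   # (batch, seq_len, 2*hidden)
        return self.out(h.mean(dim=1))         # mean-pool over time -> class logits

logits = BiGRUEmotion()(torch.randint(1, 20000, (4, 30)))
print(logits.shape)   # torch.Size([4, 7])
```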
{ "cite_N": [ "@cite_13", "@cite_35", "@cite_4", "@cite_7", "@cite_15", "@cite_36", "@cite_21", "@cite_1", "@cite_3", "@cite_39", "@cite_0", "@cite_19", "@cite_24", "@cite_31", "@cite_16", "@cite_10", "@cite_20" ], "mid": [ "", "", "2250511935", "", "", "2252073650", "2952362487", "", "2556247010", "2786205708", "", "2191779256", "", "", "2748543394", "", "" ], "abstract": [ "", "", "Predicting emotion categories, such as anger, joy, and anxiety, expressed by a sentence is challenging due to its inherent multi-label classification difficulty and data sparseness. In this paper, we address above two challenges by incorporating the label dependence among the emotion labels and the context dependence among the contextual instances into a factor graph model. Specifically, we recast sentence-level emotion classification as a factor graph inferring problem in which the label and context dependence are modeled as various factor functions. Empirical evaluation demonstrates the great potential and effectiveness of our proposed approach to sentencelevel emotion classification. 1", "", "", "The rise of micro-blogging in recent years has resulted in significant access to emotion-laden text. Unlike emotion expressed in other textual sources (e.g., blogs, quotes in newswire, email, product reviews, or even clinical text), micro-blogs differ by (1) placing a strict limit on length, resulting radically in new forms of emotional expression, and (2) encouraging users to express their daily thoughts in real-time, often resulting in far more emotion statements than might normally occur. In this paper, we introduce a corpus collected from Twitter with annotated micro-blog posts (or “tweets”) annotated at the tweet-level with seven emotions: ANGER, DISGUST, FEAR, JOY, LOVE, SADNESS, and SURPRISE. We analyze how emotions are distributed in the data we annotated and compare it to the distributions in other emotion-annotated corpora. We also used the annotated corpus to train a classifier that automatically discovers the emotions in tweets. In addition, we present an analysis of the linguistic style used for expressing emotions our corpus. We hope that these observations will lead to the design of novel emotion detection techniques that account for linguistic style and psycholinguistic theories.", "We describe an approach to domain adaptation that is appropriate exactly in the case when one has enough target'' data to do slightly better than just using only source'' data. Our approach is incredibly simple, easy to implement as a preprocessing step (10 lines of Perl!) and outperforms state-of-the-art approaches on a range of datasets. Moreover, it is trivially extended to a multi-domain adaptation problem, where one has data from a variety of different domains.", "", "Emotion recognition represents the position and motion of facial muscles. It contributes significantly in many fields. Current approaches have not obtained good results. This paper aimed to propose a new emotion recognition system based on facial expression images. We enrolled 20 subjects and let each subject pose seven different emotions: happy, sadness, surprise, anger, disgust, fear, and neutral. Afterward, we employed biorthogonal wavelet entropy to extract multiscale features, and used fuzzy multiclass support vector machine to be the classifier. The stratified cross validation was employed as a strict validation model. The statistical analysis showed our method achieved an overall accuracy of 96.77±0.10 . 
Besides, our method is superior to three state-of-the-art methods. In all, this proposed method is efficient.", "Techniques to detect the emotions expressed in microblogs and social media posts have a wide range of applications including, detecting psychological disorders such as anxiety or depression in individuals or measuring the public mood of a community. A major challenge for automated emotion detection is that emotions are subjective concepts with fuzzy boundaries and with variations in expression and perception. To address this issue, a dimensional model of affect is utilized to define emotion classes. Further, a soft classification approach is proposed to measure the probability of assigning a message to each emotion class. We develop and evaluate a supervised learning system to automatically classify emotion in text stream messages. Our approach includes two main tasks: an offline training task and an online classification task. The first task creates models to classify emotion in text messages. For the second task, we develop a two-stage framework called EmotexStream to classify live streams of text messages for the real-time emotion tracking. Moreover, we propose an online method to measure public emotion and detect emotion burst moments in live text streams.", "", "Social media and microblog tools are increasingly used by individuals to express their feelings and opinions in the form of short text messages. Detecting emotions in text has a wide range of applications including identifying anxiety or depression of individuals and measuring well-being or public mood of a community. In this paper, we propose a new approach for automatically classifying text messages of individuals to infer their emotional states. To model emotional states, we utilize the well-established Circumplex model that characterizes aective experience along two dimensions: valence and arousal. We select Twitter messages as input data set, as they provide a very large, diverse and freely avail- able ensemble of emotions. Using hash-tags as labels, our methodology trains supervised classiers to detect multiple classes of emotion on potentially huge data sets with no manual eort. We investigate the utility of several features for emotion detection, including unigrams, emoticons, negations and punctuations. To tackle the problem of sparse and high dimensional feature vectors of messages, we utilize a lexicon of emotions. We have compared the accuracy of several machine learning algorithms, including SVM, KNN, Decision Tree, and Naive Bayes for classifying Twitter messages. Our technique has an accuracy of over 90 , while demonstrating robustness across learning algorithms.", "", "", "AimEmotion recognition based on facial expression is an important field in affective computing. Current emotion recognition systems may suffer from two shortcomings: translation in facial image may deteriorate the recognition performance, and the classifier is not robust. MethodTo solve above two problems, our team proposed a novel intelligent emotion recognition system. Our method used stationary wavelet entropy to extract features, and employed a single hidden layer feedforward neural network as the classifier. To prevent the training of the classifier fall into local optimum points, we introduced the Jaya algorithm. ResultsThe simulation results over a 20-subject 700-image dataset showed our algorithm reached an overall accuracy of 96.800.14 . 
ConclusionThis proposed approach performs better than five state-of-the-art approaches in terms of overall accuracy. Besides, the db4 wavelet performs the best among other whole db wavelet family. The 4-level wavelet decomposition is superior to other levels. In the future, we shall test other advanced features and training algorithms.", "", "" ] }
1907.09369
2963317101
In recent years, emotion detection in text has become more popular due to its vast potential applications in marketing, political science, psychology, human-computer interaction, artificial intelligence, etc. In this work, we argue that current methods which are based on conventional machine learning models cannot grasp the intricacy of emotional language by ignoring the sequential nature of the text, and the context. These methods, therefore, are not sufficient to create an applicable and generalizable emotion detection methodology. Understanding these limitations, we present a new network based on a bidirectional GRU model to show that capturing more meaningful information from text can significantly improve the performance of these models. The results show significant improvement with an average of 26.8 point increase in F-measure on our test data and 38.6 increase on the totally new dataset.
Due to this sequential nature, recurrent and convolutional neural networks have been used in many NLP tasks and have improved performance in a variety of classification tasks @cite_23 @cite_17 @cite_26 @cite_32 . There have been very few works on using deep neural networks for emotion detection in text @cite_2 @cite_29 . These models can better capture the complexity and context of the language, not only by keeping the sequential information but also by creating a hidden representation for the text as a whole and by learning the important features without any additional (and often incomplete) human-designed features.
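As an illustration of the kind of recurrent model discussed here (and of the bidirectional GRU used in this paper), the following is a minimal PyTorch sketch; the vocabulary size, dimensions, and number of emotion classes are arbitrary placeholders rather than the configuration of any cited work.

```python
# Minimal sketch of a bidirectional-GRU text classifier (PyTorch).
# Sizes are illustrative; input is a batch of token-id sequences.
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128, n_classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)   # forward + backward final states

    def forward(self, token_ids):
        x = self.emb(token_ids)                       # (batch, seq_len, emb_dim)
        _, h = self.gru(x)                            # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)           # concatenate both directions
        return self.out(h)                            # unnormalized class scores

model = BiGRUClassifier()
logits = model(torch.randint(0, 10000, (4, 20)))      # 4 sequences of length 20
print(logits.shape)                                   # torch.Size([4, 5])
```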
{ "cite_N": [ "@cite_26", "@cite_29", "@cite_32", "@cite_23", "@cite_2", "@cite_17" ], "mid": [ "2284289336", "2606292552", "2297405797", "2265846598", "2741447225", "" ], "abstract": [ "Neural network models have been demonstrated to be capable of achieving remarkable performance in sentence and document modeling. Convolutional neural network (CNN) and recurrent neural network (RNN) are two mainstream architectures for such modeling tasks, which adopt totally different ways of understanding natural languages. In this work, we combine the strengths of both architectures and propose a novel and unified model called C-LSTM for sentence representation and text classification. C-LSTM utilizes CNN to extract a sequence of higher-level phrase representations, and are fed into a long short-term memory recurrent neural network (LSTM) to obtain the sentence representation. C-LSTM is able to capture both local features of phrases as well as global and temporal sentence semantics. We evaluate the proposed architecture on sentiment classification and question classification tasks. The experimental results show that the C-LSTM outperforms both CNN and LSTM and can achieve excellent performance on these tasks.", "Contact center chats are textual conversations involving customers and agents on queries, issues, grievances etc. about products and services. Contact centers conduct periodic analysis of these chats to measure customer satisfaction, of which the chat emotion forms one crucial component. Typically, these measures are performed at chat level. However, retrospective chat-level analysis is not sufficiently actionable for agents as it does not capture the variation in the emotion distribution across the chat. Towards that, we propose two novel weakly supervised approaches for detecting fine-grained emotions in contact center chat utterances in real time. In our first approach, we identify novel contextual and meta features and treat the task of emotion prediction as a sequence labeling problem. In second approach, we propose a neural net based method for emotion prediction in call center chats that does not require extensive feature engineering. We establish the effectiveness of the proposed methods by empirically evaluating them on a real-life contact center chat dataset. We achieve average accuracy of the order 72.6 with our first approach and 74.38 with our second approach respectively.", "Recent approaches based on artificial neural networks (ANNs) have shown promising results for short-text classification. However, many short texts occur in sequences (e.g., sentences in a document or utterances in a dialog), and most existing ANN-based systems do not leverage the preceding short texts when classifying a subsequent one. In this work, we present a model based on recurrent neural networks and convolutional neural networks that incorporates the preceding short texts. Our model achieves state-of-the-art results on three different datasets for dialog act prediction.", "Text classification is a foundational task in many NLP applications. Traditional text classifiers often rely on many human-designed features, such as dictionaries, knowledge bases and special tree kernels. In contrast to traditional methods, we introduce a recurrent convolutional neural network for text classification without human-designed features. 
In our model, we apply a recurrent structure to capture contextual information as far as possible when learning word representations, which may introduce considerably less noise compared to traditional window-based neural networks. We also employ a max-pooling layer that automatically judges which words play key roles in text classification to capture the key components in texts. We conduct experiments on four commonly used datasets. The experimental results show that the proposed method outperforms the state-of-the-art methods on several datasets, particularly on document-level datasets.", "", "" ] }
1907.09177
2963671871
Advanced neural language models (NLMs) are widely used in sequence generation tasks because they are able to produce fluent and meaningful sentences. They can also be used to generate fake reviews, which can then be used to attack online review systems and influence the buying decisions of online shoppers. A problem in fake review generation is how to generate the desired sentiment topic. Existing solutions first generate an initial review based on some keywords and then modify some of the words in the initial review so that the review has the desired sentiment topic. We overcome this problem by using the GPT-2 NLM to generate a large number of high-quality reviews based on a review with the desired sentiment and then using a BERT based text classifier (with accuracy of 96 ) to filter out reviews with undesired sentiments. Because none of the words in the review are modified, fluent samples like the training data can be generated from the learned distribution. A subjective evaluation with 80 participants demonstrated that this simple method can produce reviews that are as fluent as those written by people. It also showed that the participants tended to distinguish fake reviews randomly. Two countermeasures, GROVER and GLTR, were found to be able to accurately detect fake review.
The most common attack on online review systems is a crowdturfing attack @cite_7 @cite_18 , whereby a bad actor recruits a group of workers to write fake reviews on a specified topic for a specified context and then submits them to the target website. Since this method incurs an economic cost, it is typically reserved for large-scale attacks. Automated crowdturfing, in which machine learning algorithms are used to generate fake reviews, is a less expensive and more efficient way to attack online review systems.
{ "cite_N": [ "@cite_18", "@cite_7" ], "mid": [ "1916595307", "2962991180" ], "abstract": [ "Modern Web services inevitably engender abuse, as attackers find ways to exploit a service and its user base. However, while defending against such abuse is generally considered a technical endeavor, we argue that there is an increasing role played by human labor markets. Using over seven years of data from the popular crowd-sourcing site Freelancer.com, as well data from our own active job solicitations, we characterize the labor market involved in service abuse. We identify the largest classes of abuse work, including account creation, social networking link generation and search engine optimization support, and characterize how pricing and demand have evolved in supporting this activity.", "As human computation on crowdsourcing systems has become popular and powerful for performing tasks, malicious users have started misusing these systems by posting malicious tasks, propagating manipulated contents, and targeting popular web services such as online social networks and search engines. Recently, these malicious users moved to Fiverr, a fast-growing micro-task marketplace, where workers can post crowdturfing tasks (i.e., astroturfing campaigns run by crowd workers) and malicious customers can purchase those tasks for only $5. In this paper, we present a comprehensive analysis of Fiverr. First, we identify the most popular types of crowdturfing tasks found in this marketplace and conduct case studies for these crowdturfing tasks. Then, we build crowdturfing task detection classifiers to filter these tasks and prevent them from becoming active in the marketplace. Our experimental results show that the proposed classification approach effectively detects crowdturfing tasks, achieving 97.35 accuracy. Finally, we analyze the real world impact of crowdturfing tasks by purchasing active Fiverr tasks and quantifying their impact on a target site. As part of this analysis, we show that current security systems inadequately detect crowdsourced manipulation, which confirms the necessity of our proposed crowdturfing task detection approach." ] }
1907.09177
2963671871
Advanced neural language models (NLMs) are widely used in sequence generation tasks because they are able to produce fluent and meaningful sentences. They can also be used to generate fake reviews, which can then be used to attack online review systems and influence the buying decisions of online shoppers. A problem in fake review generation is how to generate the desired sentiment topic. Existing solutions first generate an initial review based on some keywords and then modify some of the words in the initial review so that the review has the desired sentiment topic. We overcome this problem by using the GPT-2 NLM to generate a large number of high-quality reviews based on a review with the desired sentiment and then using a BERT based text classifier (with accuracy of 96 ) to filter out reviews with undesired sentiments. Because none of the words in the review are modified, fluent samples like the training data can be generated from the learned distribution. A subjective evaluation with 80 participants demonstrated that this simple method can produce reviews that are as fluent as those written by people. It also showed that the participants tended to distinguish fake reviews randomly. Two countermeasures, GROVER and GLTR, were found to be able to accurately detect fake review.
@cite_24 proposed such an attack method. Their idea is to first generate an initial fake review based on a given keyword using a long short-term memory (LSTM)-based LM. Because the initial fake review is stochastically sampled from a learned distribution, it may be irrelevant to the desired context, so specific nouns in the fake review are then replaced with ones that better fit the desired context. @cite_8 proposed a similar method for generating fake reviews that additionally requires meta information such as shop name, location, rating, etc.
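The generate-then-filter idea described in this paper's abstract — sample many candidate reviews from a language model, then keep only those a sentiment classifier labels with the desired sentiment — can be sketched with the Hugging Face transformers pipelines as follows. The prompt, model choices, and confidence threshold are illustrative assumptions, not the exact setup of the paper.

```python
# Sketch of a generate-then-filter pipeline for controlled-sentiment text.
# Assumes the Hugging Face `transformers` library; models and threshold are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
sentiment = pipeline("sentiment-analysis")   # default English sentiment model

prompt = "This restaurant was absolutely"
candidates = generator(prompt, max_length=60, num_return_sequences=5, do_sample=True)

kept = []
for c in candidates:
    text = c["generated_text"]
    pred = sentiment(text[:512])[0]          # crude truncation for the classifier input
    if pred["label"] == "POSITIVE" and pred["score"] > 0.9:
        kept.append(text)                    # keep only confidently positive samples

print(f"kept {len(kept)} of {len(candidates)} candidates")
```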
{ "cite_N": [ "@cite_24", "@cite_8" ], "mid": [ "2752337926", "2802987538" ], "abstract": [ "Malicious crowdsourcing forums are gaining traction as sources of spreading misinformation online, but are limited by the costs of hiring and managing human workers. In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect. Using Yelp reviews as an example platform, we show how a two phased review generation and customization attack can produce reviews that are indistinguishable by state-of-the-art statistical detectors. We conduct a survey-based user study to show these reviews not only evade human detection, but also score high on \"usefulness\" metrics by users. Finally, we develop novel automated defenses against these attacks, by leveraging the lossy transformation introduced by the RNN training and generation cycle. We consider countermeasures against our mechanisms, show that they produce unattractive cost-benefit tradeoffs for attackers, and that they can be further curtailed by simple constraints imposed by online service providers.", "Automatically generated fake restaurant reviews are a threat to online review systems. Recent research has shown that users have difficulties in detecting machine-generated fake reviews hiding among real restaurant reviews. The method used in this work (char-LSTM) has one drawback: it has difficulties staying in context, i.e. when it generates a review for specific target entity, the resulting review may contain phrases that are unrelated to the target, thus increasing its detectability. In this work, we present and evaluate a more sophisticated technique based on neural machine translation (NMT) with which we can generate reviews that stay on-topic. We test multiple variants of our technique using native English speakers on Amazon Mechanical Turk. We demonstrate that reviews generated by the best variant have almost optimal undetectability (class-averaged F-score 47 ). We conduct a user study with experienced users and show that our method evades detection more frequently compared to the state-of-the-art (average evasion 3.2 4 vs 1.5 4) with statistical significance, at level ( = 1 ) (Sect. 4.3). We develop very effective detection tools and reach average F-score of (97 ) in classifying these. Although fake reviews are very effective in fooling people, effective automatic detection is still feasible." ] }
1907.09328
2962816295
While search efficacy has been evaluated traditionally on the basis of result relevance, fairness of search has attracted recent attention. In this work, we define a notion of distributional fairness and provide a conceptual framework for evaluating search results based on it. As part of this, we formulate a set of axioms which an ideal evaluation framework should satisfy for distributional fairness. We show how existing TREC test collections can be repurposed to study fairness, and we measure potential data bias to inform test collection design for fair search. A set of analyses show metric divergence between relevance and fairness, and we describe a simple but flexible interpolation strategy for integrating relevance and fairness into a single metric for optimization and evaluation.
Researchers have proposed different methods to tackle bias in IR systems. These approaches include new ranking algorithms that take fairness constraints into account, post-processing methods for re-ranking the output of existing systems with respect to both individual and group fairness, and the evaluation of ranking systems in terms of fairness @cite_3 from a group fairness perspective. IBM's AI Fairness 360 toolkit (https://aif360.mybluemix.net) is an industry standard for evaluating fairness in machine learning algorithms and datasets; however, it does not include measurements for ranking systems. Recent work has also begun to explore evaluating search fairness.
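As a concrete, deliberately simple illustration of what a fairness measure for ranked outputs can look like, the sketch below compares the share of a protected group in each ranking prefix against its share in the whole candidate pool. This is only a generic proportion-based measure in the spirit of the cited work, not the specific metric proposed in this paper.

```python
# Sketch of a simple prefix-based group-fairness check for a ranked list.
# `ranking` is a list of booleans: True if the item belongs to the protected group.
def prefix_representation_gap(ranking, depths=(5, 10, 20)):
    overall = sum(ranking) / len(ranking)      # group share in the full pool
    gaps = {}
    for k in depths:
        top_k = ranking[:k]
        share = sum(top_k) / len(top_k)        # group share in the top-k prefix
        gaps[k] = share - overall              # negative => under-represented early
    return gaps

ranking = [False, False, True, False, True, False, False, True, True, True]
print(prefix_representation_gap(ranking, depths=(3, 5, 10)))
```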
{ "cite_N": [ "@cite_3" ], "mid": [ "2544318541" ], "abstract": [ "Ranking and scoring are ubiquitous. We consider the setting in which an institution, called a ranker, evaluates a set of individuals based on demographic, behavioral or other characteristics. The final output is a ranking that represents the relative quality of the individuals. While automatic and therefore seemingly objective, rankers can, and often do, discriminate against individuals and systematically disadvantage members of protected groups. This warrants a careful study of the fairness of a ranking scheme, to enable data science for social good applications, among others. In this paper we propose fairness measures for ranked outputs. We develop a data generation procedure that allows us to systematically control the degree of unfairness in the output, and study the behavior of our measures on these datasets. We then apply our proposed measures to several real datasets, and detect cases of bias. Finally, we show preliminary results of incorporating our ranked fairness measures into an optimization framework, and show potential for improving fairness of ranked outputs while maintaining accuracy. The code implementing all parts of this work is publicly available at https: github.com DataResponsibly FairRank." ] }
1907.09271
2963401517
Deterministic finite automata are one of the simplest and most practical models of computation studied in automata theory. Their conceptual extension is the non-deterministic finite automata which also have plenty of applications. In this article, we study these models through the lens of succinct data structures where our ultimate goal is to encode these mathematical objects using information-theoretically optimal number of bits along with supporting queries on them efficiently. Towards this goal, we first design a succinct data structure for representing any deterministic finite automaton @math having @math states over a @math -letter alphabet @math using @math bits of space, which can determine, given an input string @math over @math , whether @math accepts @math in @math time, using constant words of working space. When the input deterministic finite automaton is acyclic, not only we can improve the above space-bound significantly to @math bits, we also obtain optimal query time for string acceptance checking. More specifically, using our succinct representation, we can check if a given input string @math can be accepted by the acyclic deterministic finite automaton using time proportional to the length of @math , hence, the optimal query time. We also exhibit a succinct data structure for representing a non-deterministic finite automaton @math having @math states over a @math -letter alphabet @math using @math bits of space, such that given an input string @math , we can decide whether @math accepts @math efficiently in @math time. Finally, we also provide time and space-efficient algorithms for performing several standard operations such as union, intersection, and complement on the languages accepted by deterministic finite automata.
The field of succinct data structures originated with the work of Jacobson @cite_11 , and by now it is a relatively mature field in terms of the breadth of problems considered. To illustrate this, there already exists a large body of work on representing various combinatorial objects succinctly. A partial list of such objects includes trees @cite_19 @cite_18 , various special graph classes such as planar graphs @cite_6 , chordal graphs @cite_23 , partial @math -trees @cite_7 , and interval graphs @cite_20 , along with arbitrary general graphs @cite_0 , permutations @cite_21 , functions @cite_21 , and bitvectors @cite_15 , among many others. We refer the reader to the recent book by Navarro @cite_16 for a comprehensive treatment of this field. The study of succinct data structures is motivated both by theoretical curiosity and by practical needs, as these combinatorial structures arise quite often in various applications.
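To ground the kind of query interface these structures support, here is a naive (and deliberately non-succinct) Python illustration of the rank/select operations on a bitvector; real succinct representations answer the same queries in constant time within roughly the information-theoretic space, which this toy version makes no attempt to do.

```python
# Naive illustration of the rank/select interface on a bitvector.
# Succinct structures support these queries in O(1) time within n + o(n) bits;
# this toy version just scans and is only meant to show the semantics.
class NaiveBitVector:
    def __init__(self, bits):
        self.bits = list(bits)

    def rank1(self, i):
        """Number of 1s among positions 0..i-1."""
        return sum(self.bits[:i])

    def select1(self, j):
        """Position of the j-th 1 (1-indexed), or -1 if it does not exist."""
        seen = 0
        for pos, b in enumerate(self.bits):
            seen += b
            if seen == j:
                return pos
        return -1

bv = NaiveBitVector([1, 0, 1, 1, 0, 0, 1])
print(bv.rank1(4), bv.select1(3))   # 3 ones in the first 4 positions; 3rd one is at index 3
```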
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_21", "@cite_6", "@cite_0", "@cite_19", "@cite_23", "@cite_15", "@cite_16", "@cite_20", "@cite_11" ], "mid": [ "2147935317", "1988115395", "1987699222", "2024465063", "1993294649", "2093788424", "2907584067", "1974033543", "2517241835", "2964624700", "127947978" ], "abstract": [ "We propose new succinct representations of ordinal trees and match various space time lower bounds. It is known that any n-node static tree can be represented in 2n p o(n) bits so that a number of operations on the tree can be supported in constant time under the word-RAM model. However, the data structures are complicated and difficult to dynamize. We propose a simple and flexible data structure, called the range min-max tree, that reduces the large number of relevant tree operations considered in the literature to a few primitives that are carried out in constant time on polylog-sized trees. The result is extended to trees of arbitrary size, retaining constant time and reaching 2n p O(n polylog(n)) bits of space. This space is optimal for a core subset of the operations supported and significantly lower than in any previous proposal. For the dynamic case, where insertion deletion (indels) of nodes is allowed, the existing data structures support a very limited set of operations. Our data structure builds on the range min-max tree to achieve 2n p O(n log n) bits of space and O(log n) time for all operations supported in the static scenario, plus indels. We also propose an improved data structure using 2n p O(nlog log n log n) bits and improving the time to the optimal O(log n log log n) for most operations. We extend our support to forests, where whole subtrees can be attached to or detached from others, in time O(log1pe n) for any e > 0. Such operations had not been considered before. Our techniques are of independent interest. An immediate derivation yields an improved solution to range minimum maximum queries where consecutive elements differ by ± 1, achieving n p O(n polylog(n)) bits of space. A second one stores an array of numbers supporting operations sum and search and limited updates, in optimal time O(log n log log n). A third one allows representing dynamic bitmaps and sequences over alphabets of size σ, supporting rank select and indels, within zero-order entropy bounds and time O(log n log σ (log log n)2) for all operations. This time is the optimal O(log n log log n) on bitmaps and polylog-sized alphabets. This improves upon the best existing bounds for entropy-bounded storage of dynamic sequences, compressed full-text self-indexes, and compressed-space construction of the Burrows-Wheeler transform.", "Given an unlabeled, unweighted, and undirected graph with n vertices and small (but not necessarily constant) treewidth k, we consider the problem of preprocessing the graph to build space-efficient encodings (oracles) to perform various queries efficiently. We assume the word RAM model where the size of a word is Ω(logn) bits.", "We investigate the problem of succinctly representing an arbitrary permutation, @p, on 0,...,n-1 so that @p^k(i) can be computed quickly for any i and any (positive or negative) integer power k. A representation taking (1+@e)nlgn+O(1) bits suffices to compute arbitrary powers in constant time, for any positive constant @e@?1. A representation taking the optimal @?lgn!@?+o(n) bits can be used to compute arbitrary powers in O(lgn lglgn) time. 
We then consider the more general problem of succinctly representing an arbitrary function, f:[n]->[n] so that f^k(i) can be computed quickly for any i and any integer power k. We give a representation that takes (1+@e)nlgn+O(1) bits, for any positive constant @e@?1, and computes arbitrary positive powers in constant time. It can also be used to compute f^k(i), for any negative integer k, in optimal O(1+|f^k(i)|) time. We place emphasis on the redundancy, or the space beyond the information-theoretic lower bound that the data structure uses in order to support operations efficiently. A number of lower bounds have recently been shown on the redundancy of data structures. These lower bounds confirm the space-time optimality of some of our solutions. Furthermore, the redundancy of one of our structures ''surpasses'' a recent lower bound by Golynski [Golynski, SODA 2009], thus demonstrating the limitations of this lower bound.", "This paper addresses the problem of representing the connectivity information of geometric objects, using as little memory as possible. As opposed to raw compression issues, the focus here is on designing data structures that preserve the possibility of answering incidence queries in constant time. We propose, in particular, the first optimal representations for 3-connected planar graphs and triangulations, which are the most standard classes of graphs underlying meshes with spherical topology. Optimal means that these representations asymptotically match the respective entropy of the two classes, namely 2 bits per edge for 3-connected planar graphs, and 1.62 bits per triangle, or equivalently 3.24 bits per vertex for triangulations. These representations support adjacency queries between vertices and faces in constant time.", "We consider the problem of encoding graphs with n vertices and m edges compactly supporting adjacency, neighborhood and degree queries in constant time in the @Q(logn)-bit word RAM model. The adjacency query asks whether there is an edge between two vertices, the neighborhood query reports the neighbors of a given vertex in constant time per neighbor, and the degree query reports the number of incident edges to a given vertex. We study the problem in the context of succinctness, where the goal is to achieve the optimal space requirement as a function of n and m, to within lower order terms. We prove a lower bound in the cell probe model indicating it is impossible to achieve the information-theory lower bound up to lower order terms unless the graph is either too sparse (namely, m=o(n^@d) for any constant @d>0) or too dense (namely m=@w(n^2^-^@d) for any constant @d>0). Furthermore, we present a succinct encoding of graphs supporting aforementioned queries in constant time. The space requirement of the encoding is within a multiplicative 1+@e factor of the information-theory lower bound for any arbitrarily small constant @e>0. This is the best achievable space bound according to our lower bound where it applies. The space requirement of the representation achieves the information-theory lower bound tightly within lower order terms where the graph is very sparse (m=o(n^@d) for any constant @d>0), or very dense (m>n^2 lg^1^-^@dn for an arbitrarily small constant @d>0).", "We consider the implementation of abstract data types for the static objects: binary tree, rooted ordered tree, and a balanced sequence of parentheses. 
Our representations use an amount of space within a lower order term of the information theoretic minimum and support, in constant time, a richer set of navigational operations than has previously been considered in similar work. In the case of binary trees, for instance, we can move from a node to its left or right child or to the parent in constant time while retaining knowledge of the size of the subtree at which we are positioned. The approach is applied to produce a succinct representation of planar graphs in which one can test adjacency in constant time.", "", "We consider the indexable dictionary problem, which consists of storing a set S ⊆ 0,…,m − 1 for some integer m while supporting the operations of rank(x), which returns the number of elements in S that are less than x if x ∈ S, and −1 otherwise; and select(i), which returns the ith smallest element in S. We give a data structure that supports both operations in O(1) time on the RAM model and requires B(n, m) p o(n) p O(lg lg m) bits to store a set of size n, where B(n, m) e ⌊lg (m n)⌋ is the minimum number of bits required to store any n-element subset from a universe of size m. Previous dictionaries taking this space only supported (yes no) membership queries in O(1) time. In the cell probe model we can remove the O(lg lg m) additive term in the space bound, answering a question raised by Fich and Miltersen [1995] and Pagh [2001]. We present extensions and applications of our indexable dictionary data structure, including: —an information-theoretically optimal representation of a k-ary cardinal tree that supports standard operations in constant time; —a representation of a multiset of size n from 0,…,m − 1 in B(n, m p n) p o(n) bits that supports (appropriate generalizations of) rank and select operations in constant time; and p O(lg lg m) —a representation of a sequence of n nonnegative integers summing up to m in B(n, m p n) p o(n) bits that supports prefix sum queries in constant time.", "Compact data structures help represent data in reduced space while allowing it to be queried, navigated, and operated in compressed form. They are essential tools for efficiently handling massive amounts of data by exploiting the memory hierarchy. They also reduce the resources needed in distributed deployments and make better use of the limited memory in low-end devices. The field has developed rapidly, reaching a level of maturity that allows practitioners and researchers in application areas to benefit from the use of compact data structures. This first comprehensive book on the topic focuses on the structures that are most relevant for practical use. Readers will learn how the structures work, how to choose the right ones for their application scenario, and how to implement them. Researchers and students in the area will find in the book a definitive guide to the state of the art in compact data structures.", "", "Data compression is when you take a big chunk of data and crunch it down to fit into a smaller space. That data is put on ice; you have to un-crunch the compressed data to get at it. Data optimization, on the other hand, is when you take a chunk of data plus a collection of operations you can perform on that data, and crunch it into a smaller space while retaining the ability to perform the operations efficiently. This thesis investigates the problem of data optimization for some fundamental static data types, concentrating on linked data structures such as trees. 
I chose to restrict my attention to static data structures because they are easier to optimize since the optimization can be performed off-line. Data optimization comes in two different flavors: concrete and abstract. Concrete optimization finds minimal representations within a given implementation of a data structure; abstract optimization seeks implementations with guaranteed economy of space and time. I consider the problem of concrete optimization of various pointer-based implementations of trees and graphs. The only legitimate use of a pointer is as a reference, so we are free to map the pieces of a linked structure into memory as we choose. The problem is to find a mapping that maximizes overlap of the pieces, and hence minimizes the space they occupy. I solve the problem of finding a minimal representation for general unordered trees where pointers to children are stored in a block of consecutive locations. The algorithm presented is based on weighted matching. I also present an analysis showing that the average number of cons-cells required to store a binary tree of n nodes as a minimal binary DAG is asymptotic to @math lg @math . Methods for representing trees of n nodes in @math ( @math ) bits that allow efficient tree-traversal are presented. I develop tools for abstract optimization based on a succinct representation for ordered sets that supports ranking and selection. These tools are put to use in a building an @math ( @math )-bit data structure that represents n-node planar graphs, allowing efficient traversal and adjacency-testing." ] }
1907.09271
2963401517
Deterministic finite automata are one of the simplest and most practical models of computation studied in automata theory. Their conceptual extension is the non-deterministic finite automata which also have plenty of applications. In this article, we study these models through the lens of succinct data structures where our ultimate goal is to encode these mathematical objects using information-theoretically optimal number of bits along with supporting queries on them efficiently. Towards this goal, we first design a succinct data structure for representing any deterministic finite automaton @math having @math states over a @math -letter alphabet @math using @math bits of space, which can determine, given an input string @math over @math , whether @math accepts @math in @math time, using constant words of working space. When the input deterministic finite automaton is acyclic, not only we can improve the above space-bound significantly to @math bits, we also obtain optimal query time for string acceptance checking. More specifically, using our succinct representation, we can check if a given input string @math can be accepted by the acyclic deterministic finite automaton using time proportional to the length of @math , hence, the optimal query time. We also exhibit a succinct data structure for representing a non-deterministic finite automaton @math having @math states over a @math -letter alphabet @math using @math bits of space, such that given an input string @math , we can decide whether @math accepts @math efficiently in @math time. Finally, we also provide time and space-efficient algorithms for performing several standard operations such as union, intersection, and complement on the languages accepted by deterministic finite automata.
For DFAs and NFAs, beyond the basic model mentioned in the introduction, there exist many extensions and variations in the literature, for example two-way finite automata, Büchi automata, and many more. Researchers generally study the properties, limitations, and applications of these mathematical structures. One line of study that is particularly relevant to this paper is the research on counting DFAs and NFAs. Since the fifties there have been plenty of attempts at exactly counting the number of DFAs and NFAs with @math states over the alphabet @math , and the state-of-the-art results are due to @cite_12 for DFAs and @cite_3 for NFAs, respectively. We refer the reader to the survey of Domaratzki @cite_4 (and the references therein) for more details. From these results, we can deduce the information-theoretic lower bounds on the number of bits required to represent any DFA or NFA. We then complement these lower bounds by designing data structures whose size matches them, hence consuming optimal space, while supporting efficient algorithms on the succinct representation; this is the main contribution of this paper.
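The basic query that all of these representations must support — deciding whether a DFA accepts an input string — is simple to state. The sketch below uses an explicit transition table rather than any succinct encoding and is only meant to fix the semantics of acceptance checking; the example automaton is a made-up two-state DFA.

```python
# Acceptance check for a DFA given as an explicit transition table.
# This is the plain O(|x|)-time query; succinct representations answer the same
# query while storing the automaton in information-theoretically optimal space.
def dfa_accepts(delta, start, accepting, x):
    """delta: dict mapping (state, symbol) -> state."""
    state = start
    for ch in x:
        state = delta[(state, ch)]
    return state in accepting

# Example: a 2-state DFA over {'a', 'b'} accepting strings with an even number of 'a's.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
print(dfa_accepts(delta, start=0, accepting={0}, x="abba"))  # True  (two 'a's)
print(dfa_accepts(delta, start=0, accepting={0}, x="ab"))    # False (one 'a')
```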
{ "cite_N": [ "@cite_3", "@cite_4", "@cite_12" ], "mid": [ "136845799", "2404583597", "2013433986" ], "abstract": [ "We give asymptotic estimates and some explicit computations for both the number of distinct languages and the number of distinct finite languages over a k-letter alphabet that are accepted by deterministic finite automata (resp. nondeterministic finite automata) with n states.", "", "We present a bijection between the set A\"n of deterministic and accessible automata with n states on a k-letters alphabet and some diagrams, which can themselves be represented as partitions of a set of kn+1 elements into n non-empty subsets. This combinatorial construction shows that the asymptotic order of the cardinality of A\"n is related to the Stirling number knn . Our bijective approach also yields an efficient random sampler, for the uniform distribution, of automata with n states, its complexity is O(n^3^ ^2), using the framework of Boltzmann samplers." ] }
1907.09160
2964033912
Facial MicroExpressions (MEs) are spontaneous, involuntary facial movements when a person experiences an emotion but deliberately or unconsciously attempts to conceal his or her genuine emotions. Recently, ME recognition has attracted increasing attention due to its potential applications such as clinical diagnosis, business negotiation, interrogations and security. However, it is expensive to build large scale ME datasets, mainly due to the difficulty of naturally inducing spontaneous MEs. This limits the application of deep learning techniques which require lots of training data. In this paper, we propose a simple, efficient yet robust descriptor called Extended Local Binary Patterns on Three Orthogonal Planes (ELBPTOP) for ME recognition. ELBPTOP consists of three complementary binary descriptors: LBPTOP and two novel ones Radial Difference LBPTOP (RDLBPTOP) and Angular Difference LBPTOP (ADLBPTOP), which explore the local second order information along radial and angular directions contained in ME video sequences. ELBPTOP is a novel ME descriptor inspired by the unique and subtle facial movements. It is computationally efficient and only marginally increases the cost of computing LBPTOP, yet is extremely effective for ME recognition. In addition, by firstly introducing Whitened Principal Component Analysis (WPCA) to ME recognition, we can further obtain more compact and discriminative feature representations, and achieve significantly computational savings. Extensive experimental evaluation on three popular spontaneous ME datasets SMIC, CASMEII and SAMM show that our proposed ELBPTOP approach significantly outperforms previous state of the art on all three evaluated datasets. Our proposed ELBPTOP achieves 73.94 on CASMEII, which is 6.6 higher than state of the art on this dataset. More impressively, ELBPTOP increases recognition accuracy from 44.7 to 63.44 on the SAMM dataset.
Feature representation approaches for ME recognition can be divided into two distinct categories: geometric-based and appearance-based methods @cite_11 . Specifically, geometric-based features describe the facial geometry, such as the shapes and locations of facial landmarks, so they need precise landmarking and alignment procedures. By contrast, appearance-based features describe intensity and textural information such as wrinkles and shading changes, and they are more robust to illumination changes and alignment errors. Thus, appearance-based feature representation methods, including LBPTOP @cite_45 , HOG 3D @cite_6 , HOOF @cite_28 and deep learning, have been more popular in ME recognition @cite_51 .
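Since LBP is the building block of several of the appearance-based descriptors listed here (LBPTOP applies it on the XY, XT, and YT planes of a video volume and concatenates the histograms), the following numpy sketch computes basic 8-neighbour LBP codes on a single 2-D plane. It is a generic illustration rather than the exact variant used by any cited method.

```python
# Sketch of basic 8-neighbour LBP codes on a single 2-D plane (numpy).
# LBP-TOP repeats this on the XY, XT and YT planes of a video volume and
# concatenates the three histograms; this sketch covers only one plane.
import numpy as np

def lbp_codes(img):
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # 8 neighbours in a fixed order, each contributing one bit of the code
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neighbour >= center).astype(np.uint8) << bit)
    return codes

frame = np.random.randint(0, 256, size=(6, 6))
hist = np.bincount(lbp_codes(frame).ravel(), minlength=256)  # 256-bin LBP histogram
print(hist.sum())  # one code per interior pixel: 4 * 4 = 16
```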
{ "cite_N": [ "@cite_28", "@cite_6", "@cite_45", "@cite_51", "@cite_11" ], "mid": [ "2128730107", "2538953432", "2139916508", "2808103905", "2156503193" ], "abstract": [ "System theoretic approaches to action recognition model the dynamics of a scene with linear dynamical systems (LDSs) and perform classification using metrics on the space of LDSs, e.g. Binet-Cauchy kernels. However, such approaches are only applicable to time series data living in a Euclidean space, e.g. joint trajectories extracted from motion capture data or feature point trajectories extracted from video. Much of the success of recent object recognition techniques relies on the use of more complex feature descriptors, such as SIFT descriptors or HOG descriptors, which are essentially histograms. Since histograms live in a non-Euclidean space, we can no longer model their temporal evolution with LDSs, nor can we classify them using a metric for LDSs. In this paper, we propose to represent each frame of a video using a histogram of oriented optical flow (HOOF) and to recognize human actions by classifying HOOF time-series. For this purpose, we propose a generalization of the Binet-Cauchy kernels to nonlinear dynamical systems (NLDS) whose output lives in a non-Euclidean space, e.g. the space of histograms. This can be achieved by using kernels defined on the original non-Euclidean space, leading to a well-defined metric for NLDSs. We use these kernels for the classification of actions in video sequences using (HOOF) as the output of the NLDS. We evaluate our approach to recognition of human actions in several scenarios and achieve encouraging results.", "Facial micro-expressions were proven to be an important behaviour source for hostile intent and danger demeanour detection. In this paper, we present a novel approach for facial micro-expressions recognition in video sequences. First, 200 frame per second (fps) high speed camera is used to capture the face. Second, the face is divided to specific regions, then the motion in each region is recognized based on 3D-Gradients orientation histogram descriptor. For testing this approach, we create a new dataset of facial micro-expressions, that was manually tagged as a ground truth, using a high speed camera. In this work, we present recognition results of 13 different micro-expressions. (6 pages)", "Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. 
The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation", "Over the last few years, automatic facial micro-expression analysis has garnered increasing attention from experts across different disciplines because of its potential applications in various fields such as clinical diagnosis, forensic investigation and security systems. Advances in computer algorithms and video acquisition technology have rendered machine analysis of facial micro-expressions possible today. Although the study of facial micro-expressions is a well-established field in psychology, it is still relatively new from the computational perspective with many interesting problems. In this survey, we present a comprehensive review of state-of-the-art databases and methods for micro-expressions spotting and recognition. Individual stages involved in the automation of these tasks are also described and reviewed at length. In addition, we also deliberate on the challenges and future directions in this growing field of automatic facial micro-expression analysis.", "Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions despite the fact that deliberate behaviour differs in visual appearance, audio profile, and timing from spontaneously occurring behaviour. To address this problem, efforts to develop algorithms that can process naturally occurring human affective behaviour have recently emerged. Moreover, an increasing number of efforts are reported toward multimodal fusion for human affect analysis including audiovisual fusion, linguistic and paralinguistic fusion, and multi-cue visual fusion based on facial expressions, head movements, and body gestures. This paper introduces and surveys these recent advances. We first discuss human emotion perception from a psychological perspective. Next we examine available approaches to solving the problem of machine understanding of human affective behavior, and discuss important issues like the collection and availability of training and test data. We finally outline some of the scientific and engineering challenges to advancing human affect sensing technology." ] }
1907.09160
2964033912
Facial MicroExpressions (MEs) are spontaneous, involuntary facial movements when a person experiences an emotion but deliberately or unconsciously attempts to conceal his or her genuine emotions. Recently, ME recognition has attracted increasing attention due to its potential applications such as clinical diagnosis, business negotiation, interrogations and security. However, it is expensive to build large scale ME datasets, mainly due to the difficulty of naturally inducing spontaneous MEs. This limits the application of deep learning techniques which require lots of training data. In this paper, we propose a simple, efficient yet robust descriptor called Extended Local Binary Patterns on Three Orthogonal Planes (ELBPTOP) for ME recognition. ELBPTOP consists of three complementary binary descriptors: LBPTOP and two novel ones Radial Difference LBPTOP (RDLBPTOP) and Angular Difference LBPTOP (ADLBPTOP), which explore the local second order information along radial and angular directions contained in ME video sequences. ELBPTOP is a novel ME descriptor inspired by the unique and subtle facial movements. It is computationally efficient and only marginally increases the cost of computing LBPTOP, yet is extremely effective for ME recognition. In addition, by firstly introducing Whitened Principal Component Analysis (WPCA) to ME recognition, we can further obtain more compact and discriminative feature representations, and achieve significantly computational savings. Extensive experimental evaluation on three popular spontaneous ME datasets SMIC, CASMEII and SAMM show that our proposed ELBPTOP approach significantly outperforms previous state of the art on all three evaluated datasets. Our proposed ELBPTOP achieves 73.94 on CASMEII, which is 6.6 higher than state of the art on this dataset. More impressively, ELBPTOP increases recognition accuracy from 44.7 to 63.44 on the SAMM dataset.
Since the pioneering work by Pfister @cite_37 , LBPTOP has emerged as the most popular approach for spontaneous ME analysis, and quite a few variants have been proposed. LBP with Six Intersection Points (LBPSIP) @cite_16 is based on three intersecting lines crossing over the center point. LBP with Mean Orthogonal Planes (LBP-MOP) @cite_43 first computes a mean image for each of the three orthogonal planes and then computes LBP on these three mean planes. By reducing redundant information, LBPSIP and LBPMOP achieved better performance. @cite_0 explores two effective binary face descriptors, Hot Wheel Patterns @cite_0 and Dual-Cross Patterns @cite_44 , and leverages abundant labelled macro-expression data to compensate for the scarcity of labelled micro-expressions. Besides computing the sign of pixel differences, Spatio-Temporal Completed Local Quantized Patterns (STCLQP) @cite_31 also exploits the complementary components of magnitudes and orientations. Decorrelated Local Spatiotemporal Directional Features (DLSTD) @cite_17 uses Robust Principal Component Analysis (RPCA) @cite_23 to extract subtle motion information and divides the face into 16 Regions of Interest (ROIs) to utilize Action Unit (AU) information. Spatio-Temporal Local Radon Binary Pattern (STRBP) @cite_14 uses the Radon transform to obtain robust shape features, while Spatiotemporal Local Binary Pattern with Integral Projection (STLBP-IP) @cite_50 turns to integral projections to preserve shape attributes.
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_0", "@cite_43", "@cite_44", "@cite_23", "@cite_50", "@cite_31", "@cite_16", "@cite_17" ], "mid": [ "2059068649", "2782644834", "2737463186", "234798790", "1901075642", "2131628350", "2237362194", "2156489769", "2263218431", "565148957" ], "abstract": [ "Facial micro-expressions are rapid involuntary facial expressions which reveal suppressed affect. To the best knowledge of the authors, there is no previous work that successfully recognises spontaneous facial micro-expressions. In this paper we show how a temporal interpolation model together with the first comprehensive spontaneous micro-expression corpus enable us to accurately recognise these very short expressions. We designed an induced emotion suppression experiment to collect the new corpus using a high-speed camera. The system is the first to recognise spontaneous facial micro-expressions and achieves very promising results that compare favourably with the human micro-expression detection accuracy.", "Micro-expressions are difficult to be observed by human beings due to its low intensity and short duration. Recently, several works have been developed to resolve the problems of micro-expression recognition caused by subtle intensity and short duration. One of them, Local binary pattern from three orthogonal planes (LBP-TOP) is primarily used to recognize micro-expression from the video recorded by high-speed camera. Several variances of LBP-TOP have also been developed to promisingly improve the performance of LBP-TOP for microexpression recognition. However, these variances of LBP-TOP including LBP-TOP cannot well extract the subtle movements of micro-expression so that they have the low performance. In this paper, we propose spontaneous local radon-based binary pattern to analyze micro-expressions with subtle facial movements. Firstly, it extracts the sparse information by using robust principal component analysis since micro-expression data are sparse in both temporal and spatial domains caused by short duration and low intensity. These sparse information can provide much motion information to dynamic feature descriptor. Furthermore, it employs radon transform to obtain the shape features from three orthogonal planes, as radon transform is robustness to the same histogram distribution of two images. Finally, one-dimensional LBP is employed in these shape features for constructing the spatiotemporal features for microexpression video. Intensive experiments are conducted on two available published micro-expression databases including SMIC and CASME2 databases for evaluating the performance of the proposed method. Experimental results demonstrate that the proposed method achieves promising performance in microexpression recognition.", "Abstract In this paper, we propose three effective binary face descriptor learning methods, namely dual-cross patterns from three orthogonal planes (DCP-TOP), hot wheel patterns (HWP) and HWP-TOP for macro micro-expression representation. We use feature selection to make the binary descriptors compact. Because of the limited labeled micro-expression samples, we leverage abundant labeled macro-expression and speech samples to train a more accurate classifier. Coupled metric learning algorithm is employed to model the shared features between micro-expression samples and macro-information. Smooth SVM (SSVM) is selected as a classifier to evaluate the performance of micro-expression recognition. 
Extensive experimental results show that our proposed methods yield the state-of-the-art classification accuracies on the CASMEII database.", "Micro-expression recognition is still in the preliminary stage, owing much to the numerous difficulties faced in the development of datasets. Since micro-expression is an important affective clue for clinical diagnosis and deceit analysis, much effort has gone into the creation of these datasets for research purposes. There are currently two publicly available spontaneous micro-expression datasets—SMIC and CASME II, both with baseline results released using the widely used dynamic texture descriptor LBP-TOP for feature extraction. Although LBP-TOP is popular and widely used, it is still not compact enough. In this paper, we draw further inspiration from the concept of LBP-TOP that considers three orthogonal planes by proposing two efficient approaches for feature extraction. The compact robust form described by the proposed LBP-Six Intersection Points (SIP) and a super-compact LBP-Three Mean Orthogonal Planes (MOP) not only preserves the essential patterns, but also reduces the redundancy that affects the discriminality of the encoded features. Through a comprehensive set of experiments, we demonstrate the strengths of our approaches in terms of recognition accuracy and efficiency.", "To perform unconstrained face recognition robust to variations in illumination, pose and expression, this paper presents a new scheme to extract “Multi-Directional Multi-Level Dual-Cross Patterns” (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient and only doubles the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PERL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms the state-of-the-art local descriptors (e.g., LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance is achieved on the challenging LFW and FRGC 2.0 databases by deploying MDML-DCPs in a simple recognition scheme.", "Principal component analysis is a fundamental operation in computational data analysis, with myriad applications ranging from web search to bioinformatics to computer vision and image analysis. However, its performance and applicability in real scenarios are limited by a lack of robustness to outlying or corrupted observations. This paper considers the idealized \"robust principal component analysis\" problem of recovering a low rank matrix A from corrupted observations D = A + E. Here, the corrupted entries E are unknown and the errors can be arbitrarily large (modeling grossly corrupted observations common in visual and bioinformatic data), but are assumed to be sparse. 
We prove that most matrices A can be efficiently and exactly recovered from most error sign-and-support patterns by solving a simple convex program, for which we give a fast and provably convergent algorithm. Our result holds even when the rank of A grows nearly proportionally (up to a logarithmic factor) to the dimensionality of the observation space and the number of errors E grows in proportion to the total number of entries in the matrix. A by-product of our analysis is the first proportional growth results for the related problem of completing a low-rank matrix from a small fraction of its entries. Simulations and real-data examples corroborate the theoretical results, and suggest potential applications in computer vision.", "Recently, there are increasing interests in inferring mirco-expression from facial image sequences. For micro-expression recognition, feature extraction is an important critical issue. In this paper, we proposes a novel framework based on a new spatiotemporal facial representation to analyze micro-expressions with subtle facial movement. Firstly, an integral projection method based on difference images is utilized for obtaining horizontal and vertical projection, which can preserve the shape attributes of facial images and increase the discrimination for micro-expressions. Furthermore, we employ the local binary pattern operators to extract the appearance and motion features on horizontal and vertical projections. Intensive experiments are conducted on three available published micro-expression databases for evaluating the performance of the method. Experimental results demonstrate that the new spatiotemporal descriptor can achieve promising performance in micro-expression recognition.", "Spontaneous facial micro-expression analysis has become an active task for recognizing suppressed and involuntary facial expressions shown on the face of humans. Recently, Local Binary Pattern from Three Orthogonal Planes (LBP-TOP) has been employed for micro-expression analysis. However, LBP-TOP suffers from two critical problems, causing a decrease in the performance of micro-expression analysis. It generally extracts appearance and motion features from the sign-based difference between two pixels but not yet considers other useful information. As well, LBP-TOP commonly uses classical pattern types which may be not optimal for local structure in some applications. This paper proposes SpatioTemporal Completed Local Quantization Patterns (STCLQP) for facial micro-expression analysis. Firstly, STCLQP extracts three interesting information containing sign, magnitude and orientation components. Secondly, an efficient vector quantization and codebook selection are developed for each component in appearance and temporal domains to learn compact and discriminative codebooks for generalizing classical pattern types. Finally, based on discriminative codebooks, spatiotemporal features of sign, magnitude and orientation components are extracted and fused. Experiments are conducted on three publicly available facial micro-expression databases. Some interesting findings about the neighboring patterns and the component analysis are concluded. Comparing with the state of the art, experimental results demonstrate that STCLQP achieves a substantial improvement for analyzing facial micro-expressions. 
HighlightsWe propose spatiotemporal completed local quantized pattern for micro-expression analysis.We propose to use three useful information, including the sign-based, magnitude-based and orientation-based difference of pixels for LBP.We propose to use an efficient vector quantization and discriminative codebook selection to make LBP-TOP more discriminative and compact.We evaluate the framework on three publicly available facial micro-expression databases.We evaluate the influence of parameters, different components and codebook selection to spatiotemporal completed local quantized pattern.", "Facial micro-expression recognition is an upcoming area in computer vision research. Up until the recent emergence of the extensive CASMEII spontaneous micro-expression database, there were numerous obstacles faced in the elicitation and labeling of data involving facial micro-expressions. In this paper, we propose the Local Binary Patterns with Six Intersection Points (LBP-SIP) volumetric descriptor based on the three intersecting lines crossing over the center point. The proposed LBP-SIP reduces the redundancy in LBP-TOP patterns, providing a more compact and lightweight representation; leading to more efficient computational complexity. Furthermore, we also incorporated a Gaussian multi-resolution pyramid to our proposed approach by concatenating the patterns across all pyramid levels. Using an SVM classifier with leave-one-sample-out cross validation, we achieve the best recognition accuracy of 67.21 , surpassing the baseline performance with further computational efficiency.", "One of important cues of deception detection is micro-expression. It has three characteristics: short duration, low intensity and usually local movements. These characteristics imply that micro-expression is sparse. In this paper, we use the sparse part of Robust PCA (RPCA) to extract the subtle motion information of micro-expression. The local texture features of the information are extracted by Local Spatiotemporal Directional Features (LSTD). In order to extract more effective local features, 16 Regions of Interest (ROIs) are assigned based on the Facial Action Coding System (FACS). The experimental results on two micro-expression databases show the proposed method gain better performance. Moreover, the proposed method may further be used to extract other subtle motion information (such as lip-reading, the human pulse, and micro-gesture etc.) from video." ] }
1907.09160
2964033912
Facial MicroExpressions (MEs) are spontaneous, involuntary facial movements when a person experiences an emotion but deliberately or unconsciously attempts to conceal his or her genuine emotions. Recently, ME recognition has attracted increasing attention due to its potential applications such as clinical diagnosis, business negotiation, interrogations and security. However, it is expensive to build large scale ME datasets, mainly due to the difficulty of naturally inducing spontaneous MEs. This limits the application of deep learning techniques which require lots of training data. In this paper, we propose a simple, efficient yet robust descriptor called Extended Local Binary Patterns on Three Orthogonal Planes (ELBPTOP) for ME recognition. ELBPTOP consists of three complementary binary descriptors: LBPTOP and two novel ones Radial Difference LBPTOP (RDLBPTOP) and Angular Difference LBPTOP (ADLBPTOP), which explore the local second order information along radial and angular directions contained in ME video sequences. ELBPTOP is a novel ME descriptor inspired by the unique and subtle facial movements. It is computationally efficient and only marginally increases the cost of computing LBPTOP, yet is extremely effective for ME recognition. In addition, by firstly introducing Whitened Principal Component Analysis (WPCA) to ME recognition, we can further obtain more compact and discriminative feature representations, and achieve significantly computational savings. Extensive experimental evaluation on three popular spontaneous ME datasets SMIC, CASMEII and SAMM show that our proposed ELBPTOP approach significantly outperforms previous state of the art on all three evaluated datasets. Our proposed ELBPTOP achieves 73.94 on CASMEII, which is 6.6 higher than state of the art on this dataset. More impressively, ELBPTOP increases recognition accuracy from 44.7 to 63.44 on the SAMM dataset.
@cite_35 combines a shallow Convolutional Neural Network (CNN) with a Long Short-Term Memory (LSTM) network. Other architectures are explored in the Dual Temporal Scale Convolutional Neural Network (DTSCNN) @cite_56 , the 3D Flow Convolutional Neural Network (3DFCNN) @cite_15 and the Micro-Expression Recognition algorithm using Recurrent CNNs (MER-RCNN) @cite_5 . These methods achieve some improvements in ME recognition, but they still fall significantly short of state-of-the-art handcrafted features, mainly due to the lack of large scale ME data.
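The cited architectures are not reproduced here, but the common recipe of pairing a per-frame CNN with a recurrent model over time can be sketched as follows. This is a minimal, hypothetical PyTorch sketch (clip length, feature dimension and number of classes are assumed for illustration), not the network of any cited paper:

import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    def __init__(self, num_classes=5, feat_dim=64, hidden=128):
        super().__init__()
        # Shallow CNN applied independently to every frame of the clip.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
        )
        # LSTM aggregates the per-frame features over time.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                      # clips: (B, T, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))      # (B*T, feat_dim)
        feats = feats.view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)             # h_n: (1, B, hidden)
        return self.head(h_n[-1])                  # one logit vector per clip

logits = CNNLSTMClassifier()(torch.randn(2, 16, 1, 64, 64))   # (2, 5)

A real system would add face alignment, apex spotting and stronger regularisation; the point is only the CNN-per-frame, LSTM-over-time factorisation described above.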
{ "cite_N": [ "@cite_35", "@cite_15", "@cite_5", "@cite_56" ], "mid": [ "2527254703", "2900180747", "2910652840", "2760370272" ], "abstract": [ "Recognizing spontaneous micro-expression in video sequences is a challenging problem. In this paper, we propose a new method of small scale spatio-temporal feature learning. The proposed learning method consists of two parts. First, the spatial features of micro-expressions at different expression-states (i.e., onset, onset to apex transition, apex, apex to offset transition and offset) are encoded using convolutional neural networks (CNN). The expression-states are taken into account in the objective functions, to improve the expression class separability of the learned feature representation. Next, the learned spatial features with expression-state constraints are transferred to learn temporal features of micro-expression. The temporal feature learning encodes the temporal characteristics of the different states of the micro-expression using long short-term memory (LSTM) recurrent neural networks. Extensive and comprehensive experiments have been conducted on the publically available CASME II micro-expression dataset. The experimental results showed that the proposed method outperformed state-of-the-art micro-expression recognition methods in terms of recognition accuracy.", "Micro-expression recognition (MER) is a growing field of research which is currently in its early stage of development. Unlike conventional macro-expressions, micro-expressions occur at a very short duration and are elicited in a spontaneous manner from emotional stimuli. While existing methods for solving MER are largely non-deep-learning-based methods, deep convolutional neural network (CNN) has shown to work very well on such as face recognition, facial expression recognition, and action recognition. In this article, we propose applying the 3D flow-based CNNs model for video-based micro-expression recognition, which extracts deeply learned features that are able to characterize fine motion flow arising from minute facial movements. Results from comprehensive experiments on three benchmark datasets—SMIC, CASME CASME II, showed a marked improvement over state-of-the-art methods, hence proving the effectiveness of our fairly easy CNN model as the deep learning benchmark for facial MER.", "The automatic recognition of spontaneous facial micro-expressions becomes prevalent as it reveals the actual emotion of humans. However, handcrafted features employed for recognizing micro-expressions are designed for general applications and thus cannot well capture the subtle facial deformations of micro-expressions. To address this problem, we propose an end-to-end deep learning framework to suit the particular needs of micro-expression recognition (MER). In the deep model, re- current convolutional networks are utilized to learn the representation of subtle changes from image sequences. To guarantee the learning of deep model, we present a temporal jittering procedure to greatly enrich the training samples. Through performing the experiments on three spontaneous micro-expression datasets, i.e., SMIC, CASME, and CASME2, we verify the effectiveness of our proposed MER approach.", "Facial micro-expression is a brief involuntary facial movement and can reveal the genuine emotion that people try to conceal. 
Traditional methods of spontaneous micro-expression recognition rely excessively on sophisticated hand-crafted feature design and the recognition rate is not high enough for its practical application. In this paper, we proposed a Dual Temporal Scale Convolutional Neural Network (DTSCNN) for spontaneous micro-expressions recognition. The DTSCNN is a two-stream network. Different of stream of DTSCNN is used to adapt to different frame rate of micro-expression video clips. Each stream of DTSCNN consists of an independent shallow network for avoiding the overfitting problem. Meanwhile, we fed the networks with optical-flow sequences to ensure that the shallow networks can further acquire higher-level features. Experimental results on spontaneous micro-expression databases (CASME I II) showed that our method can achieve a recognition rate almost 10 higher than what some state-of-the-art method can achieve." ] }
1907.09160
2964033912
Facial MicroExpressions (MEs) are spontaneous, involuntary facial movements when a person experiences an emotion but deliberately or unconsciously attempts to conceal his or her genuine emotions. Recently, ME recognition has attracted increasing attention due to its potential applications such as clinical diagnosis, business negotiation, interrogations and security. However, it is expensive to build large scale ME datasets, mainly due to the difficulty of naturally inducing spontaneous MEs. This limits the application of deep learning techniques which require lots of training data. In this paper, we propose a simple, efficient yet robust descriptor called Extended Local Binary Patterns on Three Orthogonal Planes (ELBPTOP) for ME recognition. ELBPTOP consists of three complementary binary descriptors: LBPTOP and two novel ones Radial Difference LBPTOP (RDLBPTOP) and Angular Difference LBPTOP (ADLBPTOP), which explore the local second order information along radial and angular directions contained in ME video sequences. ELBPTOP is a novel ME descriptor inspired by the unique and subtle facial movements. It is computationally efficient and only marginally increases the cost of computing LBPTOP, yet is extremely effective for ME recognition. In addition, by firstly introducing Whitened Principal Component Analysis (WPCA) to ME recognition, we can further obtain more compact and discriminative feature representations, and achieve significantly computational savings. Extensive experimental evaluation on three popular spontaneous ME datasets SMIC, CASMEII and SAMM show that our proposed ELBPTOP approach significantly outperforms previous state of the art on all three evaluated datasets. Our proposed ELBPTOP achieves 73.94 on CASMEII, which is 6.6 higher than state of the art on this dataset. More impressively, ELBPTOP increases recognition accuracy from 44.7 to 63.44 on the SAMM dataset.
LBP was first proposed in @cite_41 , and a completed version was developed in @cite_9 . Later, it was introduced to face recognition in @cite_8 , and its 3D extension, LBPTOP, was proposed in @cite_45 with application to facial expression analysis.
{ "cite_N": [ "@cite_41", "@cite_9", "@cite_45", "@cite_8" ], "mid": [ "2039051707", "2163352848", "2139916508", "2163808566" ], "abstract": [ "This paper evaluates the performance both of some texture measures which have been successfully used in various applications and of some new promising approaches proposed recently. For classification a method based on Kullback discrimination of sample and prototype distributions is used. The classification results for single features with one-dimensional feature value distributions and for pairs of complementary features with two-dimensional distributions are presented", "Presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. The method is based on recognizing that certain local binary patterns, termed \"uniform,\" are fundamental properties of local image texture and their occurrence histogram is proven to be a very powerful texture feature. We derive a generalized gray-scale and rotation invariant operator presentation that allows for detecting the \"uniform\" patterns for any quantization of the angular space and for any spatial resolution and presents a method for combining multiple operators for multiresolution analysis. The proposed approach is very robust in terms of gray-scale variations since the operator is, by definition, invariant against any monotonic transformation of the gray scale. Another advantage is computational simplicity as the operator can be realized with a few operations in a small neighborhood and a lookup table. Experimental results demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns.", "Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation", "This paper presents a novel and efficient facial image representation based on local binary pattern (LBP) texture features. The face image is divided into several regions from which the LBP feature distributions are extracted and concatenated into an enhanced feature vector to be used as a face descriptor. The performance of the proposed method is assessed in the face recognition problem under different challenges. 
Other applications and several extensions are also discussed" ] }
1907.09160
2964033912
Facial MicroExpressions (MEs) are spontaneous, involuntary facial movements when a person experiences an emotion but deliberately or unconsciously attempts to conceal his or her genuine emotions. Recently, ME recognition has attracted increasing attention due to its potential applications such as clinical diagnosis, business negotiation, interrogations and security. However, it is expensive to build large scale ME datasets, mainly due to the difficulty of naturally inducing spontaneous MEs. This limits the application of deep learning techniques which require lots of training data. In this paper, we propose a simple, efficient yet robust descriptor called Extended Local Binary Patterns on Three Orthogonal Planes (ELBPTOP) for ME recognition. ELBPTOP consists of three complementary binary descriptors: LBPTOP and two novel ones Radial Difference LBPTOP (RDLBPTOP) and Angular Difference LBPTOP (ADLBPTOP), which explore the local second order information along radial and angular directions contained in ME video sequences. ELBPTOP is a novel ME descriptor inspired by the unique and subtle facial movements. It is computationally efficient and only marginally increases the cost of computing LBPTOP, yet is extremely effective for ME recognition. In addition, by firstly introducing Whitened Principal Component Analysis (WPCA) to ME recognition, we can further obtain more compact and discriminative feature representations, and achieve significantly computational savings. Extensive experimental evaluation on three popular spontaneous ME datasets SMIC, CASMEII and SAMM show that our proposed ELBPTOP approach significantly outperforms previous state of the art on all three evaluated datasets. Our proposed ELBPTOP achieves 73.94 on CASMEII, which is 6.6 higher than state of the art on this dataset. More impressively, ELBPTOP increases recognition accuracy from 44.7 to 63.44 on the SAMM dataset.
LBPTOP @cite_45 extends LBP to 3D by extracting LBP patterns separately from three orthogonal planes: the spatial plane (XY), as in regular LBP, the vertical spatiotemporal plane (YT) and the horizontal spatiotemporal plane (XT), as illustrated in Figure (b).
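For illustration, the basic computation can be sketched in NumPy as follows. This is a simplified sketch that keeps only the core idea (radius-1, 8-neighbour codes, no uniform patterns, no bilinear interpolation, no block division), so it is not the exact descriptor of @cite_45 :

import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour LBP codes for the interior pixels of a 2-D array."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nbr = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nbr >= c).astype(np.uint8) << bit
    return codes

def lbp_top(clip):
    """clip: (T, H, W) grey-level video. Concatenated XY, XT and YT histograms."""
    hist = np.zeros((3, 256), dtype=np.int64)
    xy_slices = [clip[t] for t in range(clip.shape[0])]        # spatial planes
    xt_slices = [clip[:, y, :] for y in range(clip.shape[1])]  # horizontal-temporal planes
    yt_slices = [clip[:, :, x] for x in range(clip.shape[2])]  # vertical-temporal planes
    for p, slices in enumerate((xy_slices, xt_slices, yt_slices)):
        for s in slices:
            hist[p] += np.bincount(lbp_codes(s).ravel(), minlength=256)
    return hist.ravel()                                        # 3 x 256 features

feat = lbp_top(np.random.randint(0, 256, size=(20, 64, 64)))

In practice the clip is further divided into spatial blocks, histograms are computed per block and per plane before concatenation, and uniform-pattern binning is used to shorten the 256-bin histograms.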
{ "cite_N": [ "@cite_45" ], "mid": [ "2139916508" ], "abstract": [ "Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation" ] }
1907.09173
2964156559
With the rapid development of computing technology, wearable devices such as smart phones and wristbands make it easy to get access to people's health information including activities, sleep, sports, etc. Smart healthcare achieves great success by training machine learning models on a large quantity of user data. However, there are two critical challenges. Firstly, user data often exists in the form of isolated islands, making it difficult to perform aggregation without compromising privacy security. Secondly, the models trained on the cloud fail on personalization. In this paper, we propose FedHealth, the first federated transfer learning framework for wearable healthcare to tackle these challenges. FedHealth performs data aggregation through federated learning, and then builds personalized models by transfer learning. It is able to achieve accurate and personalized healthcare without compromising privacy and security. Experiments demonstrate that FedHealth produces higher accuracy (5.3 improvement) for wearable activity recognition when compared to traditional methods. FedHealth is general and extensible and has the potential to be used in many healthcare applications.
A comprehensive survey on federated learning can be found in @cite_45 . Federated machine learning was first proposed by Google @cite_38 , where machine learning models are trained on distributed mobile phones all over the world; the key idea is to protect user data throughout the process. Since then, other researchers have focused on privacy-preserving machine learning @cite_28 @cite_13 @cite_11 , federated multi-task learning @cite_26 , as well as personalized federated learning @cite_27 . Federated learning can thus resolve the data islanding problem through privacy-preserving model training across the network.
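As a concrete illustration of the basic federated averaging idea (not the FedHealth algorithm itself), the following minimal NumPy sketch trains a logistic-regression model whose parameters, but never the raw data, leave the clients; the client data, learning rate and round counts below are placeholders:

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client: a few epochs of logistic-regression gradient descent on private data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)          # gradient step on the local loss
    return w

def federated_averaging(global_w, clients, rounds=10):
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:                      # raw data never leaves the client
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        # Server aggregates parameters, weighted by local dataset size.
        global_w = np.average(updates, axis=0, weights=sizes)
    return global_w

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(4)]
w = federated_averaging(np.zeros(3), clients)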
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_28", "@cite_27", "@cite_45", "@cite_13", "@cite_11" ], "mid": [ "2530417694", "", "2767079719", "2788629937", "2912213068", "", "2777914285" ], "abstract": [ "We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users' mobile devices instead of logging it to a data center for training. In federated optimziation, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have extremely large number of devices in the network --- as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of optimization.", "", "We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers $1.73 x communication expansion for 210 users and 220-dimensional vectors, and 1.98 x expansion for 214 users and 224-dimensional vectors over sending data in the clear.", "Recommender systems have been widely studied from the machine learning perspective, where it is crucial to share information among users while preserving user privacy. In this work, we present a federated meta-learning framework for recommendation in which user information is shared at the level of algorithm, instead of model or data adopted in previous approaches. In this framework, user-specific recommendation models are locally trained by a shared parameterized algorithm, which preserves user privacy and at the same time utilizes information from other users to help model training. Interestingly, the model thus trained exhibits a high capacity at a small scale, which is energy- and communication-efficient. 
Experimental results show that recommendation models trained by meta-learning algorithms in the proposed framework outperform the state-of-the-art in accuracy and scale. For example, on a production dataset, a shared model under Google Federated Learning (, 2017) with 900,000 parameters has prediction accuracy 76.72 , while a shared algorithm under federated meta-learning with less than 30,000 parameters achieves accuracy of 86.23 .", "Today’s artificial intelligence still faces two major challenges. One is that, in most industries, data exists in the form of isolated islands. The other is the strengthening of data privacy and security. We propose a possible solution to these challenges: secure federated learning. Beyond the federated-learning framework first proposed by Google in 2016, we introduce a comprehensive secure federated-learning framework, which includes horizontal federated learning, vertical federated learning, and federated transfer learning. We provide definitions, architectures, and applications for the federated-learning framework, and provide a comprehensive survey of existing works on this subject. In addition, we propose building data networks among organizations based on federated mechanisms as an effective solution to allowing knowledge to be shared without compromising user privacy.", "", "Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client's contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance." ] }
1907.09173
2964156559
With the rapid development of computing technology, wearable devices such as smart phones and wristbands make it easy to get access to people's health information including activities, sleep, sports, etc. Smart healthcare achieves great success by training machine learning models on a large quantity of user data. However, there are two critical challenges. Firstly, user data often exists in the form of isolated islands, making it difficult to perform aggregation without compromising privacy security. Secondly, the models trained on the cloud fail on personalization. In this paper, we propose FedHealth, the first federated transfer learning framework for wearable healthcare to tackle these challenges. FedHealth performs data aggregation through federated learning, and then builds personalized models by transfer learning. It is able to achieve accurate and personalized healthcare without compromising privacy and security. Experiments demonstrate that FedHealth produces higher accuracy (5.3 improvement) for wearable activity recognition when compared to traditional methods. FedHealth is general and extensible and has the potential to be used in many healthcare applications.
According to @cite_45 , federated learning can mainly be classified into three types: 1) horizontal federated learning, where organizations share partial features; 2) vertical federated learning, where organizations share partial samples; and 3) federated transfer learning, where neither samples nor features have much in common. FedHealth belongs to the federated transfer learning category, and it is the first of its kind tailored for wearable healthcare applications.
{ "cite_N": [ "@cite_45" ], "mid": [ "2912213068" ], "abstract": [ "Today’s artificial intelligence still faces two major challenges. One is that, in most industries, data exists in the form of isolated islands. The other is the strengthening of data privacy and security. We propose a possible solution to these challenges: secure federated learning. Beyond the federated-learning framework first proposed by Google in 2016, we introduce a comprehensive secure federated-learning framework, which includes horizontal federated learning, vertical federated learning, and federated transfer learning. We provide definitions, architectures, and applications for the federated-learning framework, and provide a comprehensive survey of existing works on this subject. In addition, we propose building data networks among organizations based on federated mechanisms as an effective solution to allowing knowledge to be shared without compromising user privacy." ] }
1907.09173
2964156559
With the rapid development of computing technology, wearable devices such as smart phones and wristbands make it easy to get access to people's health information including activities, sleep, sports, etc. Smart healthcare achieves great success by training machine learning models on a large quantity of user data. However, there are two critical challenges. Firstly, user data often exists in the form of isolated islands, making it difficult to perform aggregation without compromising privacy security. Secondly, the models trained on the cloud fail on personalization. In this paper, we propose FedHealth, the first federated transfer learning framework for wearable healthcare to tackle these challenges. FedHealth performs data aggregation through federated learning, and then builds personalized models by transfer learning. It is able to achieve accurate and personalized healthcare without compromising privacy and security. Experiments demonstrate that FedHealth produces higher accuracy (5.3 improvement) for wearable activity recognition when compared to traditional methods. FedHealth is general and extensible and has the potential to be used in many healthcare applications.
Transfer learning aims at transferring knowledge from existing domains to a new domain. In the setting of transfer learning, the domains are often different but related, which makes knowledge transfer possible. The key idea is to reduce the distribution divergence between different domains. To this end, there are two main kinds of approaches: 1) instance reweighting @cite_39 @cite_32 , which reuses samples from the source domain according to some weighting technique; and 2) feature matching, which either performs subspace learning by exploiting the subspace geometrical structure @cite_0 @cite_7 @cite_43 @cite_34 , or distribution alignment to reduce the marginal or conditional distribution divergence between domains @cite_21 @cite_46 @cite_18 @cite_40 @cite_1 . Recently, deep transfer learning methods have achieved considerable success in many application fields @cite_16 @cite_19 @cite_14 . For a complete survey, please refer to @cite_5 .
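A recurring ingredient of the distribution-alignment methods above is an empirical divergence between source and target features, most commonly the Maximum Mean Discrepancy (MMD). A minimal NumPy sketch of a (biased) RBF-kernel MMD estimate, with an assumed bandwidth gamma, is:

import numpy as np

def rbf_mmd2(Xs, Xt, gamma=1.0):
    """Squared MMD between two samples under the kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
        return np.exp(-gamma * d2)
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2.0 * k(Xs, Xt).mean()

rng = np.random.default_rng(0)
Xs = rng.normal(loc=0.0, size=(100, 8))       # source-domain features
Xt = rng.normal(loc=0.7, size=(120, 8))       # shifted target-domain features
print(rbf_mmd2(Xs, Xt))                       # shrinks as the two distributions align

Feature-matching methods then learn a transformation (or network) that drives such a divergence towards zero while preserving the discriminative structure of the source domain.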
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_7", "@cite_21", "@cite_1", "@cite_32", "@cite_16", "@cite_39", "@cite_0", "@cite_43", "@cite_40", "@cite_19", "@cite_5", "@cite_46", "@cite_34" ], "mid": [ "2773817138", "1565327149", "2963275094", "2965843558", "", "2811380766", "2312004824", "1990579857", "2884771968", "2526468814", "2115403315", "2963826681", "", "", "" ], "abstract": [ "Transfer learning has achieved promising results by leveraging knowledge from the source domain to annotate the target domain which has few or none labels. Existing methods often seek to minimize the distribution divergence between domains, such as the marginal distribution, the conditional distribution or both. However, these two distances are often treated equally in existing algorithms, which will result in poor performance in real applications. Moreover, existing methods usually assume that the dataset is balanced, which also limits their performances on imbalanced tasks that are quite common in real problems. To tackle the distribution adaptation problem, in this paper, we propose a novel transfer learning approach, named as Balanced Distribution A daptation (BDA), which can adaptively leverage the importance of the marginal and conditional distribution discrepancies, and several existing methods can be treated as special cases of BDA. Based on BDA, we also propose a novel Weighted Balanced Distribution Adaptation (W-BDA) algorithm to tackle the class imbalance issue in transfer learning. W-BDA not only considers the distribution adaptation between domains but also adaptively changes the weight of each class. To evaluate the proposed methods, we conduct extensive experiments on several transfer learning tasks, which demonstrate the effectiveness of our proposed algorithms over several state-of-the-art methods.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.", "Unlike human learning, machine learning often fails to handle changes between training (source) and test (target) input distributions. Such domain shifts, common in practical scenarios, severely damage the performance of conventional machine learning methods. Supervised domain adaptation methods have been proposed for the case when the target data have labels, including some that perform very well despite being \"frustratingly easy\" to implement. However, in practice, the target domain is often unlabeled, requiring unsupervised adaptation. We propose a simple, effective, and efficient method for unsupervised domain adaptation called CORrelation ALignment (CORAL). CORAL minimizes domain shift by aligning the second-order statistics of source and target distributions, without requiring any target labels. 
Even though it is extraordinarily simple–it can be implemented in four lines of Matlab code–CORAL performs remarkably well in extensive evaluations on standard benchmark datasets.", "Transfer learning aims at transferring knowledge from a well-labeled domain to a similar but different domain with limited or no labels. Unfortunately, existing learning-based methods often involve intensive model selection and hyperparameter tuning to obtain good results. Moreover, cross-validation is not possible for tuning hyperparameters since there are often no labels in the target domain. This would restrict wide applicability of transfer learning especially in computationally-constraint devices such as wearables. In this paper, we propose a practically Easy Transfer Learning (EasyTL) approach which requires no model selection and hyperparameter tuning, while achieving competitive performance. By exploiting intra-domain structures, EasyTL is able to learn both non-parametric transfer features and classifiers. Extensive experiments demonstrate that, compared to state-of-the-art traditional and deep methods, EasyTL satisfies the Occam's Razor principle: it is extremely easy to implement and use while achieving comparable or better performance in classification accuracy and much better computational efficiency. Additionally, it is shown that EasyTL can increase the performance of existing transfer feature learning methods.", "", "We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice.", "The performance of a classifier trained on data coming from a specific domain typically degrades when applied to a related but different one. While annotating many samples from the new domain would address this issue, it is often too expensive or impractical. Domain Adaptation has therefore emerged as a solution to this problem; It leverages annotated data from a source domain, in which it is abundant, to train a classifier to operate in a target domain, in which it is either sparse or even lacking altogether. In this context, the recent trend consists of learning deep architectures whose weights are shared for both domains, which essentially amounts to learning domain invariant features. Here, we show that it is more effective to explicitly model the shift from one domain to the other. To this end, we introduce a two-stream architecture, where one operates in the source domain and the other in the target domain. In contrast to other approaches, the weights in corresponding layers are related but not shared . We demonstrate that this both yields higher accuracy than state-of-the-art methods on several object recognition and detection tasks and consistently outperforms networks with shared weights in both supervised and unsupervised settings.", "Transfer learning aims at adapting a classifier trained on one domain with adequate labeled samples to a new domain where samples are from a different distribution and have no class labels. 
In this paper, we explore the transfer learning problems with multiple data sources and present a novel boosting algorithm, SharedBoost. This novel algorithm is capable of applying for very high dimensional data such as in text mining where the feature dimension is beyond several ten thousands. The experimental results illustrate that the SharedBoost algorithm significantly outperforms the traditional methods which transfer knowledge with supervised learning techniques. Besides, SharedBoost also provides much better classification accuracy and more stable performance than some other typical transfer learning methods such as the structural correspondence learning (SCL) and the structural learning in the multiple sources transfer learning problems.", "Visual domain adaptation aims to learn robust classifiers for the target domain by leveraging knowledge from a source domain. Existing methods either attempt to align the cross-domain distributions, or perform manifold subspace learning. However, there are two significant challenges: (1) degenerated feature transformation, which means that distribution alignment is often performed in the original feature space, where feature distortions are hard to overcome. On the other hand, subspace learning is not sufficient to reduce the distribution divergence. (2) unevaluated distribution alignment, which means that existing distribution alignment methods only align the marginal and conditional distributions with equal importance, while they fail to evaluate the different importance of these two distributions in real applications. In this paper, we propose a Manifold Embedded Distribution Alignment (MEDA) approach to address these challenges. MEDA learns a domain-invariant classifier in Grassmann manifold with structural risk minimization, while performing dynamic distribution alignment to quantitatively account for the relative importance of marginal and conditional distributions. To the best of our knowledge, MEDA is the first attempt to perform dynamic distribution alignment for manifold domain adaptation. Extensive experiments demonstrate that MEDA shows significant improvements in classification accuracy compared to state-of-the-art traditional and deep methods.", "", "Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean miscrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. 
We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.", "Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard back propagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.", "", "", "" ] }
1901.08933
2906424389
Learning with auxiliary tasks has been shown to improve the generalisation of a primary task. However, this comes at the cost of manually-labelling additional tasks which may, or may not, be useful for the primary task. We propose a new method which automatically learns labels for an auxiliary task, such that any supervised learning task can be improved without requiring access to additional data. The approach is to train two neural networks: a label-generation network to predict the auxiliary labels, and a multi-task network to train the primary task alongside the auxiliary task. The loss for the label-generation network incorporates the multi-task network's performance, and so this interaction between the two networks can be seen as a form of meta learning. We show that our proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets by a significant margin, without requiring additional auxiliary labels. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive when compared with human-defined auxiliary labels. The self-supervised nature of our method leads to a promising new direction towards automated generalisation. The source code is available at this https URL .
The aim of multi-task learning (MTL) is to achieve shared representations by simultaneously training a set of related learning tasks. In this case, the knowledge shared across domains is encoded into the feature representations to improve the performance of each individual task, since knowledge distilled from related tasks is interdependent. The success of deep neural networks has led to recent methods advancing multi-task architecture design, such as applying a linear combination of task-specific features. @cite_4 applied soft-attention modules as feature selectors, allowing learning of both task-shared and task-specific features in an end-to-end manner. Transfer learning is another common approach to improve generalisation, by incorporating knowledge learned from one or more related domains. Pre-training a model on a large-scale dataset such as ImageNet has become standard practice in many vision-based applications.
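For illustration, the shared-representation idea can be sketched with hard parameter sharing: a shared trunk feeding task-specific heads, trained on a weighted sum of task losses. This minimal PyTorch sketch uses assumed input, hidden and output sizes and omits the soft-attention modules of @cite_4 :

import torch
import torch.nn as nn
import torch.nn.functional as F

class HardSharedMTL(nn.Module):
    def __init__(self, in_dim=32, hidden=64, task_dims=(10, 1)):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())     # shared features
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in task_dims)  # one head per task

    def forward(self, x):
        z = self.trunk(x)                        # shared representation
        return [head(z) for head in self.heads]

model = HardSharedMTL()
x = torch.randn(8, 32)
cls_logits, reg_out = model(x)                   # task 1: 10-way classification, task 2: regression
loss = F.cross_entropy(cls_logits, torch.randint(0, 10, (8,))) + \
       0.5 * F.mse_loss(reg_out, torch.randn(8, 1))
loss.backward()                                  # gradients from both tasks flow into the shared trunk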
{ "cite_N": [ "@cite_4" ], "mid": [ "2795042520" ], "abstract": [ "In this paper, we propose a novel multi-task learning architecture, which incorporates recent advances in attention mechanisms. Our approach, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with task-specific soft-attention modules, which are trainable in an end-to-end manner. These attention modules allow for learning of task-specific features from the global pool, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. Experiments on the CityScapes dataset show that our method outperforms several baselines in both single-task and multi-task learning, and is also more robust to the various weighting schemes in the multi-task loss function. We further explore the effectiveness of our method through experiments over a range of task complexities, and show how our method scales well with task complexity compared to baselines." ] }
1901.08933
2906424389
Learning with auxiliary tasks has been shown to improve the generalisation of a primary task. However, this comes at the cost of manually-labelling additional tasks which may, or may not, be useful for the primary task. We propose a new method which automatically learns labels for an auxiliary task, such that any supervised learning task can be improved without requiring access to additional data. The approach is to train two neural networks: a label-generation network to predict the auxiliary labels, and a multi-task network to train the primary task alongside the auxiliary task. The loss for the label-generation network incorporates the multi-task network's performance, and so this interaction between the two networks can be seen as a form of meta learning. We show that our proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets by a significant margin, without requiring additional auxiliary labels. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive when compared with human-defined auxiliary labels. The self-supervised nature of our method leads to a promising new direction towards automated generalisation. The source code is available at this https URL .
Meta learning (or learning to learn) aims to induce the learning algorithm itself. Early works in meta learning explored automatically learning update rules for neural models. Recent approaches have focussed on learning optimisers for deep networks based on LSTMs or synthetic gradients. Meta learning has also been studied for finding optimal hyper-parameters and a good initialisation for few-shot learning, and few-shot learning via an external memory module has also been investigated. @cite_3 @cite_12 realised few-shot learning in the instance space via a differentiable nearest-neighbour approach. Related to meta learning, our framework is designed to learn to generate useful auxiliary labels, which themselves are used in another learning procedure.
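For illustration, the nearest-neighbour-style classification rule of the prototypical-network variant @cite_12 can be sketched in NumPy: class prototypes are the mean support-set embeddings, and queries are softly assigned to the nearest prototype (the embedding network that produces the features is assumed and omitted here):

import numpy as np

def prototype_classify(support_emb, support_lbl, query_emb):
    classes = np.unique(support_lbl)
    protos = np.stack([support_emb[support_lbl == c].mean(axis=0) for c in classes])
    d2 = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)   # squared Euclidean distances
    logits = -d2                                   # closer prototype -> higher score
    probs = np.exp(logits - logits.max(1, keepdims=True))
    return classes, probs / probs.sum(1, keepdims=True)

rng = np.random.default_rng(0)
support = rng.normal(size=(10, 8))                 # 5-way 2-shot support embeddings
labels = np.repeat(np.arange(5), 2)
classes, probs = prototype_classify(support, labels, rng.normal(size=(3, 8)))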
{ "cite_N": [ "@cite_12", "@cite_3" ], "mid": [ "2601450892", "2963341924" ], "abstract": [ "A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics test. This approach relies on a complicated fine-tuning procedure and an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size. We propose a more streamlined approach, prototypical networks, that learns a metric space in which few-shot classification can be performed by computing Euclidean distances to prototype representations of each class, rather than individual points. Our method is competitive with state-of-the-art one-shot classification approaches while being much simpler and more scalable with the size of the support set. We empirically demonstrate the performance of our approach on the Omniglot and mini-ImageNet datasets. We further demonstrate that a similar idea can be used for zero-shot learning, where each class is described by a set of attributes, and achieve state-of-the-art results on the Caltech UCSD bird dataset.", "Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6 to 93.2 and from 88.0 to 93.8 on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank." ] }
1901.09005
2951884559
Unsupervised visual representation learning remains a largely unsolved problem in computer vision research. Among a big body of recently proposed approaches for unsupervised learning of visual representations, a class of self-supervised techniques achieves superior performance on many challenging benchmarks. A large number of the pretext tasks for self-supervised learning have been studied, but other important aspects, such as the choice of convolutional neural networks (CNN), has not received equal attention. Therefore, we revisit numerous previously proposed self-supervised models, conduct a thorough large scale study and, as a result, uncover multiple crucial insights. We challenge a number of common practices in selfsupervised visual representation learning and observe that standard recipes for CNN design do not always translate to self-supervised representation learning. As part of our study, we drastically boost the performance of previously proposed techniques and outperform previously published state-of-the-art results by a large margin.
In this paper we focus on self-supervised techniques that learn from image databases. These techniques have demonstrated impressive results for learning high-level image representations. Inspired by unsupervised methods from the natural language processing domain which rely on predicting words from their context @cite_41 , Doersch et al. @cite_26 proposed a practically successful pretext task of predicting the relative location of image patches. This work spawned a line of patch-based self-supervised visual representation learning methods. These include a model from @cite_16 that predicts the permutation of a "jigsaw puzzle" created from the full image, and recent follow-ups @cite_15 @cite_32 .
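For illustration, the label-generation step of this pretext task can be sketched in NumPy: a central patch and one of its eight neighbours are cropped from an unlabeled image, and the neighbour's position index serves as a free classification target for a patch-pair network (the patch size, gap and sampling scheme below are assumptions, not those of @cite_26 ):

import numpy as np

def relative_location_sample(image, patch=32, gap=4, rng=np.random):
    h, w = image.shape[:2]
    cy, cx = h // 2, w // 2                        # centre of a 3x3 patch grid
    step = patch + gap
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]    # 8 possible neighbour positions
    label = rng.randint(8)                         # free supervisory signal
    dy, dx = offsets[label]

    def crop(y, x):
        return image[y - patch // 2:y + patch // 2, x - patch // 2:x + patch // 2]

    anchor = crop(cy, cx)
    neighbour = crop(cy + dy * step, cx + dx * step)
    return anchor, neighbour, label                # train a CNN on (pair -> label)

img = np.random.rand(128, 128)
a, n, y = relative_location_sample(img)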
{ "cite_N": [ "@cite_26", "@cite_41", "@cite_32", "@cite_15", "@cite_16" ], "mid": [ "343636949", "1614298861", "2963465221", "2963103975", "2321533354" ], "abstract": [ "This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework [19] and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.", "", "In self-supervised learning, one trains a model to solve a so-called pretext task on a dataset without the need for human annotation. The main objective, however, is to transfer this model to a target domain and task. Currently, the most effective transfer strategy is fine-tuning, which restricts one to use the same model or parts thereof for both pretext and target tasks. In this paper, we present a novel framework for self-supervised learning that overcomes limitations in designing and comparing different tasks, models, and data domains. In particular, our framework decouples the structure of the self-supervised model from the final task-specific fine-tuned model. This allows us to: 1) quantitatively assess previously incompatible models including handcrafted features; 2) show that deeper neural network models can learn better representations from the same pretext task; 3) transfer knowledge learned with a deep model to a shallower one and thus boost its learning. We use this framework to design a novel self-supervised task, which achieves state-of-the-art performance on the common benchmarks in PASCAL VOC 2007, ILSVRC12 and Places by a significant margin. Our learned features shrink the mAP gap between models trained via self-supervised learning and supervised learning from 5.9 to 2.6 in object detection on PASCAL VOC 2007.", "We develop a set of methods to improve on the results of self-supervised learning using context. We start with a baseline of patch based arrangement context learning and go from there. Our methods address some overt problems such as chromatic aberration as well as other potential problems such as spatial skew and mid-level feature neglect. We prevent problems with testing generalization on common self-supervised benchmark tests by using different datasets during our development. The results of our methods combined yield top scores on all standard self-supervised benchmarks, including classification and detection on PASCAL VOC 2007, segmentation on PASCAL VOC 2012, and \"linear tests\" on the ImageNet and CSAIL Places datasets. We obtain an improvement over our baseline method of between 4.0 to 7.1 percentage points on transfer learning classification tests. We also show results on different standard network architectures to demonstrate generalization as well as portability. 
All data, models and programs are available at: https: gdo-datasci.llnl.gov selfsupervised .", "We propose a novel unsupervised learning approach to build features suitable for object detection and classification. The features are pre-trained on a large dataset without human annotation and later transferred via fine-tuning on a different, smaller and labeled dataset. The pre-training consists of solving jigsaw puzzles of natural images. To facilitate the transfer of features to other tasks, we introduce the context-free network (CFN), a siamese-ennead convolutional neural network. The features correspond to the columns of the CFN and they process image tiles independently (i.e., free of context). The later layers of the CFN then use the features to identify their geometric arrangement. Our experimental evaluations show that the learned features capture semantically relevant content. We pre-train the CFN on the training set of the ILSVRC2012 dataset and transfer the features on the combined training and validation set of Pascal VOC 2007 for object detection (via fast RCNN) and classification. These features outperform all current unsupervised features with (51.8 , ) for detection and (68.6 , ) for classification, and reduce the gap with supervised learning ( (56.5 , ) and (78.2 , ) respectively)." ] }
1901.09005
2951884559
Unsupervised visual representation learning remains a largely unsolved problem in computer vision research. Among a big body of recently proposed approaches for unsupervised learning of visual representations, a class of self-supervised techniques achieves superior performance on many challenging benchmarks. A large number of the pretext tasks for self-supervised learning have been studied, but other important aspects, such as the choice of convolutional neural networks (CNN), has not received equal attention. Therefore, we revisit numerous previously proposed self-supervised models, conduct a thorough large scale study and, as a result, uncover multiple crucial insights. We challenge a number of common practices in selfsupervised visual representation learning and observe that standard recipes for CNN design do not always translate to self-supervised representation learning. As part of our study, we drastically boost the performance of previously proposed techniques and outperform previously published state-of-the-art results by a large margin.
In contrast to patch-based methods, other approaches generate cleverly designed image-level classification tasks. For instance, in @cite_13 Gidaris et al. propose to randomly rotate an image by one of four possible angles and let the model predict that rotation. Another way to create class labels is to use clustering of the images @cite_39 . Yet another class of pretext tasks contains tasks with dense spatial outputs. Some prominent examples are image inpainting @cite_31 , image colorization @cite_22 , its improved variant split-brain @cite_37 , and motion segmentation prediction @cite_45 . Other methods instead enforce structural constraints on the representation space. Noroozi et al. propose an equivariance relation to match the sum of multiple tiled representations to a single scaled representation @cite_12 . The authors of @cite_44 propose to predict future patches in representation space via autoregressive predictive coding.
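The rotation-prediction task in particular is simple to illustrate. The sketch below builds a self-labelled batch in one plausible way; it is not code from the cited work and assumes square images so that all rotated copies share the same shape.

```python
import numpy as np

def rotation_pretext_batch(images):
    """Turn an (N, H, W, C) array of square images into a self-labelled batch.

    Each image is copied four times, rotated by 0, 90, 180 and 270 degrees,
    and the number of quarter-turns (0..3) becomes the classification label,
    so no human annotation is required.
    """
    rotated, labels = [], []
    for img in images:
        for k in range(4):
            rotated.append(np.rot90(img, k, axes=(0, 1)))
            labels.append(k)
    return np.stack(rotated), np.asarray(labels)

# A network would then be trained on (rotated, labels) with a standard
# 4-way cross-entropy objective.
```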
{ "cite_N": [ "@cite_37", "@cite_22", "@cite_39", "@cite_44", "@cite_45", "@cite_31", "@cite_13", "@cite_12" ], "mid": [ "2949532563", "2326925005", "2883725317", "2842511635", "2575671312", "2963420272", "2962742544", "2750549109" ], "abstract": [ "We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task -- predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.", "Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32 of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks.", "Clustering is a class of unsupervised learning methods that has been extensively applied and studied in computer vision. Little work has been done to adapt it to the end-to-end training of visual features on large-scale datasets. In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. DeepCluster iteratively groups the features with a standard clustering algorithm, k-means, and uses the subsequent assignments as supervision to update the weights of the network. We apply DeepCluster to the unsupervised training of convolutional neural networks on large datasets like ImageNet and YFCC100M. The resulting model outperforms the current state of the art by a significant margin on all the standard benchmarks.", "While supervised learning has enabled great progress in many applications, unsupervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artificial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding. The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models. 
We use a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. It also makes the model tractable by using negative sampling. While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.", "This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed pretext tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce.", "We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders – a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. 
We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similar striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification.", "We introduce a novel method for representation learning that uses an artificial supervision signal based on counting visual primitives. This supervision signal is obtained from an equivariance relation, which does not require any manual annotation. We relate transformations of images to transformations of the representations. More specifically, we look for the representation that satisfies such relation rather than the transformations that match a given representation. In this paper, we use two image transformations in the context of counting: scaling and tiling. The first transformation exploits the fact that the number of visual primitives should be invariant to scale. The second transformation allows us to equate the total number of visual primitives in each tile to that in the whole image. These two transformations are combined in one constraint and used to train a neural network with a contrastive loss. The proposed task produces representations that perform on par or exceed the state of the art in transfer learning benchmarks." ] }
1901.09005
2951884559
Unsupervised visual representation learning remains a largely unsolved problem in computer vision research. Among a big body of recently proposed approaches for unsupervised learning of visual representations, a class of self-supervised techniques achieves superior performance on many challenging benchmarks. A large number of the pretext tasks for self-supervised learning have been studied, but other important aspects, such as the choice of convolutional neural networks (CNN), has not received equal attention. Therefore, we revisit numerous previously proposed self-supervised models, conduct a thorough large scale study and, as a result, uncover multiple crucial insights. We challenge a number of common practices in selfsupervised visual representation learning and observe that standard recipes for CNN design do not always translate to self-supervised representation learning. As part of our study, we drastically boost the performance of previously proposed techniques and outperform previously published state-of-the-art results by a large margin.
Finally, many works have tried to combine multiple pretext tasks in one way or another. For instance, Kim et al. extend the "jigsaw puzzle" task by combining it with colorization and inpainting in @cite_17 . Combining the jigsaw puzzle task with clustering-based pseudo labels as in @cite_39 leads to the method called Jigsaw++ @cite_32 . Doersch and Zisserman @cite_5 implement four different self-supervision methods and train one single neural network to learn all of them in a multi-task setting.
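A common way to combine several pretext tasks is a weighted sum of their individual losses. The helper below is only a schematic of that generic recipe; the task names and default weights are placeholders, not the formulation used in any of the cited combinations.

```python
def combined_pretext_loss(losses, weights=None):
    """Weighted sum of per-task self-supervision losses.

    `losses` maps a task name to its scalar loss, for example
    {'jigsaw': 0.7, 'colorization': 1.2, 'inpainting': 0.4}; `weights`
    optionally rescales each task and defaults to 1.0 everywhere.
    """
    weights = weights or {name: 1.0 for name in losses}
    return sum(weights[name] * value for name, value in losses.items())
```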
{ "cite_N": [ "@cite_5", "@cite_32", "@cite_39", "@cite_17" ], "mid": [ "", "2963465221", "2883725317", "2963826423" ], "abstract": [ "", "In self-supervised learning, one trains a model to solve a so-called pretext task on a dataset without the need for human annotation. The main objective, however, is to transfer this model to a target domain and task. Currently, the most effective transfer strategy is fine-tuning, which restricts one to use the same model or parts thereof for both pretext and target tasks. In this paper, we present a novel framework for self-supervised learning that overcomes limitations in designing and comparing different tasks, models, and data domains. In particular, our framework decouples the structure of the self-supervised model from the final task-specific fine-tuned model. This allows us to: 1) quantitatively assess previously incompatible models including handcrafted features; 2) show that deeper neural network models can learn better representations from the same pretext task; 3) transfer knowledge learned with a deep model to a shallower one and thus boost its learning. We use this framework to design a novel self-supervised task, which achieves state-of-the-art performance on the common benchmarks in PASCAL VOC 2007, ILSVRC12 and Places by a significant margin. Our learned features shrink the mAP gap between models trained via self-supervised learning and supervised learning from 5.9 to 2.6 in object detection on PASCAL VOC 2007.", "Clustering is a class of unsupervised learning methods that has been extensively applied and studied in computer vision. Little work has been done to adapt it to the end-to-end training of visual features on large-scale datasets. In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. DeepCluster iteratively groups the features with a standard clustering algorithm, k-means, and uses the subsequent assignments as supervision to update the weights of the network. We apply DeepCluster to the unsupervised training of convolutional neural networks on large datasets like ImageNet and YFCC100M. The resulting model outperforms the current state of the art by a significant margin on all the standard benchmarks.", "In this paper, we explore methods of complicating selfsupervised tasks for representation learning. That is, we do severe damage to data and encourage a network to recover them. First, we complicate each of three powerful self-supervised task candidates: jigsaw puzzle, inpainting, and colorization. In addition, we introduce a novel complicated self-supervised task called \"Completing damaged jigsaw puzzles\" which is puzzles with one piece missing and the other pieces without color. We train a convolutional neural network not only to solve the puzzles, but also generate the missing content and colorize the puzzles. The recovery of the aforementioned damage pushes the network to obtain robust and general-purpose representations. We demonstrate that complicating the self-supervised tasks improves their original versions and that our final task learns more robust and transferable representations compared to the previous methods, as well as the simple combination of our candidate tasks. Our approach achieves state-of-the-art performance in transfer learning on PASCAL classification and semantic segmentation." ] }
1901.09005
2951884559
Unsupervised visual representation learning remains a largely unsolved problem in computer vision research. Among a big body of recently proposed approaches for unsupervised learning of visual representations, a class of self-supervised techniques achieves superior performance on many challenging benchmarks. A large number of the pretext tasks for self-supervised learning have been studied, but other important aspects, such as the choice of convolutional neural networks (CNN), has not received equal attention. Therefore, we revisit numerous previously proposed self-supervised models, conduct a thorough large scale study and, as a result, uncover multiple crucial insights. We challenge a number of common practices in selfsupervised visual representation learning and observe that standard recipes for CNN design do not always translate to self-supervised representation learning. As part of our study, we drastically boost the performance of previously proposed techniques and outperform previously published state-of-the-art results by a large margin.
The latter work is similar to ours in that it compares different self-supervision methods using a unified neural network architecture, but with the goal of combining all these tasks into a single self-supervision task. The authors use a modified ResNet101 architecture @cite_9 without further investigation and explore the combination of multiple tasks, whereas our focus lies on investigating the influence of architecture design on the representation quality.
{ "cite_N": [ "@cite_9" ], "mid": [ "2194775991" ], "abstract": [ "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation." ] }
1901.09118
2915019010
During active learning, an effective stopping method allows users to limit the number of annotations, which is cost effective. In this paper, a new stopping method called Predicted Change of F Measure will be introduced that attempts to provide the users an estimate of how much performance of the model is changing at each iteration. This stopping method can be applied with any base learner. This method is useful for reducing the data annotation bottleneck encountered when building text classification systems.
In 2008, Laws and Schutze proposed two stopping methods called performance convergence and uncertainty convergence @cite_37 . Performance convergence stops when the estimated accuracy on the unlabeled pool converges, that is, when the gradient of the performance estimates falls below a certain threshold. Uncertainty convergence stops when the last instance selected for the batch has an uncertainty measure below a minimum threshold. For support vector machines and other learners that do not provide classification probabilities, only uncertainty convergence can be used.
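For illustration, the two checks could be implemented roughly as follows; the window size and thresholds are arbitrary placeholders rather than values from the cited paper.

```python
import numpy as np

def performance_convergence(acc_estimates, window=3, eps=1e-3):
    """Stop when recent estimated-accuracy values have flattened out,
    i.e. all consecutive changes over the window are smaller than eps."""
    if len(acc_estimates) < window + 1:
        return False
    recent = np.asarray(acc_estimates[-(window + 1):])
    return bool(np.all(np.abs(np.diff(recent)) < eps))

def uncertainty_convergence(batch_uncertainties, threshold=0.05):
    """Stop when the last instance selected for the batch (typically the
    least uncertain one) already falls below the uncertainty threshold."""
    return batch_uncertainties[-1] < threshold
```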
{ "cite_N": [ "@cite_37" ], "mid": [ "1971731086" ], "abstract": [ "Active learning is a proven method for reducing the cost of creating the training sets that are necessary for statistical NLP. However, there has been little work on stopping criteria for active learning. An operational stopping criterion is necessary to be able to use active learning in NLP applications. We investigate three different stopping criteria for active learning of named entity recognition (NER) and show that one of them, gradient-based stopping, (i) reliably stops active learning, (ii) achieves nearoptimal NER performance, (iii) and needs only about 20 as much training data as exhaustive labeling." ] }
1901.09118
2915019010
During active learning, an effective stopping method allows users to limit the number of annotations, which is cost effective. In this paper, a new stopping method called Predicted Change of F Measure will be introduced that attempts to provide the users an estimate of how much performance of the model is changing at each iteration. This stopping method can be applied with any base learner. This method is useful for reducing the data annotation bottleneck encountered when building text classification systems.
In 2010, Ghayoomi proposed a stopping criterion called the extended variance model @cite_16 . The extended variance model stops once the variance of the classifier's confidence on the unlabeled pool has decreased by at least a minimum threshold over a certain number of consecutive iterations. However, in our experiments with text classification (see ), this stopping method does not stop.
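A rough rendering of such a variance-based check is sketched below; the window length and minimum drop are illustrative assumptions, not the parameters of the cited work.

```python
import numpy as np

def extended_variance_stop(confidence_history, window=5, min_drop=1e-4):
    """`confidence_history` holds, for each active-learning iteration, the
    vector of classifier confidences on the unlabeled pool.  Stop once the
    variance of those confidences has dropped by at least `min_drop` in each
    of the last `window` iterations."""
    if len(confidence_history) < window + 1:
        return False
    variances = [np.var(c) for c in confidence_history[-(window + 1):]]
    drops = [variances[i] - variances[i + 1] for i in range(window)]
    return all(d >= min_drop for d in drops)
```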
{ "cite_N": [ "@cite_16" ], "mid": [ "1830218610" ], "abstract": [ "Active learning is a promising method to reduce human's effort for data annotation in different NLP applications. Since it is an iterative task, it should be stopped at some point which is optimum or near-optimum. In this paper we propose a novel stopping criterion for active learning of frame assignment based on the variability of the classifier's confidence score on the unlabeled data. The important advantage of this criterion is that we rely only on the unlabeled data to stop the data annotation process; as a result there are no requirements for the gold standard data and testing the classifier's performance in each iteration. Our experiments show that the proposed method achieves 93.67 of the classifier maximum performance." ] }
1901.09054
2913880062
Two things seem to be indisputable in the contemporary deep learning discourse: 1. The categorical cross-entropy loss after softmax activation is the method of choice for classification. 2. Training a CNN classifier from scratch on small datasets does not work well. In contrast to this, we show that the cosine loss function provides significantly better performance than cross-entropy on datasets with only a handful of samples per class. For example, the accuracy achieved on the CUB-200-2011 dataset without pre-training is by 30 higher than with the cross-entropy loss. Further experiments on four other popular datasets confirm our findings. Moreover, we show that the classification performance can be improved further by integrating prior knowledge in the form of class hierarchies, which is straightforward with the cosine loss.
The problem of learning from limited data has been approached from various directions. First and foremost, there is a huge body of work in the field of few-shot learning. In this area, one is typically given a set of classes with sufficient training data, which is used to improve performance on another set of classes with only very few labeled examples. Metric learning techniques are very common in this scenario. Such methods aim at learning highly discriminative features from a large dataset that generalize well to new classes @cite_16 @cite_22 @cite_4 @cite_11 @cite_13 , so that classification in the face of limited training data can be performed with a simple nearest neighbor search. Another approach to few-shot learning is meta-learning, i.e., learning to learn: training a learner on large datasets to learn from small ones.
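To make the nearest-neighbor-style classification mentioned above concrete, the sketch below performs nearest-class-mean (prototype) classification in a given embedding space. It reflects the general idea behind prototypical-network-style inference, not the exact procedure of any specific cited method.

```python
import numpy as np

def prototype_classify(support_feats, support_labels, query_feats):
    """Classify queries by the nearest class mean in embedding space.

    support_feats:  (n_support, d) embeddings of the few labeled examples.
    support_labels: (n_support,) integer class ids.
    query_feats:    (n_query, d) embeddings to classify.
    """
    classes = np.unique(support_labels)
    # One prototype per novel class: the mean of its support embeddings.
    prototypes = np.stack([support_feats[support_labels == c].mean(axis=0)
                           for c in classes])
    # Squared Euclidean distance from every query to every prototype.
    dists = ((query_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return classes[np.argmin(dists, axis=1)]
```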
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "2768228940", "2950537964", "2432717477", "", "2786817236" ], "abstract": [ "We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only few examples from each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, a RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple approach provides a unified and effective approach for both of these two tasks.", "We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.", "Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6 to 93.2 and from 88.0 to 93.8 on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.", "", "Face recognition has achieved revolutionary advancement owing to the advancement of the deep convolutional neural network (CNN). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, traditional softmax loss of deep CNN usually lacks the power of discrimination. To address this problem, recently several loss functions such as central loss centerloss , large margin softmax loss lsoftmax , and angular softmax loss sphereface have been proposed. All these improvement algorithms share the same idea: maximizing inter-class variance and minimizing intra-class variance. 
In this paper, we design a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as cosine loss by L2 normalizing both features and weight vectors to remove radial variation, based on which a cosine margin term is introduced to further maximize decision margin in angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. To test our approach, extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmark experiments, which confirms the effectiveness of our approach." ] }
1901.09054
2913880062
Two things seem to be indisputable in the contemporary deep learning discourse: 1. The categorical cross-entropy loss after softmax activation is the method of choice for classification. 2. Training a CNN classifier from scratch on small datasets does not work well. In contrast to this, we show that the cosine loss function provides significantly better performance than cross-entropy on datasets with only a handful of samples per class. For example, the accuracy achieved on the CUB-200-2011 dataset without pre-training is by 30 higher than with the cross-entropy loss. Further experiments on four other popular datasets confirm our findings. Moreover, we show that the classification performance can be improved further by integrating prior knowledge in the form of class hierarchies, which is straightforward with the cosine loss.
In contrast to all approaches mentioned above, our work focuses on learning from limited amounts of data without any external data or prior knowledge. This problem has recently also been tackled by incorporating a GAN for data augmentation into the learning process @cite_5 . As opposed to this, we approach the problem from the perspective of the loss function, a perspective that has so far not been explored extensively for direct, fully supervised classification.
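For reference, a generic form of the cosine loss discussed here can be written as follows; the choice of target class embeddings (plain one-hot vectors or embeddings derived from a class hierarchy) is left open, and this is a sketch of the idea rather than the cited paper's exact training recipe.

```python
import numpy as np

def cosine_loss(features, class_embeddings, labels, eps=1e-8):
    """Mean of 1 - cos(feature, target class embedding).

    features:         (n, d) network outputs.
    class_embeddings: (num_classes, d) fixed target vectors per class.
    labels:           (n,) integer class ids selecting the target vectors.
    """
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    t = class_embeddings[labels]
    t = t / (np.linalg.norm(t, axis=1, keepdims=True) + eps)
    return float(np.mean(1.0 - np.sum(f * t, axis=1)))
```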
{ "cite_N": [ "@cite_5" ], "mid": [ "2952305675" ], "abstract": [ "Deep learning has revolutionized the performance of classification, but meanwhile demands sufficient labeled data for training. Given insufficient data, while many techniques have been developed to help combat overfitting, the challenge remains if one tries to train deep networks, especially in the ill-posed extremely low data regimes: only a small set of labeled data are available, and nothing -- including unlabeled data -- else. Such regimes arise from practical situations where not only data labeling but also data collection itself is expensive. We propose a deep adversarial data augmentation (DADA) technique to address the problem, in which we elaborately formulate data augmentation as a problem of training a class-conditional and supervised generative adversarial network (GAN). Specifically, a new discriminator loss is proposed to fit the goal of data augmentation, through which both real and augmented samples are enforced to contribute to and be consistent in finding the decision boundaries. Tailored training techniques are developed accordingly. To quantitatively validate its effectiveness, we first perform extensive simulations to show that DADA substantially outperforms both traditional data augmentation and a few GAN-based options. We then extend experiments to three real-world small labeled datasets where existing data augmentation and or transfer learning strategies are either less effective or infeasible. All results endorse the superior capability of DADA in enhancing the generalization ability of deep networks trained in practical extremely low data regimes. Source code is available at this https URL." ] }
1901.08942
2949384672
We explore the use of a knowledge graphs, that capture general or commonsense knowledge, to augment the information extracted from images by the state-of-the-art methods for image captioning. The results of our experiments, on several benchmark data sets such as MS COCO, as measured by CIDEr-D, a performance metric for image captioning, show that the variants of the state-of-the-art methods for image captioning that make use of the information extracted from knowledge graphs can substantially outperform those that rely solely on the information extracted from images.
However, none of the existing methods take advantage of readily available background knowledge about the world, e.g., in the form of knowledge graphs. Such background knowledge has been shown to be useful in a broad range of applications, from information retrieval to question answering @cite_27 , including, most recently, visual question answering (VQA) from images @cite_15 . We hypothesize that such background knowledge can address an important drawback of existing image captioning methods, by enriching captions with information that is not explicit in the image.
{ "cite_N": [ "@cite_27", "@cite_15" ], "mid": [ "2759136286", "2560920409" ], "abstract": [ "Knowledge graph (KG) embedding is to embed components of a KG including entities and relations into continuous vector spaces, so as to simplify the manipulation while preserving the inherent structure of the KG. It can benefit a variety of downstream tasks such as KG completion and relation extraction, and hence has quickly gained massive attention. In this article, we provide a systematic review of existing techniques, including not only the state-of-the-arts but also those with latest trends. Particularly, we make the review based on the type of information used in the embedding task. Techniques that conduct embedding using only facts observed in the KG are first introduced. We describe the overall framework, specific model design, typical training procedures, as well as pros and cons of such techniques. After that, we discuss techniques that further incorporate additional information besides facts. We focus specifically on the use of entity types, relation paths, textual descriptions, and logical rules. Finally, we briefly introduce how KG embedding can be applied to and benefit a wide variety of downstream tasks such as KG completion, relation extraction, question answering, and so forth.", "Much of the recent progress in Vision-to-Language problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. In this paper we first propose a method of incorporating high-level concepts into the successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art in both image captioning and visual question answering. We further show that the same mechanism can be used to incorporate external knowledge, which is critically important for answering high level visual questions. Specifically, we design a visual question answering model that combines an internal representation of the content of an image with information extracted from a general knowledge base to answer a broad range of image-based questions. It particularly allows questions to be asked where the image alone does not contain the information required to select the appropriate answer. Our final model achieves the best reported results for both image captioning and visual question answering on several of the major benchmark datasets." ] }
1901.08942
2949384672
We explore the use of a knowledge graphs, that capture general or commonsense knowledge, to augment the information extracted from images by the state-of-the-art methods for image captioning. The results of our experiments, on several benchmark data sets such as MS COCO, as measured by CIDEr-D, a performance metric for image captioning, show that the variants of the state-of-the-art methods for image captioning that make use of the information extracted from knowledge graphs can substantially outperform those that rely solely on the information extracted from images.
Unlike the state-of-the-art image captioning systems, CNet-NIC is specifically designed to take advantage of background knowledge, augmenting the information extracted from the image (image features, objects) to improve machine-produced captions or image descriptions. Unlike VQA @cite_15 , which uses a knowledge graph to extract better image features and hence better answer questions about the image, CNet-NIC first detects objects (not just image features) in the image and uses the detected objects to identify related terms or concepts, which are then used to produce better image captions.
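Purely as an illustration of the pipeline just described, a sketch might look like the following; `detector`, `kg`, and `captioner` are hypothetical components standing in for an object detector, a knowledge-graph lookup, and a caption generator, and nothing here reflects the actual CNet-NIC implementation.

```python
def caption_with_background_knowledge(image, detector, kg, captioner, k=5):
    """Hypothetical glue code: detect objects, expand them with related
    concepts from a knowledge graph, and condition the captioner on both."""
    objects = detector(image)                      # e.g. ['dog', 'frisbee']
    related = [concept
               for obj in objects
               for concept in kg.related_terms(obj)[:k]]
    return captioner(image, objects + related)
```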
{ "cite_N": [ "@cite_15" ], "mid": [ "2560920409" ], "abstract": [ "Much of the recent progress in Vision-to-Language problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. In this paper we first propose a method of incorporating high-level concepts into the successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art in both image captioning and visual question answering. We further show that the same mechanism can be used to incorporate external knowledge, which is critically important for answering high level visual questions. Specifically, we design a visual question answering model that combines an internal representation of the content of an image with information extracted from a general knowledge base to answer a broad range of image-based questions. It particularly allows questions to be asked where the image alone does not contain the information required to select the appropriate answer. Our final model achieves the best reported results for both image captioning and visual question answering on several of the major benchmark datasets." ] }
1901.09165
2914528260
In this paper, we generally formulate the dynamics prediction problem of various network systems (e.g., the prediction of mobility, traffic and topology) as the temporal link prediction task. Different from conventional techniques of temporal link prediction that ignore the potential non-linear characteristics and the informative link weights in the dynamic network, we introduce a novel non-linear model GCN-GAN to tackle the challenging temporal link prediction task of weighted dynamic networks. The proposed model leverages the benefits of the graph convolutional network (GCN), long short-term memory (LSTM) as well as the generative adversarial network (GAN). Thus, the dynamics, topology structure and evolutionary patterns of weighted dynamic networks can be fully exploited to improve the temporal link prediction performance. Concretely, we first utilize GCN to explore the local topological characteristics of each single snapshot and then employ LSTM to characterize the evolving features of the dynamic networks. Moreover, GAN is used to enhance the ability of the model to generate the next weighted network snapshot, which can effectively tackle the sparsity and the wide-value-range problem of edge weights in real-life dynamic networks. To verify the model's effectiveness, we conduct extensive experiments on four datasets of different network systems and application scenarios. The experimental results demonstrate that our model achieves impressive results compared to the state-of-the-art competitors.
To avoid collapsing the temporal networks, the authors of @cite_21 represented the dynamic network as a third-order tensor and explored the temporal information through tensor factorization. In @cite_1 , a model based on the non-negative matrix factorization (NMF) framework @cite_19 was developed, where the dynamic information of historical snapshots was incorporated via a graph regularization technique. As discussed in @cite_22 , each network snapshot in the dynamic network can be described by a corresponding NMF component. A unified model was proposed based on the combination of multiple NMF components, where a novel adaptive parameter was introduced to account for the intrinsic correlation between each single snapshot and the whole dynamic network.
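As a point of reference for these factorization-based approaches, the toy baseline below collapses historical snapshots with decaying weights and factorizes the result with NMF; this is the simple collapsing strategy that the graph-regularized models above are designed to avoid, and the decay factor and rank are arbitrary assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def collapsed_nmf_link_prediction(snapshots, rank=16, decay=0.3):
    """Predict link weights at time T+1 from weighted adjacency matrices
    A_1..A_T (non-negative, shape n x n each).

    Snapshots are merged with exponentially decaying weights, the merged
    matrix is factorized into W @ H with NMF, and the low-rank
    reconstruction is returned as the score matrix for the next snapshot.
    """
    T = len(snapshots)
    weights = np.array([(1.0 - decay) ** (T - 1 - t) for t in range(T)])
    weights /= weights.sum()
    collapsed = sum(w * A for w, A in zip(weights, snapshots))
    model = NMF(n_components=rank, init='random', random_state=0, max_iter=500)
    W = model.fit_transform(collapsed)   # n x rank
    H = model.components_                # rank x n
    return W @ H
```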
{ "cite_N": [ "@cite_19", "@cite_21", "@cite_1", "@cite_22" ], "mid": [ "", "1864134408", "2777190022", "2886263965" ], "abstract": [ "", "The data in many disciplines such as social networks, Web analysis, etc. is link-based, and the link structure can be exploited for many different data mining tasks. In this article, we consider the problem of temporal link prediction: Given link data for times 1 through T, can we predict the links at time T + 1? If our data has underlying periodic structure, can we predict out even further in time, i.e., links at time T + 2, T + 3, etc.? In this article, we consider bipartite graphs that evolve over time and consider matrix- and tensor-based methods for predicting future links. We present a weight-based method for collapsing multiyear data into a single matrix. We show how the well-known Katz method for link prediction can be extended to bipartite graphs and, moreover, approximated in a scalable way using a truncated singular value decomposition. Using a CANDECOMP PARAFAC tensor decomposition of the data, we illustrate the usefulness of exploiting the natural three-dimensional structure of temporal link data. Through several numerical experiments, we demonstrate that both matrix- and tensor-based techniques are effective for temporal link prediction despite the inherent difficulty of the problem. Additionally, we show that tensor-based techniques are particularly effective for temporal data with varying periodic patterns.", "Abstract Many networks derived from society and nature are temporal and incomplete. The temporal link prediction problem in networks is to predict links at time T + 1 based on a given temporal network from time 1 to T , which is essential to important applications. The current algorithms either predict the temporal links by collapsing the dynamic networks or collapsing features derived from each network, which are criticized for ignoring the connection among slices. to overcome the issue, we propose a novel graph regularized nonnegative matrix factorization algorithm (GrNMF) for the temporal link prediction problem without collapsing the dynamic networks. To obtain the feature for each network from 1 to t , GrNMF factorizes the matrix associated with networks by setting the rest networks as regularization, which provides a better way to characterize the topological information of temporal links. Then, the GrNMF algorithm collapses the feature matrices to predict temporal links. Compared with state-of-the-art methods, the proposed algorithm exhibits significantly improved accuracy by avoiding the collapse of temporal networks. Experimental results of a number of artificial and real temporal networks illustrate that the proposed method is not only more accurate but also more robust than state-of-the-art approaches.", "The prediction of mobility, topology and traffic is an effective technique to improve the performance of various network systems, which can be generally represented as the temporal link prediction problem. In this paper, we propose a novel adaptive multiple non-negative matrix factorization (AM-NMF) method from the view of network embedding to cope with such problem. Under the framework of non-negative matrix factorization (NMF), the proposed method embeds the dynamic network into a low-dimensional hidden space, where the characteristics of different network snapshots are comprehensively preserved. 
Especially, our new method can effectively incorporate the hidden information of different time slices, because we introduce a novel adaptive parameter to automatically adjust the relative contribution of different terms in the uniform model. Accordingly, the prediction result of future network topology can be generated by conducting the inverse process of NMF form the shared hidden space. Moreover, we also derive the corresponding solving strategy whose convergence can be ensured. As an illustration, the new model will be applied to various network datasets such as human mobility networks, vehicle mobility networks, wireless mesh networks and data center networks. Experimental results show that our method outperforms some other state-of-the-art methods for the temporal link prediction of both unweighted and weighted networks." ] }
1901.09165
2914528260
In this paper, we generally formulate the dynamics prediction problem of various network systems (e.g., the prediction of mobility, traffic and topology) as the temporal link prediction task. Different from conventional techniques of temporal link prediction that ignore the potential non-linear characteristics and the informative link weights in the dynamic network, we introduce a novel non-linear model GCN-GAN to tackle the challenging temporal link prediction task of weighted dynamic networks. The proposed model leverages the benefits of the graph convolutional network (GCN), long short-term memory (LSTM) as well as the generative adversarial network (GAN). Thus, the dynamics, topology structure and evolutionary patterns of weighted dynamic networks can be fully exploited to improve the temporal link prediction performance. Concretely, we first utilize GCN to explore the local topological characteristics of each single snapshot and then employ LSTM to characterize the evolving features of the dynamic networks. Moreover, GAN is used to enhance the ability of the model to generate the next weighted network snapshot, which can effectively tackle the sparsity and the wide-value-range problem of edge weights in real-life dynamic networks. To verify the model's effectiveness, we conduct extensive experiments on four datasets of different network systems and application scenarios. The experimental results demonstrate that our model achieves impressive results compared to the state-of-the-art competitors.
However, the aforementioned approaches leave limited room for improving prediction accuracy, because they are mostly based on traditional linear models and ignore the potential non-linear characteristics of the dynamic network. Although several non-linear methods based on the restricted Boltzmann machine (RBM) @cite_27 and graph embedding @cite_4 have been proposed, most of them can only be applied to the prediction of unweighted networks and cannot deal with the challenging case of weighted networks.
{ "cite_N": [ "@cite_27", "@cite_4" ], "mid": [ "2782823532", "2951508985" ], "abstract": [ "Time varying problems usually have complex underlying structures represented as dynamic networks where entities and relationships appear and disappear over time. The problem of efficiently performing dynamic link inference is extremely challenging due to the dynamic nature in massive evolving networks especially when there exist sparse connectivities and nonlinear transitional patterns. In this paper, we propose a novel deep learning framework, i.e., Conditional Temporal Restricted Boltzmann Machine (ctRBM), which predicts links based on individual transition variance as well as influence introduced by local neighbors. The proposed model is robust to noise and have the exponential capability to capture nonlinear variance. We tackle the computational challenges by developing an efficient algorithm for learning and inference of the proposed model. To improve the efficiency of the approach, we give a faster approximated implementation based on a proposed Neighbor Influence Clustering algorithm. Extensive experiments on simulated as well as real-world dynamic networks show that the proposed method outperforms existing algorithms in link inference on dynamic networks.", "We propose a simple discrete time semi-supervised graph embedding approach to link prediction in dynamic networks. The learned embedding reflects information from both the temporal and cross-sectional network structures, which is performed by defining the loss function as a weighted sum of the supervised loss from past dynamics and the unsupervised loss of predicting the neighborhood context in the current network. Our model is also capable of learning different embeddings for both formation and dissolution dynamics. These key aspects contributes to the predictive performance of our model and we provide experiments with three real--world dynamic networks showing that our method is comparable to state of the art methods in link formation prediction and outperforms state of the art baseline methods in link dissolution prediction." ] }
1901.08522
2969014073
We present an augmented reality human-swarm interface that combines two modalities of interaction: environment-oriented and robot-oriented. The environment-oriented modality allows the user to modify the environment (either virtual or physical) to indicate a goal to attain for the robot swarm. The robot-oriented modality makes it possible to select individual robots to reassign them to other tasks to increase performance or remedy failures. Previous research has concluded that environment-oriented interaction might prove more difficult to grasp for untrained users. In this paper, we report a user study which indicates that, at least in collective transport, environment-oriented interaction is more effective than purely robot-oriented interaction, and that the two combined achieve remarkable efficacy.
Robot-oriented interactions occur when a user must engage with individual robots, e.g., to make them into leaders that other robots must follow @cite_0 , to hand-pick robots for a specific task @cite_14 @cite_10 @cite_4 @cite_6 @cite_15 , or to use a robot as a tangible interface for gaming and education @cite_5 @cite_17 . The main advantage of these interfaces is the simplicity of their abstraction (the user becomes part of the swarm); however, for collective behaviors in which the user must interact with multiple robots, the downside of this approach is the large amount of information the user must provide to the robots (e.g., in the form of the number of user commands per task).
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_6", "@cite_0", "@cite_5", "@cite_15", "@cite_10", "@cite_17" ], "mid": [ "1854667150", "1481103847", "2513313098", "1590784139", "2533619018", "2562583785", "2074713499", "2577675456" ], "abstract": [ "The term human-swarm interaction (HSI) refers to the interaction between a human operator and a swarm of robots. In this paper, we investigate HSI in the context of a resource allocation and guidance scenario. We present a framework that enables direct communication between human beings and real robot swarms, without relying on a secondary display. We provide the user with a gesture-based interface that allows him to issue commands to the robots. In addition, we develop algorithms that allow robots receiving the commands to display appropriate feedback to the user. We evaluate our framework both in simulation and with real-world experiments. We conduct a summative usability study based on experiments in which participants must guide multiple subswarms to different task locations.", "A taxonomy for gesture-based interaction between a human and a group (swarm) of robots is described. Methods are classified into two categories. First, free-form interaction, where the robots are unconstrained in position and motion and the user can use deictic gestures to select subsets of robots and assign target goals and trajectories. Second, shape-constrained interaction, where the robots are in a configuration shape that can be modified by the user. In the later, the user controls a subset of meaningful degrees of freedom defining the overall shape instead of each robot directly. A multi-robot interactive display is described where a depth sensor is used to recognize human gesture, determining the commands sent to a group comprising tens of robots. Experimental results with a preliminary user study show the usability of the system.", "This paper studies how an operator with limited situational awareness can collaborate with a swarm of simulated robots. The robots are distributed in an environment with wall obstructions. They aggregate autonomously but are unable to form a single cluster due to the obstructions. The operator lacks the bird’s-eye perspective, but can interact with one robot at a time, and influence the behavior of other nearby robots. We conducted a series of experiments. They show that untrained participants had marginal influence on the performance of the swarm. Expert participants succeeded in aggregating 85 of the robots while untrained participants, with bird’s-eye view, succeeded in aggregating 90 . This demonstrates that the controls are sufficient for operators to aid the autonomous robots in the completion of the task and that lack of situational awareness is the main difficulty. An analysis of behavioral differences reveals that trained operators learned to gain superior situational awareness.", "This paper investigates how haptic interactions can be defined for enabling a single operator to control and interact with a team of mobile robots. Since there is no unique or canonical mapping from the swarm configuration to the forces experienced by the operator, a suitable mapping must be developed. To this end, multi-agent manipulability is proposed as a potentially useful mapping, whereby the forces experienced by the operator relate to how inputs, injected at precise locations in the team, translate to swarm-level motions. 
Small forces correspond to directions in which it is easy to move the swarm, while larger forces correspond to more costly directions. Initial experimental results support the viability of the proposed, haptic, human-swarm interaction mapping, through a user study where operators are tasked with driving a collection of robots through a series of way points.", "This paper introduces swarm user interfaces, a new class of human-computer interfaces comprised of many autonomous robots that handle both display and interaction. We describe the design of Zooids, an open-source open-hardware platform for developing tabletop swarm interfaces. The platform consists of a collection of custom-designed wheeled micro robots each 2.6 cm in diameter, a radio base-station, a high-speed DLP structured light projector for optical tracking, and a software framework for application development and control. We illustrate the potential of tabletop swarm user interfaces through a set of application scenarios developed with Zooids, and discuss general design considerations unique to swarm user interfaces.", "A complete prototype for multi-modal interaction between humans and multi-robot systems is described. The application focus is on search and rescue missions. From the human-side, speech and arm and hand gestures are combined to select, localize, and communicate task requests and spatial information to one or more robots in the field. From the robot side, LEDs and vocal messages are used to provide feedback to the human. The robots also employ coordinated autonomy to implement group behaviors for mixed initiative interaction. The system has been tested with different robotic platforms based on a number of different useful interaction patterns.", "This paper presents a machine vision based ap- proach for human operators to select individual and groups of autonomous robots from a swarm of UAVs. The angular distance between the robots and the human is estimated using measures of the detected human face, which aids to determine human and multi-UAV localization and positioning. In turn, this is exploited to effectively and naturally make the human select the spatially situated robots. Spatial gestures for selecting robots are presented by the human operator using tangible input devices (i.e., colored gloves). To select individuals and groups of robot we formulate a vocabulary of two-handed spatial pointing gestures. With the use of a Support Vector Machine (SVM) trained in a cascaded multi-binary-class configuration, the spatial gestures are effectively learned and recognized by a swarm of UAVs. I. INTRODUCTION Without the use of teleoperated and hand-held interaction devices, human operators generally face difficulties in select- ing and commanding individual and groups of robots from a relatively large group of spatially distributed robots (i.e., a swarm). However, due to the widespread availability of cost effective digital cameras onboard UGVs and UAVs, it is increasing the attention towards developing uninstrumented methods (i.e., methods that do not use sophisticated hardware devices from the human side) for human-swarm interaction (HSI). In previous work, we focused on learning efficient features incrementally (online) from multi-viewpoint images of multiple gestures that were acquired by a swarm of ground robots (1). 
In this paper, we present a cascaded supervised machine learning approach to deal with the machine vision problem of selecting 3D spatially-situated robots from a networked swarm based on the recognition of spatial hand gestures. These are a natural, easy recognizable, and device- less way to enable human operators to easily interact with external artifacts such as robots. Inspired by natural human behavior, we propose an ap- proach that combines face engagement and pointing gestures to interact with a swarm of robots: standing in front of a population of robots, by looking at them and pointing at them with spatial gestures, a human operator can designate individual or groups of robots of determined size. Robots cooperate to combine their independent observations of the human's face and gestures to cooperatively determine which robots were addressed (i.e., selected). While state of the art computer vision techniques pro- vide excellent face detection, human skeleton, and gesture recognition in ideal conditions, there are often occlusions,", "In this article, we present Cellulo, a novel robotic platform that investigates the intersection of three ideas for robotics in education: designing the robots to be versatile and generic tools; blending robots into the classroom by designing them to be pervasive objects and by creating tight interactions with (already pervasive) paper; and finally considering the practical constraints of real classrooms at every stage of the design. Our platform results from these considerations and builds on a unique combination of technologies: groups of handheld haptic-enabled robots, tablets and activity sheets printed on regular paper. The robots feature holonomic motion, haptic feedback capability and high accuracy localization through a microdot pattern overlaid on top of the activity sheets, while remaining affordable (robots cost about EUR 125 at the prototype stage) and classroom-friendly. We present the platform and report on our first interaction studies, involving about 230 children." ] }
1901.08522
2969014073
We present an augmented reality human-swarm interface that combines two modalities of interaction: environment-oriented and robot-oriented. The environment-oriented modality allows the user to modify the environment (either virtual or physical) to indicate a goal to attain for the robot swarm. The robot-oriented modality makes it possible to select individual robots to reassign them to other tasks to increase performance or remedy failures. Previous research has concluded that environment-oriented interaction might prove more difficult to grasp for untrained users. In this paper, we report a user study which indicates that, at least in collective transport, environment-oriented interaction is more effective than purely robot-oriented interaction, and that the two combined achieve remarkable efficacy.
At the opposite end of the spectrum, swarm-oriented interactions occur when a user treats the swarm as a single entity. This modality of interaction has been demonstrated in navigation tasks, e.g., beacon-based @cite_16 , density-based @cite_7 , and waypoint-based @cite_8 . The main advantage of swarm-oriented interaction is that a small number of commands, e.g., a target position, is sufficient to control a large swarm. The price to pay, however, is the lack of fine-grained control over the individual robots, which makes it impossible to deal with suboptimal task assignment, individual failures, and error cascades.
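To make the idea of a single swarm-level command concrete, here is a minimal sketch of waypoint-based control in which one broadcast target position steers every robot independently. It is not taken from any of the cited systems; the function name, step size, and point-robot assumption are purely illustrative.

```python
import numpy as np

def step_towards_waypoint(positions, waypoint, max_step=0.05):
    """Move every robot one bounded step towards a shared waypoint command.

    positions: (N, 2) array of robot positions.
    waypoint:  (2,) array -- the single swarm-level command.
    """
    deltas = waypoint - positions                       # vectors from robots to the goal
    dists = np.linalg.norm(deltas, axis=1, keepdims=True)
    dists = np.maximum(dists, 1e-9)                     # avoid division by zero
    steps = deltas / dists * np.minimum(dists, max_step)
    return positions + steps

# A single command ("go to (1, 1)") steers the whole swarm, regardless of its size.
swarm = np.random.rand(50, 2)
for _ in range(100):
    swarm = step_towards_waypoint(swarm, np.array([1.0, 1.0]))
```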
{ "cite_N": [ "@cite_16", "@cite_7", "@cite_8" ], "mid": [ "2118389263", "1556924139", "1986523056" ], "abstract": [ "This study shows that appropriate human interaction can benefit a swarm of robots to achieve goals more efficiently. A set of desirable features for human swarm interaction is identified based on the principles of swarm robotics. Human swarm interaction architecture is then proposed that has all of the desirable features. A swarm simulation environment is created that allows simulating a swarm behavior in an indoor environment. The swarm behavior and the results of user interaction are studied by considering radiation source search and localization application of the swarm. Particle swarm optimization algorithm is slightly modified to enable the swarm to autonomously explore the indoor environment for radiation source search and localization. The emergence of intelligence is observed that enables the swarm to locate the radiation source completely on its own. Proposed human swarm interaction is then integrated in a simulation environment and user evaluation experiments are conducted. Participants are introduced to the interaction tool and asked to deploy the swarm to complete the missions. The performance comparison of the user guided swarm to that of the autonomous swarm shows that the interaction interface is fairly easy to learn and that user guided swarm is more efficient in achieving the goals. The results clearly indicate that the proposed interaction helped the swarm achieve emergence.", "This paper presents two approaches to externally influence a team of robots by means of time-varying density functions. These density functions represent rough references for where the robots should be located. Recently developed continuous-time algorithms move the robots so as to provide optimal coverage of a given the time-varying density functions. This makes it possible for a human operator to abstract away the number of robots and focus on the general behavior of the team of robots as a whole. Using a distributed approximation to this algorithm whereby the robots only need to access information from adjacent robots allows these algorithms to scale well with the number of robots. Simulations and robotic experiments show that the desired behaviors are achieved.", "We present a novel end-to-end solution for distributed multirobot coordination that translates multitouch gestures into low-level control inputs for teams of robots. Highlighting the need for a holistic solution to the problem of scalable human control of multirobot teams, we present a novel control algorithm with provable guarantees on the robots’ motion that lends itself well to input from modern tablet and smartphone interfaces. Concretely, we develop an iOS application in which the user is presented with a team of robots and a bounding box (prism). The user carefully translates and scales the prism in a virtual environment; these prism coordinates are wirelessly transferred to our server and then received as input to distributed onboard robot controllers. We develop a novel distributed multirobot control policy which provides guarantees on convergence to a goal with distance bounded linearly in the number of robots, and avoids interrobot collisions. This approach allows the human user to solve the cognitive tasks such as path planning, while leaving precise motion to the robots. Our system was tested in simulation and experiments, demonstrating its utility and effectiveness." ] }
1901.08522
2969014073
We present an augmented reality human-swarm interface that combines two modalities of interaction: environment-oriented and robot-oriented. The environment-oriented modality allows the user to modify the environment (either virtual or physical) to indicate a goal to attain for the robot swarm. The robot-oriented modality makes it possible to select individual robots to reassign them to other tasks to increase performance or remedy failures. Previous research has concluded that environment-oriented interaction might prove more difficult to grasp for untrained users. In this paper, we report a user study which indicates that, at least in collective transport, environment-oriented interaction is more effective than purely robot-oriented interaction, and that the two combined achieve remarkable efficacy.
Kolling et al. @cite_13 performed a study that is central to the topic of this paper. They compared two modalities of controlling a swarm, namely robot-oriented and environment-oriented, in a task in which the robots had to diffuse in the environment while avoiding connectivity loss. The robots performed a simple form of foraging, and could be controlled either by direct commands or by placing attractive beacons in the environment. The conclusions of this study are that environment-oriented interactions are not as effective as robot-oriented interactions, especially when environments are cluttered and many robots are involved.
{ "cite_N": [ "@cite_13" ], "mid": [ "2109664384" ], "abstract": [ "In this paper we present the first study of human-swarm interaction comparing two fundamental types of interaction, coined intermittent and environmental. These types are exemplified by two control methods, selection and beacon control, made available to a human operator to control a foraging swarm of robots. Selection and beacon control differ with respect to their temporal and spatial influence on the swarm and enable an operator to generate different strategies from the basic behaviors of the swarm. Selection control requires an active selection of groups of robots while beacon control exerts an influence on nearby robots within a set range. Both control methods are implemented in a testbed in which operators solve an information foraging problem by utilizing a set of swarm behaviors. The robotic swarm has only local communication and sensing capabilities. The number of robots in the swarm range from 50 to 200. Operator performance for each control method is compared in a series of missions in different environments with no obstacles up to cluttered and structured obstacles. In addition, performance is compared to simple and advanced autonomous swarms. Thirty-two participants were recruited for participation in the study. Autonomous swarm algorithms were tested in repeated simulations. Our results showed that selection control scales better to larger swarms and generally outperforms beacon control. Operators utilized different swarm behaviors with different frequency across control methods, suggesting an adaptation to different strategies induced by choice of control method. Simple autonomous swarms outperformed human operators in open environments, but operators adapted better to complex environments with obstacles. Human controlled swarms fell short of task-specific benchmarks under all conditions. Our results reinforce the importance of understanding and choosing appropriate types of human-swarm interaction when designing swarm systems, in addition to choosing appropriate swarm behaviors." ] }
1901.08544
2952763926
Space partitions of @math underlie a vast and important class of fast nearest neighbor search (NNS) algorithms. Inspired by recent theoretical work on NNS for general metric spaces [Andoni, Naor, Nikolov, Razenshteyn, Waingarten STOC 2018, FOCS 2018], we develop a new framework for building space partitions reducing the problem to balanced graph partitioning followed by supervised classification. We instantiate this general approach with the KaHIP graph partitioner [Sanders, Schulz SEA 2013] and neural networks, respectively, to obtain a new partitioning procedure called Neural Locality-Sensitive Hashing (Neural LSH). On several standard benchmarks for NNS, our experiments show that the partitions obtained by Neural LSH consistently outperform partitions found by quantization-based and tree-based methods.
On the empirical side, currently the fastest indexing techniques for the NNS problem are graph-based @cite_30 . The high-level idea is to construct a graph on the dataset (it can be the @math -NN graph, but other constructions are also possible), and then, for each query, perform a walk that eventually converges to the nearest neighbor. Although very fast, graph-based approaches have suboptimal ``locality of reference'', which makes them less suitable for several modern architectures. For instance, this is the case when the algorithm is run on a GPU @cite_28 or the data is stored in external memory @cite_9 . This justifies further study of partition-based methods.
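As an illustration of the greedy-walk idea described above, the following is a minimal sketch of best-first descent on a precomputed @math -NN graph. It is not the algorithm of any cited system; the brute-force graph construction, the single entry point, and all names are illustrative assumptions.

```python
import numpy as np

def build_knn_graph(points, k=8):
    """Brute-force k-NN graph: row i lists the indices of the k closest points to i."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                     # a point is not its own neighbor
    return np.argsort(dists, axis=1)[:, :k]

def greedy_walk(points, neighbors, query, start=0):
    """Walk the graph, always moving to the neighbor closest to the query."""
    current = start
    while True:
        cand = neighbors[current]
        best = cand[np.argmin(np.linalg.norm(points[cand] - query, axis=1))]
        if np.linalg.norm(points[best] - query) >= np.linalg.norm(points[current] - query):
            return current                              # local minimum: no neighbor is closer
        current = best

points = np.random.rand(500, 8)
graph = build_knn_graph(points)
approx_nn = greedy_walk(points, graph, query=np.random.rand(8))
```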
{ "cite_N": [ "@cite_30", "@cite_9", "@cite_28" ], "mid": [ "2595294663", "642889137", "" ], "abstract": [ "The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model G : ℝk → ℝn. Our main theorem is that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice for an l2 l2 recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use 5-10x fewer measurements than Lasso for the same accuracy.", "Nearest neighbor searches in high-dimensional space have many important applications in domains such as data mining, and multimedia databases. The problem is challenging due to the phenomenon called \"curse of dimensionality\". An alternative solution is to consider algorithms that returns a c-approximate nearest neighbor (c-ANN) with guaranteed probabilities. Locality Sensitive Hashing (LSH) is among the most widely adopted method, and it achieves high efficiency both in theory and practice. However, it is known to require an extremely high amount of space for indexing, hence limiting its scalability. In this paper, we propose several surprisingly simple methods to answer c-ANN queries with theoretical guarantees requiring only a single tiny index. Our methods are highly flexible and support a variety of functionalities, such as finding the exact nearest neighbor with any given probability. In the experiment, our methods demonstrate superior performance against the state-of-the-art LSH-based methods, and scale up well to 1 billion high-dimensional points on a single commodity PC.", "" ] }
1901.08560
2911822711
We introduce @math , an extreme case of semi-supervised learning with ultra-sparse categorisation where some classes have no labels in the training set. That is, in the training data some classes are sparsely labelled and other classes appear only as unlabelled data. Many real-world datasets are conceivably of this type. We demonstrate that effective learning in this regime is only possible when a model is capable of capturing both semi-supervised and unsupervised learning. We develop two deep generative models for classification in this regime that extend previous deep generative models designed for semi-supervised learning. By changing their probabilistic structure to contain a mixture of Gaussians in their continuous latent space, these new models can learn in both unsupervised and semi-unsupervised paradigms. We demonstrate their performance both for semi-unsupervised and unsupervised learning on various standard datasets. We show that our models can learn in an semi-unsupervised manner on Fashion-MNIST. Here we artificially mask out all labels for half of the classes of data and keep @math of labels for the remaining classes. Our model is able to learn effectively, obtaining a trained classifier with @math test set accuracy. We also can train on Fashion-MNIST unsupervised, obtaining @math test set accuracy. Additionally, doing the same for MNIST unsupervised we get @math test set accuracy, which is state-of-the art for fully probabilistic deep generative models.
For clustering, both VaDE @cite_4 and GM-VAE @cite_37 extend VAEs with some form of mixture model in their learnt, continuous latent space. VaDE has the same forward model as the first model we will propose, but it uses Bayes' rule to define its classifier variational posterior over labels, rather than having a separate network parameterising it. The GM-VAE has a mixture of Gaussians in one of its stochastic layers, where this mixture is conditioned on another stochastic variable. The Cluster-aware Generative Model (CaGeM) @cite_7 can, like our models, learn in both unsupervised and semi-supervised regimes. However, the model's performance at clustering data into components corresponding to ground-truth classes is not given.
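As a concrete illustration of the Bayes'-rule classifier mentioned above, a VaDE-style model with a Gaussian-mixture prior in the latent space can read off the posterior over cluster labels directly from that prior. The notation below (mixture weights, component means and variances) is illustrative rather than copied from the cited papers.

```latex
% Posterior over the cluster label c given a latent code z, obtained by Bayes' rule
% from the Gaussian-mixture prior p(z, c) = p(c) p(z | c); illustrative notation only.
q(c \mid \mathbf{z})
  = \frac{\pi_c \, \mathcal{N}\!\left(\mathbf{z} \mid \boldsymbol{\mu}_c,
          \operatorname{diag}(\boldsymbol{\sigma}_c^2)\right)}
         {\sum_{c'=1}^{K} \pi_{c'} \, \mathcal{N}\!\left(\mathbf{z} \mid \boldsymbol{\mu}_{c'},
          \operatorname{diag}(\boldsymbol{\sigma}_{c'}^2)\right)},
  \qquad \sum_{c=1}^{K} \pi_c = 1 .
```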
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_7" ], "mid": [ "2556467266", "2730106296", "2608412030" ], "abstract": [ "We study a variant of the variational autoencoder model (VAE) with a Gaussian mixture as a prior distribution, with the goal of performing unsupervised clustering through deep generative models. We observe that the known problem of over-regularisation that has been shown to arise in regular VAEs also manifests itself in our model and leads to cluster degeneracy. We show that a heuristic called minimum information constraint that has been shown to mitigate this effect in VAEs can also be applied to improve unsupervised clustering performance with our model. Furthermore we analyse the effect of this heuristic and provide an intuition of the various processes with the help of visualizations. Finally, we demonstrate the performance of our model on synthetic data, MNIST and SVHN, showing that the obtained clusters are distinct, interpretable and result in achieving competitive performance on unsupervised clustering to the state-of-the-art results.", "", "Deep generative models trained with large amounts of unlabelled data have proven to be powerful within the domain of unsupervised learning. Many real life data sets contain a small amount of labelled data points, that are typically disregarded when training generative models. We propose the Cluster-aware Generative Model, that uses unlabelled information to infer a latent representation that models the natural clustering of the data, and additional labelled data points to refine this clustering. The generative performances of the model significantly improve when labelled information is exploited, obtaining a log-likelihood of -79.38 nats on permutation invariant MNIST, while also achieving competitive semi-supervised classification accuracies. The model can also be trained fully unsupervised, and still improve the log-likelihood performance with respect to related methods." ] }
1901.08649
2912432356
We present a novel method for learning a set of disentangled reward functions that sum to the original environment reward and are constrained to be independently obtainable. We define independent obtainability in terms of value functions with respect to obtaining one learned reward while pursuing another learned reward. Empirically, we illustrate that our method can learn meaningful reward decompositions in a variety of domains and that these decompositions exhibit some form of generalization performance when the environment's reward is modified. Theoretically, we derive results about the effect of maximizing our method's objective on the resulting reward functions and their corresponding optimal policies.
Some methods seek robust, interpretable, disentangled features @cite_3 @cite_2 @cite_0 @cite_4 . For example, one line of work does so by creating an ``information bottleneck'' @cite_7 that pressures the latent representation towards a unit Gaussian, while another accomplishes a similar goal by maximizing the mutual information between components of the latent representation and other independent random variables. Methods such as these have been leveraged in RL to decompose the state of the environment; in particular, @math -VAE has been applied to learn disentangled, ``modular'' representations of the environment state for use in RL with many goals @cite_1 .
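For concreteness, a beta-VAE-style objective of the kind alluded to above penalizes deviation of the approximate posterior from a unit-Gaussian prior, with the strength of the bottleneck controlled by a single coefficient. This is the standard textbook form, not a formula quoted from the cited works.

```latex
% beta-VAE-style objective: reconstruction term plus a weighted KL "bottleneck".
% Setting beta > 1 strengthens the pressure towards the isotropic unit-Gaussian prior,
% which is what encourages disentangled latent factors.
\mathcal{L}(\theta, \phi; \mathbf{x})
  = \mathbb{E}_{q_\phi(\mathbf{z} \mid \mathbf{x})}\!\left[\log p_\theta(\mathbf{x} \mid \mathbf{z})\right]
  - \beta \, D_{\mathrm{KL}}\!\left(q_\phi(\mathbf{z} \mid \mathbf{x}) \,\middle\|\, \mathcal{N}(\mathbf{0}, \mathbf{I})\right).
```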
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_1", "@cite_3", "@cite_0", "@cite_2" ], "mid": [ "2737047298", "2964184826", "1594201624", "1541109270", "2810132790", "2951392118" ], "abstract": [ "The large pose discrepancy between two face images is one of the key challenges in face recognition. Conventional approaches for pose-invariant face recognition either perform face frontalization on, or learn a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes Disentangled Representation learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator allows DR-GAN to learn a generative and discriminative representation, in addition to image synthesis. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as the input, and generate one unified representation along with an arbitrary number of synthetic images. Quantitative and qualitative evaluation on both controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art.", "Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the information bottleneck (IB) principle. We first show that any DNN can be quantified by the mutual information between the layers and the input and output variables. Using this representation we can calculate the optimal information theoretic limits of the DNN and obtain finite sample generalization bounds. The advantage of getting closer to the theoretical limit is quantifiable both by the generalization bound and by the network's simplicity. We argue that both the optimal architecture, number of layers and features connections at each layer, are related to the bifurcation points of the information bottleneck tradeoff, namely, relevant compression of the input layer with respect to the output layer. The hierarchical representations at the layered network naturally correspond to the structural phase transitions along the information curve. We believe that this new insight can lead to new optimality bounds and deep learning algorithms.", "", "", "Intrinsically motivated goal exploration processes enable agents to autonomously sample goals to explore efficiently complex environments with high-dimensional continuous actions. They have been applied successfully to real world robots to discover repertoires of policies producing a wide diversity of effects. Often these algorithms relied on engineered goal spaces but it was recently shown that one can use deep representation learning algorithms to learn an adequate goal space in simple environments. However, in the case of more complex environments containing multiple objects or distractors, an efficient exploration requires that the structure of the goal space reflects the one of the environment. In this paper we show that using a disentangled goal space leads to better exploration performances than an entangled goal space. We further show that when the representation is disentangled, one can leverage it by sampling goals that maximize learning progress in a modular manner. 
Finally, we show that the measure of learning progress, used to drive curiosity-driven exploration, can be used simultaneously to discover abstract independently controllable features of the environment.", "We introduce a conditional generative model for learning to disentangle the hidden factors of variation within a set of labeled observations, and separate them into complementary codes. One code summarizes the specified factors of variation associated with the labels. The other summarizes the remaining unspecified variability. During training, the only available source of supervision comes from our ability to distinguish among different observations belonging to the same class. Examples of such observations include images of a set of labeled objects captured at different viewpoints, or recordings of set of speakers dictating multiple phrases. In both instances, the intra-class diversity is the source of the unspecified factors of variation: each object is observed at multiple viewpoints, and each speaker dictates multiple phrases. Learning to disentangle the specified factors from the unspecified ones becomes easier when strong supervision is possible. Suppose that during training, we have access to pairs of images, where each pair shows two different objects captured from the same viewpoint. This source of alignment allows us to solve our task using existing methods. However, labels for the unspecified factors are usually unavailable in realistic scenarios where data acquisition is not strictly controlled. We address the problem of disentanglement in this more general setting by combining deep convolutional autoencoders with a form of adversarial training. Both factors of variation are implicitly captured in the organization of the learned embedding space, and can be used for solving single-image analogies. Experimental results on synthetic and real datasets show that the proposed method is capable of generalizing to unseen classes and intra-class variabilities." ] }
1901.08787
2950063140
In this study, a multiple hypothesis tracking (MHT) algorithm for multi-target multi-camera tracking (MCT) with disjoint views is proposed. Our method forms track-hypothesis trees, and each of their branches represents a multi-camera track of a target that may move within a camera as well as across cameras. Furthermore, multi-target tracking within a camera is performed simultaneously with the tree formation by manipulating the status of each track hypothesis. Each status represents one of three different stages of a multi-camera track: tracking, searching, and end-of-track. The tracking status means that the target is being tracked by a single-camera tracker. In the searching status, a disappeared target is examined to determine whether it reappears in another camera. The end-of-track status indicates that the target has exited the camera network, inferred from its lengthy invisibility. These three statuses assist MHT in forming the track-hypothesis trees for multi-camera tracking. Furthermore, we present a gating technique for eliminating unlikely observation-to-track associations. In the experiments, we evaluate the proposed method on two datasets, DukeMTMC and NLPR-MCT, and demonstrate that it outperforms the state-of-the-art method in terms of accuracy. In addition, we show that the proposed method can operate online and in real time.
Single-camera tracking (SCT), which tracks multiple targets in a single scene, is also called multi-object tracking (MOT). Many approaches have been proposed to improve MOT. Track-by-detection, which optimizes a global objective function over many frames, has emerged as a powerful MOT paradigm in recent years @cite_6 . Network-flow-based methods are among the most successful track-by-detection approaches @cite_43 @cite_3 @cite_37 . These methods efficiently optimize their objective function using the push-relabel method @cite_29 or successive shortest path algorithms @cite_43 @cite_37 . However, the pairwise terms in the network-flow formulation are restrictive for representing higher-order motion models, e.g., linear or constant-velocity motion models @cite_13 . In contrast, formalizing multi-object tracking as a multidimensional assignment (MDA) problem produces more general representations of the computed trajectories, since MDA can exploit higher-order information @cite_13 @cite_18 . Solutions for MDA include MHT @cite_15 @cite_18 @cite_36 and Markov chain Monte Carlo (MCMC) data association @cite_28 . While MCMC data association explores the solution space stochastically, MHT searches it in a deterministic way.
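To illustrate the network-flow formulation discussed above, MAP data association can be written as an integer program over flow indicator variables. The statement below is a generic, simplified sketch (unit capacities, no explicit occlusion handling), not the exact objective of any cited work.

```latex
% Min-cost-flow view of data association: f_ij = 1 links detections i and j in the
% same trajectory; c_ij is the pairwise linking cost, and c^en_i / c^ex_i are the
% costs of starting / terminating a track at detection i.  Unit capacities and no
% occlusion handling -- a simplified sketch, not the objective of any cited work.
\min_{f} \;\sum_{i} c^{\mathrm{en}}_{i} f^{\mathrm{en}}_{i}
        + \sum_{i,j} c_{ij} f_{ij}
        + \sum_{i} c^{\mathrm{ex}}_{i} f^{\mathrm{ex}}_{i}
\quad \text{s.t.} \quad
f^{\mathrm{en}}_{i} + \sum_{j} f_{ji}
  = f^{\mathrm{ex}}_{i} + \sum_{j} f_{ij} \le 1,
\qquad f_{ij},\, f^{\mathrm{en}}_{i},\, f^{\mathrm{ex}}_{i} \in \{0, 1\}.
```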
{ "cite_N": [ "@cite_37", "@cite_18", "@cite_36", "@cite_28", "@cite_29", "@cite_3", "@cite_6", "@cite_43", "@cite_15", "@cite_13" ], "mid": [ "", "2237765446", "", "2127021804", "2111644456", "", "2016135469", "", "2100548006", "2115734113" ], "abstract": [ "", "This paper revisits the classical multiple hypotheses tracking (MHT) algorithm in a tracking-by-detection framework. The success of MHT largely depends on the ability to maintain a small list of potential hypotheses, which can be facilitated with the accurate object detectors that are currently available. We demonstrate that a classical MHT implementation from the 90's can come surprisingly close to the performance of state-of-the-art methods on standard benchmark datasets. In order to further utilize the strength of MHT in exploiting higher-order information, we introduce a method for training online appearance models for each track hypothesis. We show that appearance models can be learned efficiently via a regularized least squares framework, requiring only a few extra operations for each hypothesis branch. We obtain state-of-the-art results on popular tracking-by-detection datasets such as PETS and the recent MOT challenge.", "", "This paper presents Markov chain Monte Carlo data association (MCMCDA) for solving data association problems arising in multitarget tracking in a cluttered environment. When the number of targets is fixed, the single-scan version of MCMCDA approximates joint probabilistic data association (JPDA). Although the exact computation of association probabilities in JPDA is NP-hard, we prove that the single-scan MCMCDA algorithm provides a fully polynomial randomized approximation scheme for JPDA. For general multitarget tracking problems, in which unknown numbers of targets appear and disappear at random times, we present a multi-scan MCMCDA algorithm that approximates the optimal Bayesian filter. We also present extensive simulation studies supporting theoretical results in this paper. Our simulation results also show that MCMCDA outperforms multiple hypothesis tracking (MHT) by a significant margin in terms of accuracy and efficiency under extreme conditions, such as a large number of targets in a dense environment, low detection probabilities, and high false alarm rates.", "We propose a network flow based optimization method for data association needed for multiple object tracking. The maximum-a-posteriori (MAP) data association problem is mapped into a cost-flow network with a non-overlap constraint on trajectories. The optimal data association is found by a min-cost flow algorithm in the network. The network is augmented to include an explicit occlusion model(EOM) to track with long-term inter-object occlusions. A solution to the EOM-based network is found by an iterative approach built upon the original algorithm. Initialization and termination of trajectories and potential false observations are modeled by the formulation intrinsically. The method is efficient and does not require hypotheses pruning. Performance is compared with previous results on two public pedestrian datasets to show its improvement.", "", "We analyze the computational problem of multi-object tracking in video sequences. We formulate the problem using a cost function that requires estimating the number of tracks, as well as their birth and death states. We show that the global solution can be obtained with a greedy algorithm that sequentially instantiates tracks using shortest path computations on a flow network. 
Greedy algorithms allow one to embed pre-processing steps, such as nonmax suppression, within the tracking algorithm. Furthermore, we give a near-optimal algorithm based on dynamic programming which runs in time linear in the number of objects and linear in the sequence length. Our algorithms are fast, simple, and scalable, allowing us to process dense input data. This results in state-of-the-art performance.", "", "An efficient implementation of Reid's multiple hypothesis tracking (MHT) algorithm is presented in which the k-best hypotheses are determined in polynomial time using an algorithm due to Murly (1968). The MHT algorithm is then applied to several motion sequences. The MHT capabilities of track initiation, termination, and continuation are demonstrated together with the latter's capability to provide low level support of temporary occlusion of tracks. Between 50 and 150 corner features are simultaneously tracked in the image plane over a sequence of up to 51 frames. Each corner is tracked using a simple linear Kalman filter and any data association uncertainty is resolved by the MHT. Kalman filter parameter estimation is discussed, and experimental results show that the algorithm is robust to errors in the motion model. An investigation of the performance of the algorithm as a function of look-ahead (tree depth) indicates that high accuracy can be obtained for tree depths as shallow as three. Experimental results suggest that a real-time MHT solution to the motion correspondence problem is possible for certain classes of scenes.", "We present an iterative approximate solution to the multidimensional assignment problem under general cost functions. The method maintains a feasible solution at every step, and is guaranteed to converge. It is similar to the iterated conditional modes (ICM) algorithm, but applied at each step to a block of variables representing correspondences between two adjacent frames, with the optimal conditional mode being calculated exactly as the solution to a two-frame linear assignment problem. Experiments with ground-truthed trajectory data show that the method outperforms both network-flow data association and greedy recursive filtering using a constant velocity motion model." ] }
1901.08787
2950063140
In this study, a multiple hypothesis tracking (MHT) algorithm for multi-target multi-camera tracking (MCT) with disjoint views is proposed. Our method forms track-hypothesis trees, and each of their branches represents a multi-camera track of a target that may move within a camera as well as across cameras. Furthermore, multi-target tracking within a camera is performed simultaneously with the tree formation by manipulating the status of each track hypothesis. Each status represents one of three different stages of a multi-camera track: tracking, searching, and end-of-track. The tracking status means that the target is being tracked by a single-camera tracker. In the searching status, a disappeared target is examined to determine whether it reappears in another camera. The end-of-track status indicates that the target has exited the camera network, inferred from its lengthy invisibility. These three statuses assist MHT in forming the track-hypothesis trees for multi-camera tracking. Furthermore, we present a gating technique for eliminating unlikely observation-to-track associations. In the experiments, we evaluate the proposed method on two datasets, DukeMTMC and NLPR-MCT, and demonstrate that it outperforms the state-of-the-art method in terms of accuracy. In addition, we show that the proposed method can operate online and in real time.
Multiple hypothesis tracking (MHT) was first presented in @cite_23 and is regarded as one of the earliest successful algorithms for visual tracking. MHT maintains all track hypotheses by building track-hypothesis trees whose branches each represent a possible data association result (a track hypothesis). The probability of a track hypothesis is computed by evaluating the quality of the data association results along its branch. Ambiguities in data association that arise from short occlusions or missed detections usually do not matter for MHT, since the best hypothesis is computed from higher-order data association information over the entire set of track hypotheses. In this paper, we apply MHT to solve the multi-camera tracking problem.
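The following is a highly simplified sketch, not the authors' implementation, of how track-hypothesis branches can be expanded and scored: each branch stores the cumulative log-likelihood of its associations, a gate discards unlikely observation-to-track pairings, and the best branch is the one with the highest score. All names, the Gaussian gating model, and the fixed miss penalty are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    """One track hypothesis: the observations associated so far and their score."""
    observations: list = field(default_factory=list)     # one entry (or None) per frame
    log_likelihood: float = 0.0

def expand(branches, detections, predict, gate=9.0, sigma=1.0, miss_penalty=-4.0):
    """Grow each branch with every gated detection, plus a 'missed detection' child."""
    children = []
    for b in branches:
        mean = predict(b)                                 # predicted target position
        for z in detections:
            d2 = sum((zi - mi) ** 2 for zi, mi in zip(z, mean)) / sigma ** 2
            if d2 > gate:                                 # gating: drop unlikely pairings
                continue
            children.append(Branch(b.observations + [z], b.log_likelihood - 0.5 * d2))
        # Allow the target to be missed in this frame, at a fixed log-likelihood cost.
        children.append(Branch(b.observations + [None], b.log_likelihood + miss_penalty))
    return children

def best_hypothesis(branches):
    """The branch with the highest cumulative score is the current best hypothesis."""
    return max(branches, key=lambda b: b.log_likelihood)
```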
{ "cite_N": [ "@cite_23" ], "mid": [ "2127923214" ], "abstract": [ "An algorithm for tracking multiple targets in a cluttered enviroment is developed. The algorithm is capable of initiating tracks, accounting for false or missing reports, and processing sets of dependent reports. As each measurement is received, probabilities are calculated for the hypotheses that the measurement came from previously known targets in a target file, or from a new target, or that the measurement is false. Target states are estimated from each such data-association hypothesis using a Kalman filter. As more measurements are received, the probabilities of joint hypotheses are calculated recursively using all available information such as density of unknown targets, density of false targets, probability of detection, and location uncertainty. This branching technique allows correlation of a measurement with its source based on subsequent, as well as previous, data. To keep the number of hypotheses reasonable, unlikely hypotheses are eliminated and hypotheses with similar target estimates are combined. To minimize computational requirements, the entire set of targets and measurements is divided into clusters that are solved independently. In an illustrative example of aircraft tracking, the algorithm successfully tracks targets over a wide range of conditions." ] }
1901.08787
2950063140
In this study, a multiple hypothesis tracking (MHT) algorithm for multi-target multi-camera tracking (MCT) with disjoint views is proposed. Our method forms track-hypothesis trees, and each of their branches represents a multi-camera track of a target that may move within a camera as well as across cameras. Furthermore, multi-target tracking within a camera is performed simultaneously with the tree formation by manipulating the status of each track hypothesis. Each status represents one of three different stages of a multi-camera track: tracking, searching, and end-of-track. The tracking status means that the target is being tracked by a single-camera tracker. In the searching status, a disappeared target is examined to determine whether it reappears in another camera. The end-of-track status indicates that the target has exited the camera network, inferred from its lengthy invisibility. These three statuses assist MHT in forming the track-hypothesis trees for multi-camera tracking. Furthermore, we present a gating technique for eliminating unlikely observation-to-track associations. In the experiments, we evaluate the proposed method on two datasets, DukeMTMC and NLPR-MCT, and demonstrate that it outperforms the state-of-the-art method in terms of accuracy. In addition, we show that the proposed method can operate online and in real time.
Multi-camera tracking aims to establish target correspondences among observations obtained from multiple cameras, so as to achieve consistent target labelling across all cameras in the network @cite_35 . Earlier research in MCT only tried to address tracking targets across cameras, assuming that SCT was already solved. However, researchers have recently argued that the assumption that intra-camera tracks are readily available is unrealistic @cite_39 . Therefore, solving the MCT problem while simultaneously treating the SCT problem addresses a more realistic setting. Y. T. Tesfaye et al. @cite_35 proposed a constrained dominant set clustering (CDSC) based framework that utilizes a three-layer hierarchical approach, where the SCT problem is solved in the first two layers, and the MCT problem is then solved in the third layer by merging tracks of the same person across different cameras. In this paper, we likewise solve the problem of across-camera data association (MCT) with the proposed MHT, while SCT is simultaneously handled by a real-time multi-object tracker such as @cite_20 @cite_1 .
{ "cite_N": [ "@cite_35", "@cite_1", "@cite_20", "@cite_39" ], "mid": [ "2640181096", "2573106815", "2398802088", "2090322660" ], "abstract": [ "In this paper, a unified three-layer hierarchical approach for solving tracking problems in multiple non-overlapping cameras is proposed. Given a video and a set of detections (obtained by any person detector), we first solve within-camera tracking employing the first two layers of our framework and, then, in the third layer, we solve across-camera tracking by merging tracks of the same person in all cameras in a simultaneous fashion. To best serve our purpose, a constrained dominant sets clustering (CDSC) technique, a parametrized version of standard quadratic optimization, is employed to solve both tracking tasks. The tracking problem is caste as finding constrained dominant sets from a graph. In addition to having a unified framework that simultaneously solves within- and across-camera tracking, the third layer helps link broken tracks of the same person occurring during within-camera tracking. In this work, we propose a fast algorithm, based on dynamics from evolutionary game theory, which is efficient and salable to large-scale real-world applications.", "This paper presents an online multiple object tracking (MOT) method based on tracking by detection. Tracking by detection has the inherent problems by false and miss detection. To deal with the false detection, we employed the Gaussian mixture probability hypothesis density (GM-PHD) filter because this filter is robust to noisy and random data processing containing many false observations. Thus, we revised the GM-PHD filter for visual MOT. Also, to handle miss detection, we propose a hierarchical tracking framework to associate fragmented or ID switched tracklets. Experiments with the representative dataset PETS 2009 S2L1 show that our framework are effective to decrease the errors by false and miss detection, and real-time capability.", "We cast the problem of tracking several people as a graph partitioning problem that takes the form of an NP-hard binary integer program. We propose a tractable, approximate, online solution through the combination of a multi-stage cascade and a sliding temporal window. Our experiments demonstrate significant accuracy improvement over the state of the art and real-time post-detection performance.", "We present a distributed system for wide-area multi-object tracking across disjoint camera views. Every camera in the system performs multi-object tracking, and keeps its own trackers and trajectories. The data from multiple features are exchanged between adjacent cameras for object matching. We employ a probabilistic Petri Net-based approach to account for the uncertainties of the vision algorithms (such as unreliable background subtraction, and tracking failure) and to incorporate the available domain knowledge. We combine appearance features of objects as well as the travel-time evidence for target matching and consistent labeling across disjoint camera views. 3D color histogram, histogram of oriented gradients, local binary patterns, object size and aspect ratio are used as the appearance features. The distribution of the travel time is modeled by a Gaussian mixture model. Multiple features are combined by the weights, which are assigned based on the reliability of the features. 
By incorporating the domain knowledge about the camera configurations and the information about the received packets from other cameras, certain transitions are fired in the probabilistic Petri net. The system is trained to learn different parameters of the matching process, and updated online. We first present wide-area tracking of vehicles, where we used three non-overlapping cameras. The first and the third cameras are approximately 150 m apart from each other with two intersections in the blind region. We also present an example of applying our method to a people-tracking scenario. The results show the success of the proposed method. A comparison between our work and related work is also presented." ] }
1901.08573
2913266441
We identify a trade-off between robustness and accuracy that serves as a guiding principle in the design of defenses against adversarial examples. Although this problem has been widely studied empirically, much remains unknown concerning the theory underlying this trade-off. In this work, we decompose the prediction error for adversarial examples (robust error) as the sum of the natural (classification) error and boundary error, and provide a differentiable upper bound using the theory of classification-calibrated loss, which is shown to be the tightest possible upper bound uniform over all probability distributions and measurable predictors. Inspired by our theoretical analysis, we also design a new defense method, TRADES, to trade adversarial robustness off against accuracy. Our proposed algorithm performs well experimentally in real-world datasets. The methodology is the foundation of our entry to the NeurIPS 2018 Adversarial Vision Challenge in which we won the 1st place out of 2,000 submissions, surpassing the runner-up approach by @math in terms of mean @math perturbation distance.
Compared with attack methods, adversarial defense methods are relatively few. Robust optimization based defenses are inspired by the above-mentioned attacks. Intuitively, these methods train a network by fitting its parameters to adversarial examples generated during training, via a min-max formulation of the kind sketched below. Following this framework, @cite_20 @cite_43 considered one-step adversaries, while @cite_55 worked with multi-step methods for the inner maximization problem. There are, however, two critical differences between robust optimization based defenses and the present paper. Firstly, robust optimization based defenses lack theoretical guarantees. Secondly, such methods do not consider the trade-off between accuracy and robustness.
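The sketch below gives the standard min-max form of robust-optimization-based adversarial training, written in generic notation (a loss L, a perturbation budget epsilon); it is a typical textbook formulation rather than the precise objective of any of the cited defenses.

```latex
% Robust-optimization view of adversarial training: the inner maximization finds a
% worst-case perturbation inside an epsilon-ball, and the outer minimization fits the
% network parameters theta to those adversarial examples.
\min_{\theta} \; \mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{D}}
  \left[ \max_{\|\boldsymbol{\delta}\| \le \epsilon}
         L\!\left(f_\theta(\mathbf{x} + \boldsymbol{\delta}),\, y\right) \right].
```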
{ "cite_N": [ "@cite_43", "@cite_55", "@cite_20" ], "mid": [ "2269778407", "2964253222", "2230740169" ], "abstract": [ "Abstract We show that adversarial training of supervised learning models is in fact a robust optimization procedure. To do this, we establish a general framework for increasing local stability of supervised learning models using robust optimization. The framework is general and broadly applicable to differentiable non-parametric models, e.g., Artificial Neural Networks (ANNs). Using an alternating minimization-maximization procedure, the loss of the model is minimized with respect to perturbed examples that are generated at each parameter update, rather than with respect to the original training data. Our proposed framework generalizes adversarial training, as well as previous approaches for increasing local stability of ANNs. Experimental results reveal that our approach increases the robustness of the network to existing adversarial examples, while making it harder to generate new ones. Furthermore, our algorithm improves the accuracy of the networks also on the original test data.", "Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against a well-defined class of adversaries. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest robustness against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.", "The robustness of neural networks to intended perturbations has recently attracted significant attention. In this paper, we propose a new method, , that learns robust classifiers from supervised data. The proposed method takes finding adversarial examples as an intermediate step. A new and simple way of finding adversarial examples is presented and experimentally shown to be efficient. Experimental results demonstrate that resulting learning method greatly improves the robustness of the classification models produced." ] }
1901.08573
2913266441
We identify a trade-off between robustness and accuracy that serves as a guiding principle in the design of defenses against adversarial examples. Although this problem has been widely studied empirically, much remains unknown concerning the theory underlying this trade-off. In this work, we decompose the prediction error for adversarial examples (robust error) as the sum of the natural (classification) error and boundary error, and provide a differentiable upper bound using the theory of classification-calibrated loss, which is shown to be the tightest possible upper bound uniform over all probability distributions and measurable predictors. Inspired by our theoretical analysis, we also design a new defense method, TRADES, to trade adversarial robustness off against accuracy. Our proposed algorithm performs well experimentally in real-world datasets. The methodology is the foundation of our entry to the NeurIPS 2018 Adversarial Vision Challenge in which we won the 1st place out of 2,000 submissions, surpassing the runner-up approach by @math in terms of mean @math perturbation distance.
We mention another related line of research in adversarial defenses---relaxation based defenses. Given that the inner maximization problem might be hard to solve due to the non-convex nature of deep neural networks, @cite_30 and @cite_13 considered a convex outer approximation of the set of activations reachable through a norm-bounded perturbation for one-hidden-layer neural networks. @cite_2 later scaled these methods to larger models, and @cite_33 proposed a tighter convex approximation. @cite_14 @cite_58 considered a Lagrangian penalty formulation of perturbing the underlying data distribution within a Wasserstein ball. These approaches, however, do not apply when the activation function is ReLU.
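As an example of the Lagrangian penalty formulation mentioned last, a distributionally robust objective of this type can be written as follows, with gamma the penalty coefficient and W_c a Wasserstein transport cost. This is a generic statement of the idea under simplifying assumptions, not the precise objective of the cited papers.

```latex
% Distributionally robust training with a Wasserstein Lagrangian penalty: the adversary
% may shift the data distribution P away from the reference distribution P_0, but pays
% a price proportional to the transport cost W_c(P, P_0), weighted by gamma.
\min_{\theta} \; \sup_{P}
  \left\{ \mathbb{E}_{(\mathbf{x}, y) \sim P}\!\left[ L\!\left(f_\theta(\mathbf{x}), y\right) \right]
          - \gamma \, W_c\!\left(P, P_0\right) \right\}.
```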
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_33", "@cite_2", "@cite_58", "@cite_13" ], "mid": [ "2963496101", "2963539647", "2892354372", "2962943487", "2962935454", "2963626025" ], "abstract": [ "", "Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We take the principled view of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbation of the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. For imperceptible perturbations, our method matches or outperforms heuristic approaches.", "Research on adversarial examples are evolved in arms race between defenders who attempt to train robust networks and attackers who try to prove them wrong. This has spurred interest in methods for certifying the robustness of a network. Methods based on combinatorial optimization compute the true robustness but do not yet scale. Methods based on convex relaxations scale better but can only yield non-vacuous bounds on networks trained with those relaxations. In this paper, we propose a new semidefinite relaxation that applies to ReLU networks with any number of layers. We show that it produces meaningful robustness guarantees across a spectrum of networks that were trained against other objectives, something previous convex relaxations are not able to achieve.", "Recent work has developed methods for learning deep network classifiers that are robust to norm-bounded adversarial perturbation; however, these methods are currently only possible for relatively small feedforward networks. In this paper, in an effort to scale these approaches to substantially larger models, we extend previous work in three main directly. First, we present a technique for extending these training procedures to much more general networks, with skip connections (such as ResNets) and general nonlinearities; the approach is fully modular, and can be implemented automatically analogously to automatic differentiation. Second, in the specific case of l∞ adversarial perturbations and networks with ReLU nonlinearities, we adopt a nonlinear random projection for training, which scales in the number of hidden units (previous approached scaled quadratically). Third, we show how to further improve robust error through cascade models. On both MNIST and CIFAR data sets, we train classifiers that improve substantially on the state of the art in provable robust adversarial error bounds: from 5.8 to 3.1 on MNIST (with l∞ perturbations of ϵ=0.1), and from 80 to 36.4 on CIFAR (with l∞ perturbations of ϵ=2 255).", "We are concerned with learning models that generalize well to different unseen domains. We consider a worst-case formulation over data distributions that are near the source domain in the feature space. Only using training data from the source domain, we propose an iterative procedure that augments the dataset with examples from a fictitious target domain that is \"hard\" under the current model. 
We show that our iterative scheme is an adaptive data augmentation method where we append adversarial examples at each iteration. For softmax losses, we show that our method is a data-dependent regularization scheme that behaves differently from classical regularizers (e.g., ridge or lasso) that regularize towards zero. On digit recognition and semantic segmentation tasks, we empirically observe that our method learns models that improve performance across a priori unknown data distributions.", "While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no that perturbs each pixel by at most @math can cause more than @math test error." ] }
1901.08573
2913266441
We identify a trade-off between robustness and accuracy that serves as a guiding principle in the design of defenses against adversarial examples. Although this problem has been widely studied empirically, much remains unknown concerning the theory underlying this trade-off. In this work, we decompose the prediction error for adversarial examples (robust error) as the sum of the natural (classification) error and boundary error, and provide a differentiable upper bound using the theory of classification-calibrated loss, which is shown to be the tightest possible upper bound uniform over all probability distributions and measurable predictors. Inspired by our theoretical analysis, we also design a new defense method, TRADES, to trade adversarial robustness off against accuracy. Our proposed algorithm performs well experimentally in real-world datasets. The methodology is the foundation of our entry to the NeurIPS 2018 Adversarial Vision Challenge in which we won the 1st place out of 2,000 submissions, surpassing the runner-up approach by @math in terms of mean @math perturbation distance.
Despite a large amount of empirical work on adversarial defenses, many fundamental questions remain open in theory, with only a few preliminary explorations in recent years. @cite_47 derived upper bounds on the robustness to perturbations of any classification function, under the assumption that the data are generated by a smooth generative model. From the computational perspective, @cite_42 @cite_11 showed that adversarial examples in machine learning are likely not due to information-theoretic limitations, but could instead be due to computational hardness. From the statistical perspective, @cite_57 showed that the sample complexity of robust training can be significantly larger than that of standard training; this gap holds irrespective of the training algorithm or the model family. @cite_16 and @cite_31 studied the uniform convergence of the robust error @math by extending the classic VC and Rademacher arguments, respectively, to the adversarial learning setting. A recent work demonstrates the existence of a trade-off between accuracy and robustness @cite_59 ; however, it does not provide any methodology for tackling this trade-off.
{ "cite_N": [ "@cite_31", "@cite_42", "@cite_57", "@cite_59", "@cite_47", "@cite_16", "@cite_11" ], "mid": [ "2898193427", "2803732607", "2947294642", "2964116600", "2963849784", "2892179671", "2900946294" ], "abstract": [ "Many machine learning models are vulnerable to adversarial attacks; for example, adding adversarial perturbations that are imperceptible to humans can often make machine learning models produce wrong predictions with high confidence. Moreover, although we may obtain robust models on the training dataset via adversarial training, in some problems the learned models cannot generalize well to the test data. In this paper, we focus on @math attacks, and study the adversarially robust generalization problem through the lens of Rademacher complexity. For binary linear classifiers, we prove tight bounds for the adversarial Rademacher complexity, and show that the adversarial Rademacher complexity is never smaller than its natural counterpart, and it has an unavoidable dimension dependence, unless the weight vector has bounded @math norm. The results also extend to multi-class linear classifiers. For (nonlinear) neural networks, we show that the dimension dependence in the adversarial Rademacher complexity also exists. We further consider a surrogate adversarial loss for one-hidden layer ReLU network and prove margin bounds for this setting. Our results indicate that having @math norm constraints on the weight matrices might be a potential way to improve generalization in the adversarial setting. We demonstrate experimental results that validate our theoretical findings.", "Why are classifiers in high dimension vulnerable to \"adversarial\" perturbations? We show that it is likely not due to information theoretic limitations, but rather it could be due to computational constraints. First we prove that, for a broad set of classification tasks, the mere existence of a robust classifier implies that it can be found by a possibly exponential-time algorithm with relatively few training examples. Then we give a particular classification task where learning a robust classifier is computationally intractable. More precisely we construct a binary classification task in high dimensional space which is (i) information theoretically easy to learn robustly for large perturbations, (ii) efficiently learnable (non-robustly) by a simple linear separator, (iii) yet is not efficiently robustly learnable, even for small perturbations, by any algorithm in the statistical query (SQ) model. This example gives an exponential separation between classical learning and robust learning in the statistical query model. It suggests that adversarial examples may be an unavoidable byproduct of computational limitations of learning algorithms.", "Neural network robustness has recently been highlighted by the existence of adversarial examples. Many previous works show that the learned networks do not perform well on perturbed test data, and significantly more labeled data is required to achieve adversarially robust generalization. In this paper, we theoretically and empirically show that with just more unlabeled data, we can learn a model with better adversarially robust generalization. The key insight of our results is based on a risk decomposition theorem, in which the expected robust risk is separated into two parts: the stability part which measures the prediction stability in the presence of perturbations, and the accuracy part which evaluates the standard classification accuracy. 
As the stability part does not depend on any label information, we can optimize this part using unlabeled data. We further prove that for a specific Gaussian mixture problem illustrated by schmidt2018adversarially , adversarially robust generalization can be almost as easy as the standard generalization in supervised learning if a sufficiently large amount of unlabeled data is provided. Inspired by the theoretical findings, we propose a new algorithm called PASS by leveraging unlabeled data during adversarial training. We show that in the transductive and semi-supervised settings, PASS achieves higher robust accuracy and defense success rate on the Cifar-10 task.", "", "Despite achieving impressive performance, state-of-the-art classifiers remain highly vulnerable to small, imperceptible, adversarial perturbations. This vulnerability has proven empirically to be very intricate to address. In this paper, we study the phenomenon of adversarial perturbations under the assumption that the data is generated with a smooth generative model. We derive fundamental upper bounds on the robustness to perturbations of any classification function, and prove the existence of adversarial perturbations that transfer well across different classifiers with small risk. Our analysis of the robustness also provides insights onto key properties of generative models, such as their smoothness and dimensionality of latent space. We conclude with numerical experimental results showing that our bounds provide informative baselines to the maximal achievable robustness on several datasets.", "The existence of evasion attacks during the test phase of machine learning algorithms represents a significant challenge to both their deployment and understanding. These attacks can be carried out by adding imperceptible perturbations to inputs to generate adversarial examples and finding effective defenses and detectors has proven to be difficult. In this paper, we step away from the attack-defense arms race and seek to understand the limits of what can be learned in the presence of a test-time adversary. In particular, we extend the Probably Approximately Correct (PAC)-learning framework to account for the presence of an adversary. We first define corrupted hypothesis classes which arise from standard binary hypothesis classes in the presence of an evasion adversary and derive the Vapnik-Chervonenkis (VC)-dimension for these, denoted as the Adversarial VC-dimension. We then show that a corresponding Fundamental Theorem of Statistical learning can be proved for evasion adversaries, where the sample complexity is controlled by the Adversarial VC-dimension. We then explicitly derive the Adversarial VC-dimension for halfspace classifiers in the presence of a sample-wise norm-constrained adversary of the type commonly studied for evasion attacks and show that it is the same as the standard VC-dimensiont, closing an open question. Finally, we prove that the Adversarial VC-dimension can be either larger or smaller than the standard VC-dimension depending on the hypothesis class and adversary, making it an interesting object of study in its own right.", "In our recent work (Bubeck, Price, Razenshteyn, arXiv:1805.10204) we argued that adversarial examples in machine learning might be due to an inherent computational hardness of the problem. 
More precisely, we constructed a binary classification task for which (i) a robust classifier exists; yet no non-trivial accuracy can be obtained with an efficient algorithm in (ii) the statistical query model. In the present paper we significantly strengthen both (i) and (ii): we now construct a task which admits (i') a maximally robust classifier (that is it can tolerate perturbations of size comparable to the size of the examples themselves); and moreover we prove computational hardness of learning this task under (ii') a standard cryptographic assumption." ] }
1901.08235
2913894455
We propose a novel formulation for phase synchronization -- the statistical problem of jointly estimating alignment angles from noisy pairwise comparisons -- as a nonconvex optimization problem that enforces consistency among the pairwise comparisons in multiple frequency channels. Inspired by harmonic retrieval in signal processing, we develop a simple yet efficient two-stage algorithm that leverages the multi-frequency information. We demonstrate in theory and practice that the proposed algorithm significantly outperforms state-of-the-art phase synchronization algorithms, at a mild computational costs incurred by using the extra frequency channels. We also extend our algorithmic framework to general synchronization problems over compact Lie groups.
Directly solving this problem is NP-hard @cite_19 , but many convex and nonconvex methods have been proposed to find high-quality approximate solutions. These include spectral and semidefinite programming (SDP) relaxations @cite_20 @cite_0 @cite_1 @cite_21 @cite_27 . An alternative approach based on the generalized power method (GPM) has also been studied @cite_24 @cite_10 @cite_29 .
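To make the GPM approach mentioned above concrete, the following Python sketch alternates a matrix-vector product with an entrywise projection onto the unit circle. It is an illustrative implementation under the assumption that the measurement matrix C has entries C[i, j] close to exp(i(theta_i - theta_j)) plus noise; it is not the specific algorithm of any single cited paper.

```python
import numpy as np

def generalized_power_method(C, n_iters=100):
    """Minimal GPM sketch for phase synchronization.

    C is an (n, n) Hermitian matrix of noisy relative-phase measurements,
    C[i, j] ~ exp(1j * (theta_i - theta_j)) + noise.
    Returns unit-modulus estimates z[i] ~ exp(1j * theta_i), up to a global phase.
    """
    # Spectral initialization: leading eigenvector of C, projected to unit modulus.
    _, vecs = np.linalg.eigh(C)
    z = vecs[:, -1]
    z = z / np.maximum(np.abs(z), 1e-12)
    for _ in range(n_iters):
        w = C @ z                                 # aggregate evidence from all pairwise measurements
        z = w / np.maximum(np.abs(w), 1e-12)      # entrywise projection onto the unit circle
    return z
```

A spectral initialization of this kind is commonly used so that the iteration starts close to the global optimum, which is the regime the convergence analyses cited above focus on.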
{ "cite_N": [ "@cite_29", "@cite_21", "@cite_1", "@cite_0", "@cite_19", "@cite_27", "@cite_24", "@cite_10", "@cite_20" ], "mid": [ "2598300585", "2100685794", "2963381078", "", "2007104311", "2161367205", "2259617270", "2962970675", "2143703915" ], "abstract": [ "The problem of estimating the phases (angles) of a complex unit-modulus vector @math from their noisy pairwise relative measurements @math , where @math is a complex-valued Gaussian random matrix, is known as phase synchronization. The maximum likelihood estimator (MLE) is a solution to a unit--modulus-constrained quadratic programming problem, which is nonconvex. Existing works have proposed polynomial-time algorithms such as a semidefinite programming (SDP) relaxation or the generalized power method (GPM). Numerical experiments suggest that both of these methods succeed with high probability for @math up to @math , yet existing analyses only confirm this observation for @math up to @math . In this paper, we bridge the gap by proving that the SDP relaxation is tight for @math , and GPM converges to the global optimum under the same regime. Moreover, we establish a linear convergence rate for GPM, and derive a tight...", "The little Grothendieck problem consists of maximizing @math źijCijxixj for a positive semidefinite matrix C, over binary variables @math xiź ?1 . In this paper we focus on a natural generalization of this problem, the little Grothendieck problem over the orthogonal group. Given @math CźRdn?dn a positive semidefinite matrix, the objective is to maximize @math źijtrCijTOiOjT restricting @math Oi to take values in the group of orthogonal matrices @math Od, where @math Cij denotes the (ij)-th @math d?d block of C. We propose an approximation algorithm, which we refer to as Orthogonal-Cut, to solve the little Grothendieck problem over the group of orthogonal matrices @math Od and show a constant approximation ratio. Our method is based on semidefinite programming. For a given @math dź1, we show a constant approximation ratio of @math źR(d)2, where @math źR(d) is the expected average singular value of a @math d?d matrix with random Gaussian @math N0,1d i.i.d. entries. For @math d=1 we recover the known @math źR(1)2=2 ź approximation guarantee for the classical little Grothendieck problem. Our algorithm and analysis naturally extends to the complex valued case also providing a constant approximation ratio for the analogous little Grothendieck problem over the Unitary Group @math Ud. Orthogonal-Cut also serves as an approximation algorithm for several applications, including the Procrustes problem where it improves over the best previously known approximation ratio of @math 122. The little Grothendieck problem falls under the larger class of problems approximated by a recent algorithm proposed in the context of the non-commutative Grothendieck inequality. Nonetheless, our approach is simpler and provides better approximation with matching integrality gaps. Finally, we also provide an improved approximation algorithm for the more general little Grothendieck problem over the orthogonal (or unitary) group with rank constraints, recovering, when @math d=1, the sharp, known ratios.", "Consider @math points in @math and @math local coordinate systems that are related through unknown rigid transforms. For each point, we are given (possibly noisy) measurements of its local coordinates in some of the coordinate systems. Alternatively, for each coordinate system, we observe the coordinates of a subset of the points. 
The problem of estimating the global coordinates of the @math points (up to a rigid transform) from such measurements comes up in distributed approaches to molecular conformation and sensor network localization, and also in computer vision and graphics. The least-squares formulation of this problem, although nonconvex, has a well-known closed-form solution when M=2 (based on the singular value decomposition (SVD)). However, no closed-form solution is known for @math . In this paper, we demonstrate how the least-squares formulation can be relaxed into a convex program, namely, a semidefinite program (SDP). By setting up connections between the uniqueness of this SDP and results fr...", "", "In this paper we study the approximation algorithms for a class of discrete quadratic optimization problems in the Hermitian complex form. A special case of the problem that we study corresponds to the max-3-cut model used in a recent paper of Goemans and Williamson J. Comput. System Sci., 68 (2004), pp. 442-470]. We first develop a closed-form formula to compute the probability of a complex-valued normally distributed bivariate random vector to be in a given angular region. This formula allows us to compute the expected value of a randomized (with a specific rounding rule) solution based on the optimal solution of the complex semidefinite programming relaxation problem. In particular, we present an @math -approximation algorithm, and then study the limit of that model, in which the problem remains NP-hard. We show that if the objective is to maximize a positive semidefinite Hermitian form, then the randomization-rounding procedure guarantees a worst-case performance ratio of @math , which is better than the ratio of @math for its counterpart in the real case due to Nesterov. Furthermore, if the objective matrix is real-valued positive semidefinite with nonpositive off-diagonal elements, then the performance ratio improves to 0.9349.", "Maximum likelihood estimation problems are, in general, intractable optimization problems. As a result, it is common to approximate the maximum likelihood estimator (MLE) using convex relaxations. In some cases, the relaxation is tight: it recovers the true MLE. Most tightness proofs only apply to situations where the MLE exactly recovers a planted solution (known to the analyst). It is then sufficient to establish that the optimality conditions hold at the planted signal. In this paper, we study an estimation problem (angular synchronization) for which the MLE is not a simple function of the planted solution, yet for which the convex relaxation is tight. To establish tightness in this context, the proof is less direct because the point at which to verify optimality conditions is not known explicitly. Angular synchronization consists in estimating a collection of n phases, given noisy measurements of the pairwise relative phases. The MLE for angular synchronization is the solution of a (hard) non-bipartite Grothendieck problem over the complex numbers. We consider a stochastic model for the data: a planted signal (that is, a ground truth set of phases) is corrupted with non-adversarial random noise. Even though the MLE does not coincide with the planted signal, we show that the classical semidefinite relaxation for it is tight, with high probability. This holds even for high levels of noise.", "We estimate @math phases (angles) from noisy pairwise relative phase measurements. The task is modeled as a nonconvex least-squares optimization problem. 
It was recently shown that this problem can be solved in polynomial time via convex relaxation, under some conditions on the noise. In this paper, under similar but more restrictive conditions, we show that a modified version of the power method converges to the global optimum. This is simpler and (empirically) faster than convex approaches. Empirically, they both succeed in the same regime. Further analysis shows that, in the same noise regime as previously studied, second-order necessary optimality conditions for this quadratically constrained quadratic program are also sufficient, despite nonconvexity.", "An estimation problem of fundamental interest is that of phase (or angular) synchronization, in which the goal is to recover a collection of phases (or angles) using noisy measurements of relative phases (or angle offsets). It is known that in the Gaussian noise setting, the maximum likelihood estimator (MLE) is an optimal solution to a nonconvex quadratic optimization problem and can be found with high probability using semidefinite programming (SDP), provided that the noise power is not too large. In this paper, we study the estimation and convergence performance of a recently proposed low-complexity alternative to the SDP-based approach, namely, the generalized power method (GPM). Our contribution is twofold. First, we show that the sequence of estimation errors associated with the GPM iterates is bounded above by a decreasing sequence. As a corollary, we show that all iterates achieve an estimation error that is on the same order as that of an MLE. Our result holds under the least restrictive assumpti...", "The angular synchronization problem is to obtain an accurate estimation (up to a constant additive phase) for a set of unknown angles θ1,…,θn from m noisy measurements of their offsets θi−θjmod2π. Of particular interest is angle recovery in the presence of many outlier measurements that are uniformly distributed in [0,2π) and carry no information on the true offsets. We introduce an efficient recovery algorithm for the unknown angles from the top eigenvector of a specially designed Hermitian matrix. The eigenvector method is extremely stable and succeeds even when the number of outliers is exceedingly large. For example, we successfully estimate n=400 angles from a full set of m=(4002) offset measurements of which 90 are outliers in less than a second on a commercial laptop. The performance of the method is analyzed using random matrix theory and information theory. We discuss the relation of the synchronization problem to the combinatorial optimization problem Max-2-Lin mod L and present a semidefinite relaxation for angle recovery, drawing similarities with the Goemans–Williamson algorithm for finding the maximum cut in a weighted graph. We present extensions of the eigenvector method to other synchronization problems that involve different group structures and their applications, such as the time synchronization problem in distributed networks and the surface reconstruction problems in computer vision and optics." ] }
1901.08235
2913894455
We propose a novel formulation for phase synchronization -- the statistical problem of jointly estimating alignment angles from noisy pairwise comparisons -- as a nonconvex optimization problem that enforces consistency among the pairwise comparisons in multiple frequency channels. Inspired by harmonic retrieval in signal processing, we develop a simple yet efficient two-stage algorithm that leverages the multi-frequency information. We demonstrate in theory and practice that the proposed algorithm significantly outperforms state-of-the-art phase synchronization algorithms, at a mild computational costs incurred by using the extra frequency channels. We also extend our algorithmic framework to general synchronization problems over compact Lie groups.
@cite_6 proposed the Non-Unique Games (NUG) SDP optimization framework for synchronization over compact Lie groups. The SDP is based on quadratically lifting the irreducible representations of the group elements and imposing consistency among variables across frequency channels via a Fejér kernel; it is computationally expensive. @cite_11 introduced an iterative approximate message passing (AMP) algorithm for this noise model, assuming the noise is Gaussian and independent across frequency channels. Each AMP iteration performs a matrix-vector multiplication and an entrywise nonlinear transformation, followed by an extra Onsager correction term; the algorithm is conjectured to be asymptotically optimal.
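One plausible way to write down the multi-frequency measurement model that both of these methods operate on is sketched below; the exact noise scaling and distribution used by the cited works may differ, so this should be read as illustrative notation only.

```latex
% Hypothetical k-th frequency channel of a pairwise comparison:
% the clean part is the k-th harmonic of the phase offset theta_i - theta_j.
H^{(k)}_{ij} \;=\; e^{\,\mathrm{i}\, k\, (\theta_i - \theta_j)} \;+\; \sigma\, W^{(k)}_{ij},
\qquad k = 1, \dots, K,
```

with the noise terms W^{(k)} assumed independent across channels, as in the AMP analysis mentioned above.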
{ "cite_N": [ "@cite_6", "@cite_11" ], "mid": [ "2258798086", "2536490006" ], "abstract": [ "Let G be a compact group and let fij 2 L 2 (G). We dene the Non-Unique Games (NUG) problem as nding", "Various alignment problems arising in cryo-electron microscopy, community detection, time synchronization, computer vision, and other fields fall into a common framework of synchronization problems over compact groups such as Z L, U(1), or SO(3). The goal of such problems is to estimate an unknown vector of group elements given noisy relative observations. We present an efficient iterative algorithm to solve a large class of these problems, allowing for any compact group, with measurements on multiple 'frequency channels' (Fourier modes, or more generally, irreducible representations of the group). Our algorithm is a highly efficient iterative method following the blueprint of approximate message passing (AMP), which has recently arisen as a central technique for inference problems such as structured low-rank estimation and compressed sensing. We augment the standard ideas of AMP with ideas from representation theory so that the algorithm can work with distributions over compact groups. Using standard but non-rigorous methods from statistical physics we analyze the behavior of our algorithm on a Gaussian noise model, identifying phases where the problem is easy, (computationally) hard, and (statistically) impossible. In particular, such evidence predicts that our algorithm is information-theoretically optimal in many cases, and that the remaining cases show evidence of statistical-to-computational gaps." ] }
1901.08422
2914319805
This work addresses the challenges related to attacks on collaborative tagging systems, which often comes in a form of malicious annotations or profile injection attacks. In particular, we study various countermeasures against two types of such attacks for social tagging systems, the Overload attack and the Piggyback attack. The countermeasure schemes studied here include baseline classifiers such as, Naive Bayes filter and Support Vector Machine, as well as a Deep Learning approach. Our evaluation performed over synthetic spam data generated from del.icio.us dataset, shows that in most cases, Deep Learning can outperform the classical solutions, providing high-level protection against threats.
The issue of security in tag-based RS is not new, but it has so far been approached mainly through solutions associated with spam detection. As per @cite_8 , anti-spam approaches in social networks fall into three main categories: a) prevention-based approaches, such as CAPTCHAs, b) approaches which demote spam in search queries, and c) solutions which aim to detect and isolate any potentially threatening entities, such as a user or a resource. The focus of our work is on the last category only.
{ "cite_N": [ "@cite_8" ], "mid": [ "2127124926" ], "abstract": [ "In recent years, social Web sites have become important components of the Web. With their success, however, has come a growing influx of spam. If left unchecked, spam threatens to undermine resource sharing, interactivity, and openness. This article surveys three categories of potential countermeasures - those based on detection, demotion, and prevention. Although many of these countermeasures have been proposed before for email and Web spam, the authors find that their applicability to social Web sites differs." ] }
1901.08455
2913465187
Pre-training of models in pruning algorithms plays an important role in pruning decision-making. We find that excessive pre-training is not necessary for pruning algorithms. According to this idea, we propose a pruning algorithm---Incremental pruning based on less training (IPLT). Compared with the traditional pruning algorithm based on a large number of pre-training, IPLT has competitive compression effect than the traditional pruning algorithm under the same simple pruning strategy. On the premise of ensuring accuracy, IPLT can achieve 8x-9x compression for VGG-19 on CIFAR-10 and only needs to pre-train few epochs. For VGG-19 on CIFAR-10, we can not only achieve 10 times test acceleration, but also about 10 times training acceleration. At present, the research mainly focuses on the compression and acceleration in the application stage of the model, while the compression and acceleration in the training stage are few. We newly proposed a pruning algorithm that can compress and accelerate in the training stage. It is novel to consider the amount of pre-training required by pruning algorithm. Our results have implications: Too much pre-training may be not necessary for pruning algorithms.
Many researchers try to construct sparse convolution kernels by pruning the weights of the network, so as to reduce the storage space occupied by the model. As early as around 1990, both @cite_12 and @cite_31 pruned network parameters based on second-order derivative information, but this approach has a high computational complexity. In @cite_17 and @cite_33 , the authors regularize neural network parameters with a group Lasso penalty, leading to sparsity at the group level. In @cite_14 , the authors judge the importance of parameters according to their magnitude and then prune the unimportant ones. @cite_4 combines the method of @cite_14 with quantization and Huffman encoding to achieve maximum compression of CNNs. @cite_21 prunes unimportant neurons based on an analysis of their activations. To prevent over-pruning, @cite_10 proposed a parameter recovery mechanism. Pruning individual parameters yields a sparse model and reduces storage, but because the deployment of such pruned models always depends on specialized libraries, the computational savings are limited. Hence, in the past two years, many researchers have turned their attention to pruning filters.
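As a minimal sketch of the magnitude-based pruning idea attributed to @cite_14 above (the global threshold choice and the absence of a retraining loop are simplifying assumptions for illustration, not details from the cited papers):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Illustrative magnitude-based weight pruning.

    weights:  list of numpy arrays, one per layer.
    sparsity: fraction of weights to remove, chosen globally by absolute value.
    Returns the pruned weights and the binary masks that would typically be
    held fixed while the remaining weights are fine-tuned.
    """
    all_mags = np.concatenate([np.abs(w).ravel() for w in weights])
    threshold = np.quantile(all_mags, sparsity)          # smallest-magnitude weights fall below this
    masks = [(np.abs(w) > threshold).astype(w.dtype) for w in weights]
    pruned = [w * m for w, m in zip(weights, masks)]
    return pruned, masks
```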
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_33", "@cite_21", "@cite_31", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2963674932", "2119144962", "2460144244", "2495425901", "2114766824", "2963981420", "2125389748", "2963000224" ], "abstract": [ "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "In this paper, we address the challenging task of simultaneously optimizing (i) the weights of a neural network, (ii) the number of neurons for each hidden layer, and (iii) the subset of active input features (i.e., feature selection). While these problems are traditionally dealt with separately, we propose an efficient regularized formulation enabling their simultaneous parallel execution, using standard optimization routines. Specifically, we extend the group Lasso penalty, originally proposed in the linear regression literature, to impose group-level sparsity on the networks connections, where each group is defined as the set of outgoing weights from a unit. 
Depending on the specific case, the weights can be related to an input variable, to a hidden neuron, or to a bias unit, thus performing simultaneously all the aforementioned tasks in order to obtain a compact network. We carry out an extensive experimental evaluation, in comparison with classical weight decay and Lasso penalties, both on a toy dataset for handwritten digit recognition, and multiple realistic mid-scale classification benchmarks. Comparative results demonstrate the potential of our proposed sparse group Lasso penalty in producing extremely compact networks, with a significantly lower number of input features, with a classification accuracy which is equal or only slightly inferior to standard regularization terms.", "State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.", "We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.", "Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of 108x and 17.7x respectively, proving that it outperforms the recent pruning method by considerable margins. 
Code and some models are available at https: github.com yiwenguo Dynamic-Network-Surgery.", "We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and in some cases enable rule extraction. Our method, Optimal Brain Surgeon (OBS), is Significantly better than magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Solla, 1990], which often remove the wrong weights. OBS permits the pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H-1 from training data and structural information of the net. OBS permits a 90 , a 76 , and a 62 reduction in weights over backpropagation with weight decay on three benchmark MONK's problems [, 1991]. Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case. Finally, whereas Sejnowski and Rosenberg [1987] used 18,000 weights in their NETtalk network, we used OBS to prune a network to just 1560 weights, yielding better generalization.", "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1 × and 3.1 × speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improves the accuracy from 91.25 to 92.60 , which is still higher than that of original ResNet with 32 layers. For AlexNet, SSL reduces the error by 1 ." ] }
1901.08455
2913465187
Pre-training of models in pruning algorithms plays an important role in pruning decision-making. We find that excessive pre-training is not necessary for pruning algorithms. According to this idea, we propose a pruning algorithm---Incremental pruning based on less training (IPLT). Compared with the traditional pruning algorithm based on a large number of pre-training, IPLT has competitive compression effect than the traditional pruning algorithm under the same simple pruning strategy. On the premise of ensuring accuracy, IPLT can achieve 8x-9x compression for VGG-19 on CIFAR-10 and only needs to pre-train few epochs. For VGG-19 on CIFAR-10, we can not only achieve 10 times test acceleration, but also about 10 times training acceleration. At present, the research mainly focuses on the compression and acceleration in the application stage of the model, while the compression and acceleration in the training stage are few. We newly proposed a pruning algorithm that can compress and accelerate in the training stage. It is novel to consider the amount of pre-training required by pruning algorithm. Our results have implications: Too much pre-training may be not necessary for pruning algorithms.
In the past two years, there has been a great deal of work on filter pruning algorithms. Most papers use a particular criterion to evaluate filters and ultimately prune the unimportant ones. In 2017, @cite_25 tried to use @math to select unimportant filters. @cite_18 uses the scaling factor @math in batch normalization as an importance measure: the smaller @math is, the less important the corresponding channel, so its filters can be pruned. @cite_0 proposes a Taylor-expansion-based pruning criterion to approximate the change in the cost function induced by pruning. In addition to pruning filters through specific criteria, some researchers have also proposed new ideas. @cite_19 proposed utilizing a long short-term memory (LSTM) network to learn the hierarchical characteristics of a network and generate a pruning decision for each layer. @cite_20 proposed a model pruning technique that focuses on simplifying the computation graph of a deep convolutional neural network. In @cite_2 , the authors propose a Soft Filter Pruning (SFP) method to accelerate the inference procedure of deep convolutional neural networks.
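The criterion-based filter pruning described above can be illustrated with a small sketch that ranks the filters of one convolutional layer by their L1 norm; the tensor layout and the pruning ratio are assumptions made for illustration, not values taken from the cited papers.

```python
import numpy as np

def rank_filters_by_l1(conv_weight, prune_ratio=0.5):
    """Illustrative L1-norm filter selection for one layer.

    conv_weight: array of shape (out_channels, in_channels, kH, kW).
    Returns the indices of filters to keep and the indices to prune.
    """
    l1 = np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)
    order = np.argsort(l1)                      # ascending: smallest-norm filters first
    n_prune = int(prune_ratio * len(order))
    prune_idx, keep_idx = order[:n_prune], order[n_prune:]
    return keep_idx, prune_idx
```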
{ "cite_N": [ "@cite_18", "@cite_0", "@cite_19", "@cite_2", "@cite_25", "@cite_20" ], "mid": [ "2962851801", "2553910756", "2808217246", "2951977814", "", "2786054724" ], "abstract": [ "The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.", "", "Recent years have witnessed the great success of convolutional neural networks (CNNs) in many related fields. However, its huge model size and computation complexity bring in difficulty when deploying CNNs in some scenarios, like embedded system with low computation power. To address this issue, many works have been proposed to prune filters in CNNs to reduce computation. However, they mainly focus on seeking which filters are unimportant in a layer and then prune filters layer by layer or globally. In this paper, we argue that the pruning order is also very significant for model pruning. We propose a novel approach to figure out which layers should be pruned in each step. First, we utilize a long short-term memory (LSTM) to learn the hierarchical characteristics of a network and generate a pruning decision for each layer, which is the main difference from previous works. Next, a channel-based method is adopted to evaluate the importance of filters in a to-be-pruned layer, followed by an accelerated recovery step. Experimental results demonstrate that our approach is capable of reducing 70.1 FLOPs for VGG and 47.5 for Resnet-56 with comparable accuracy. Also, the learning results seem to reveal the sensitivity of each network layer.", "This paper proposed a Soft Filter Pruning (SFP) method to accelerate the inference procedure of deep Convolutional Neural Networks (CNNs). Specifically, the proposed SFP enables the pruned filters to be updated when training the model after pruning. SFP has two advantages over previous works: (1) Larger model capacity. Updating previously pruned filters provides our approach with larger optimization space than fixing the filters to zero. Therefore, the network trained by our method has a larger model capacity to learn from the training data. (2) Less dependence on the pre-trained model. Large capacity enables SFP to train from scratch and prune the model simultaneously. In contrast, previous filter pruning methods should be conducted on the basis of the pre-trained model to guarantee their performance. 
Empirically, SFP from scratch outperforms the previous filter pruning methods. Moreover, our approach has been demonstrated effective for many advanced CNN architectures. Notably, on ILSCRC-2012, SFP reduces more than 42 FLOPs on ResNet-101 with even 0.2 top-5 accuracy improvement, which has advanced the state-of-the-art. Code is publicly available on GitHub: this https URL", "", "Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions on resource-limited scenarios. A widely-used practice in relevant work assumes that a smaller-norm parameter or feature plays a less informative role at the inference time. In this paper, we propose a channel pruning technique for accelerating the computations of deep convolutional neural networks (CNNs), which does not critically rely on this assumption. Instead, it focuses on direct simplification of the channel-to-channel computation graph of a CNN without the need of performing a computational difficult and not always useful task of making high-dimensional tensors of CNN structured sparse. Our approach takes two stages: the first being to adopt an end-to-end stochastic training method that eventually forces the outputs of some channels being constant, and the second being to prune those constant channels from the original neural network by adjusting the biases of their impacting layers such that the resulting compact model can be quickly fine-tuned. Our approach is mathematically appealing from an optimization perspective and easy to reproduce. We experimented our approach through several image learning benchmarks and demonstrate its interesting aspects and the competitive performance." ] }
1901.08455
2913465187
Pre-training of models in pruning algorithms plays an important role in pruning decision-making. We find that excessive pre-training is not necessary for pruning algorithms. According to this idea, we propose a pruning algorithm---Incremental pruning based on less training (IPLT). Compared with the traditional pruning algorithm based on a large number of pre-training, IPLT has competitive compression effect than the traditional pruning algorithm under the same simple pruning strategy. On the premise of ensuring accuracy, IPLT can achieve 8x-9x compression for VGG-19 on CIFAR-10 and only needs to pre-train few epochs. For VGG-19 on CIFAR-10, we can not only achieve 10 times test acceleration, but also about 10 times training acceleration. At present, the research mainly focuses on the compression and acceleration in the application stage of the model, while the compression and acceleration in the training stage are few. We newly proposed a pruning algorithm that can compress and accelerate in the training stage. It is novel to consider the amount of pre-training required by pruning algorithm. Our results have implications: Too much pre-training may be not necessary for pruning algorithms.
In addition to the above papers, some researchers @cite_17 @cite_9 have proposed algorithms that can be used to prune both individual parameters and entire filters.
{ "cite_N": [ "@cite_9", "@cite_17" ], "mid": [ "566555209", "2963000224" ], "abstract": [ "We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers in ConvNets. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion. After such pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. We investigate different ways to add group-wise prunning to the learning process, and show that severalfold speedups of convolutional layers can be attained using group-sparsity regularizers. Our approach can adjust the shapes of the receptive fields in the convolutional layers, and even prune excessive feature maps from ConvNets, all in data-driven way.", "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1 × and 3.1 × speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improves the accuracy from 91.25 to 92.60 , which is still higher than that of original ResNet with 32 layers. For AlexNet, SSL reduces the error by 1 ." ] }
1901.08455
2913465187
Pre-training of models in pruning algorithms plays an important role in pruning decision-making. We find that excessive pre-training is not necessary for pruning algorithms. According to this idea, we propose a pruning algorithm---Incremental pruning based on less training (IPLT). Compared with the traditional pruning algorithm based on a large number of pre-training, IPLT has competitive compression effect than the traditional pruning algorithm under the same simple pruning strategy. On the premise of ensuring accuracy, IPLT can achieve 8x-9x compression for VGG-19 on CIFAR-10 and only needs to pre-train few epochs. For VGG-19 on CIFAR-10, we can not only achieve 10 times test acceleration, but also about 10 times training acceleration. At present, the research mainly focuses on the compression and acceleration in the application stage of the model, while the compression and acceleration in the training stage are few. We newly proposed a pruning algorithm that can compress and accelerate in the training stage. It is novel to consider the amount of pre-training required by pruning algorithm. Our results have implications: Too much pre-training may be not necessary for pruning algorithms.
There is prior work @cite_2 that tries to combine training with pruning; in that paper, models are pruned in a soft manner. The biggest difference from our approach is that IPLT actually removes some filters from the model, whereas @cite_2 only adds a mask to the parameters or filters, temporarily excluding them from the forward computation while these parameters are in fact still updated. @cite_2 therefore belongs to the second kind of pruning algorithm. IPLT can reduce the computational cost of the training stage, but @cite_2 cannot. Clearly, IPLT is different from @cite_2 , although in a way IPLT can be viewed as @cite_2 combined with the idea of pruning based on less training.
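The distinction drawn above between soft (mask-based) pruning and hard (actual removal) pruning can be illustrated with the following sketch; the tensor layout is an assumption, and the two functions are illustrative rather than the implementation of IPLT or of @cite_2 .

```python
import numpy as np

def soft_prune(conv_weight, prune_idx):
    """Soft pruning: zero out the selected filters via a mask but keep them in
    the tensor, so the layer shape (and training cost) is unchanged and the
    masked filters can still receive updates later."""
    mask = np.ones(conv_weight.shape[0], dtype=conv_weight.dtype)
    mask[prune_idx] = 0.0
    return conv_weight * mask[:, None, None, None]    # same shape, masked filters output zero

def hard_prune(conv_weight, keep_idx):
    """Hard pruning: actually drop the filters, shrinking the layer and the
    computation of every subsequent forward and backward pass."""
    return conv_weight[keep_idx]                       # shape (len(keep_idx), in_channels, kH, kW)
```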
{ "cite_N": [ "@cite_2" ], "mid": [ "2951977814" ], "abstract": [ "This paper proposed a Soft Filter Pruning (SFP) method to accelerate the inference procedure of deep Convolutional Neural Networks (CNNs). Specifically, the proposed SFP enables the pruned filters to be updated when training the model after pruning. SFP has two advantages over previous works: (1) Larger model capacity. Updating previously pruned filters provides our approach with larger optimization space than fixing the filters to zero. Therefore, the network trained by our method has a larger model capacity to learn from the training data. (2) Less dependence on the pre-trained model. Large capacity enables SFP to train from scratch and prune the model simultaneously. In contrast, previous filter pruning methods should be conducted on the basis of the pre-trained model to guarantee their performance. Empirically, SFP from scratch outperforms the previous filter pruning methods. Moreover, our approach has been demonstrated effective for many advanced CNN architectures. Notably, on ILSCRC-2012, SFP reduces more than 42 FLOPs on ResNet-101 with even 0.2 top-5 accuracy improvement, which has advanced the state-of-the-art. Code is publicly available on GitHub: this https URL" ] }
1901.08150
2911251106
Recently, graph neural networks have attracted great attention and achieved prominent performance in various research fields. Most of those algorithms have assumed pairwise relationships of objects of interest. However, in many real applications, the relationships between objects are in higher-order, beyond a pairwise formulation. To efficiently learn deep embeddings on the high-order graph-structured data, we introduce two end-to-end trainable operators to the family of graph neural networks, i.e., hypergraph convolution and hypergraph attention. Whilst hypergraph convolution defines the basic formulation of performing convolution on a hypergraph, hypergraph attention further enhances the capacity of representation learning by leveraging an attention module. With the two operators, a graph neural network is readily extended to a more flexible model and applied to diverse applications where non-pairwise relationships are observed. Extensive experimental results with semi-supervised node classification demonstrate the effectiveness of hypergraph convolution and hypergraph attention.
The Graph Neural Network (GNN), first proposed by @cite_13 , is a methodology for learning deep models or embeddings on graph-structured data. One key aspect of GNNs is the definition of the convolutional operator in the graph domain. @cite_56 first define convolution in the Fourier domain using the graph Laplacian matrix, which produces filters that are not spatially localized and can be computationally intensive. @cite_19 make the spectral filters spatially localized by using a parameterization with smooth coefficients. @cite_35 focus on the efficiency issue and use a Chebyshev expansion of the graph Laplacian to avoid an explicit use of the graph Fourier basis. @cite_22 further simplify the filtering by using only the first-order neighbors and propose the Graph Convolutional Network (GCN), which has demonstrated impressive efficiency and effectiveness on semi-supervised classification tasks.
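For reference, the first-order simplification credited to @cite_22 above is usually written as the layer-wise propagation rule below, where the adjacency matrix is augmented with self-loops and symmetrically normalized by the corresponding degree matrix.

```latex
% GCN-style first-order spectral filtering:
% \tilde{A} = A + I adds self-loops, \tilde{D} is the degree matrix of \tilde{A}.
H^{(l+1)} \;=\; \sigma\!\left( \tilde{D}^{-1/2}\, \tilde{A}\, \tilde{D}^{-1/2}\, H^{(l)}\, W^{(l)} \right)
```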
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_56", "@cite_19", "@cite_13" ], "mid": [ "2964321699", "2519887557", "1662382123", "637153065", "2116341502" ], "abstract": [ "In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.", "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.", "Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.", "Deep Learning's recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. 
In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.", "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G,n) isin IRm that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities." ] }
1901.08150
2911251106
Recently, graph neural networks have attracted great attention and achieved prominent performance in various research fields. Most of those algorithms have assumed pairwise relationships of objects of interest. However, in many real applications, the relationships between objects are in higher-order, beyond a pairwise formulation. To efficiently learn deep embeddings on the high-order graph-structured data, we introduce two end-to-end trainable operators to the family of graph neural networks, i.e., hypergraph convolution and hypergraph attention. Whilst hypergraph convolution defines the basic formulation of performing convolution on a hypergraph, hypergraph attention further enhances the capacity of representation learning by leveraging an attention module. With the two operators, a graph neural network is readily extended to a more flexible model and applied to diverse applications where non-pairwise relationships are observed. Extensive experimental results with semi-supervised node classification demonstrate the effectiveness of hypergraph convolution and hypergraph attention.
Meanwhile, some spatial algorithms directly perform convolution on the graph. For instance, @cite_14 learn different parameters for nodes with different degrees, then average the intermediate embeddings over the neighborhood structures. @cite_63 propose the PATCHY-SAN architecture, which selects a fixed-length sequence of nodes as the receptive field and generates local normalized neighborhood representations for each of the nodes in the sequence. @cite_39 demonstrate that diffusion-based representations can serve as an effective basis for node classification. @cite_2 further explore a joint usage of diffusion and adjacency bases in a dual graph convolutional network. @cite_10 define a unified framework via a message passing function, where each vertex sends messages based on its states and updates the states based on the messages of its immediate neighbors. @cite_15 propose GraphSAGE, which customizes three aggregating functions, i.e., element-wise mean, long short-term memory and pooling, to learn embeddings in an inductive setting.
{ "cite_N": [ "@cite_14", "@cite_39", "@cite_63", "@cite_2", "@cite_15", "@cite_10" ], "mid": [ "2964113829", "2963984147", "2964145825", "2788284887", "2962767366", "2952254971" ], "abstract": [ "We introduce a convolutional neural network that operates directly on graphs. These networks allow end-to-end learning of prediction pipelines whose inputs are graphs of arbitrary size and shape. The architecture we present generalizes standard molecular feature extraction methods based on circular fingerprints. We show that these data-driven features are more interpretable, and have better predictive performance on a variety of tasks.", "We present diffusion-convolutional neural networks (DCNNs), a new model for graph-structured data. Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have several attractive qualities, including a latent representation for graphical data that is invariant under isomorphism, as well as polynomial-time prediction and learning that can be represented as tensor operations and efficiently implemented on a GPU. Through several experiments with real structured datasets, we demonstrate that DCNNs are able to outperform probabilistic relational models and kernel-on-graph methods at relational node classification tasks.", "Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.", "The problem of extracting meaningful data through graph analysis spans a range of different fields, such as the internet, social networks, biological networks, and many others. The importance of being able to effectively mine and learn from such data continues to grow as more and more structured data become available. In this paper, we present a simple and scalable semi-supervised learning method for graph-structured data in which only a very small portion of the training data are labeled. To sufficiently embed the graph knowledge, our method performs graph convolution from different views of the raw data. In particular, a dual graph convolutional neural network method is devised to jointly consider the two essential assumptions of semi-supervised learning: (1) local consistency and (2) global consistency. Accordingly, two convolutional neural networks are devised to embed the local-consistency-based and global-consistency-based knowledge, respectively. Given the different data transformations from the two networks, we then introduce an unsupervised temporal loss function for the ensemble. In experiments using both unsupervised and supervised loss functions, our method outperforms state-of-the-art techniques on different datasets.", "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. 
However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.", "Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state of the art results on an important molecular property prediction benchmark; these results are strong enough that we believe future work should focus on datasets with larger molecules or more accurate ground truth labels." ] }
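As an illustrative aside, the related-work paragraph of this entry mentions GraphSAGE-style neighbourhood aggregation with mean, LSTM or pooling aggregators. Below is a minimal sketch of a single mean-aggregator layer; the adjacency format, ReLU non-linearity and L2 normalisation are assumptions, not the cited authors' code.

```python
# Illustrative sketch of a GraphSAGE-style layer with a mean aggregator.
import numpy as np

def sage_mean_layer(X, adj_list, W_self, W_neigh):
    """h_v = normalise(ReLU(x_v W_self + mean({x_u : u in N(v)}) W_neigh))."""
    N, d_in = X.shape
    H = np.zeros((N, W_self.shape[1]))
    for v in range(N):
        neigh = adj_list.get(v, [])
        # mean aggregator; LSTM or max-pooling aggregators are alternatives
        agg = X[neigh].mean(axis=0) if neigh else np.zeros(d_in)
        H[v] = X[v] @ W_self + agg @ W_neigh
    H = np.maximum(H, 0.0)                              # ReLU
    return H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-8)

# toy usage on a 4-node path graph
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
out = sage_mean_layer(X, adj, rng.normal(size=(3, 2)), rng.normal(size=(3, 2)))
print(out.shape)                                        # (4, 2)
```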
1901.08150
2911251106
Recently, graph neural networks have attracted great attention and achieved prominent performance in various research fields. Most of those algorithms have assumed pairwise relationships of objects of interest. However, in many real applications, the relationships between objects are in higher-order, beyond a pairwise formulation. To efficiently learn deep embeddings on the high-order graph-structured data, we introduce two end-to-end trainable operators to the family of graph neural networks, i.e., hypergraph convolution and hypergraph attention. Whilst hypergraph convolution defines the basic formulation of performing convolution on a hypergraph, hypergraph attention further enhances the capacity of representation learning by leveraging an attention module. With the two operators, a graph neural network is readily extended to a more flexible model and applied to diverse applications where non-pairwise relationships are observed. Extensive experimental results with semi-supervised node classification demonstrate the effectiveness of hypergraph convolution and hypergraph attention.
Moreover, some other works concentrate on gate mechanisms @cite_36 , skip connections @cite_64 , jumping connections @cite_4 , attention mechanisms @cite_23 , sampling strategies @cite_18 @cite_54 , hierarchical representations @cite_16 , generative models @cite_51 @cite_42 , adversarial attacks @cite_17 , and so on. As a thorough review is infeasible due to space limitations, we refer interested readers to surveys for more representative methods. For example, @cite_57 and @cite_1 present two systematic and comprehensive surveys over a series of variants of graph neural networks. @cite_25 provide a review of geometric deep learning. @cite_45 generalize and extend various approaches and show how graph neural networks can support relational reasoning and combinatorial generalization. @cite_40 particularly focus on attention models for graphs, and introduce three intuitive taxonomies. @cite_24 propose a unified framework called MoNet, which summarizes Geodesic CNN @cite_8 , Anisotropic CNN @cite_50 , GCN @cite_22 and Diffusion CNN @cite_39 as its special cases.
{ "cite_N": [ "@cite_64", "@cite_22", "@cite_36", "@cite_54", "@cite_42", "@cite_18", "@cite_4", "@cite_8", "@cite_39", "@cite_23", "@cite_17", "@cite_57", "@cite_40", "@cite_50", "@cite_16", "@cite_25", "@cite_1", "@cite_24", "@cite_45", "@cite_51" ], "mid": [ "2963920355", "2519887557", "2950898568", "2963581908", "2964271403", "2963695795", "2804057010", "2963021451", "2963984147", "2963858333", "2803678876", "2904900486", "2883803180", "2963425704", "2951659295", "2558748708", "2905224888", "2558460151", "2805516822", "2951101948" ], "abstract": [ "Relational learning deals with data that are characterized by relational structures. An important task is collective classification, which is to jointly classify networked objects. While it holds a great promise to produce a better accuracy than non-collective classifiers, collective classification is computationally challenging and has not leveraged on the recent breakthroughs of deep learning. We present Column Network (CLN), a novel deep learning model for collective classification in multi-relational domains. CLN has many desirable theoretical properties: (i) it encodes multi-relations between any two instances; (ii) it is deep and compact, allowing complex functions to be approximated at the network level with a small set of free parameters; (iii) local and relational features are learned simultaneously; (iv) long-range, higher-order dependencies between instances are supported naturally; and (v) crucially, learning and inference are efficient with linear complexity in the size of the network and the number of relations. We evaluate CLN on multiple real-world applications: (a) delay prediction in software projects, (b) PubMed Diabetes publication classification and (c) film genre classification. In all of these applications, CLN demonstrates a higher accuracy than state-of-the-art rivals.", "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.", "Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (, 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.", "", "", "The graph convolutional networks (GCN) recently proposed by Kipf and Welling are an effective graph model for semi-supervised learning. 
Such a model, however, is transductive in nature because parameters are learned through convolutions with both training and test data. Moreover, the recursive neighborhood expansion across layers poses time and memory challenges for training with large, dense graphs. To relax the requirement of simultaneous availability of test data, we interpret graph convolutions as integral transforms of embedding functions under probability measures. Such an interpretation allows for the use of Monte Carlo approaches to consistently estimate the integrals, which in turn leads to a batched training scheme as we propose in this work---FastGCN. Enhanced with importance sampling, FastGCN not only is efficient for training but also generalizes well for inference. We show a comprehensive set of experiments to demonstrate its effectiveness compared with GCN and related models. In particular, training is orders of magnitude more efficient while predictions remain comparably accurate.", "Recent deep learning approaches for representation learning on graphs follow a neighborhood aggregation procedure. We analyze some important properties of these models, and propose a strategy to overcome those. In particular, the range of \"neighboring\" nodes that a node's representation draws from strongly depends on the graph structure, analogous to the spread of a random walk. To adapt to local neighborhood properties and tasks, we explore an architecture -- jumping knowledge (JK) networks -- that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representation. In a number of experiments on social, bioinformatics and citation networks, we demonstrate that our model achieves state-of-the-art performance. Furthermore, combining the JK framework with models like Graph Convolutional Networks, GraphSAGE and Graph Attention Networks consistently improves those models' performance.", "Feature descriptors play a crucial role in a wide range of geometry analysis and processing applications, including shape correspondence, retrieval, and segmentation. In this paper, we introduce Geodesic Convolutional Neural Networks (GCNN), a generalization of the convolutional neural networks (CNN) paradigm to non-Euclidean manifolds. Our construction is based on a local geodesic system of polar coordinates to extract \"patches\", which are then passed through a cascade of filters and linear and non-linear operators. The coefficients of the filters and linear combination weights are optimization variables that are learned to minimize a task-specific cost function. We use ShapeNet to learn invariant shape features, allowing to achieve state-of-the-art performance in problems such as shape description, retrieval, and correspondence.", "We present diffusion-convolutional neural networks (DCNNs), a new model for graph-structured data. Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have several attractive qualities, including a latent representation for graphical data that is invariant under isomorphism, as well as polynomial-time prediction and learning that can be represented as tensor operations and efficiently implemented on a GPU. 
Through several experiments with real structured datasets, we demonstrate that DCNNs are able to outperform probabilistic relational models and kernel-on-graph methods at relational node classification tasks.", "We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).", "Deep learning on graph structures has shown exciting results in various applications. However, few attentions have been paid to the robustness of such models, in contrast to numerous research work for image or text adversarial attack and defense. In this paper, we focus on the adversarial attacks that fool the model by modifying the combinatorial structure of data. We first propose a reinforcement learning based attack method that learns the generalizable attack policy, while only requiring prediction labels from the target classifier. Also, variants of genetic algorithms and gradient methods are presented in the scenario where prediction confidence or gradients are available. We use both synthetic and real-world data to show that, a family of Graph Neural Network models are vulnerable to these attacks, in both graph-level and node-level classification tasks. We also show such attacks can be used to diagnose the learned classifiers.", "Deep learning has been shown successful in a number of domains, ranging from acoustics, images to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, a significant amount of research efforts have been devoted to this area, greatly advancing graph analyzing techniques. In this survey, we comprehensively review different kinds of deep learning methods applied to graphs. We divide existing methods into three main categories: semi-supervised methods including Graph Neural Networks and Graph Convolutional Networks, unsupervised methods including Graph Autoencoders, and recent advancements including Graph Recurrent Neural Networks and Graph Reinforcement Learning. We then provide a comprehensive overview of these methods in a systematic manner following their history of developments. We also analyze the differences of these methods and how to composite different architectures. Finally, we briefly outline their applications and discuss potential future directions.", "Graph-structured data arise naturally in many different application domains. By representing data as graphs, we can capture entities (i.e., nodes) as well as their relationships (i.e., edges) with each other. 
Many useful insights can be derived from graph-structured data as demonstrated by an ever-growing body of work focused on graph mining. However, in the real-world, graphs can be both large - with many complex patterns - and noisy which can pose a problem for effective graph mining. An effective way to deal with this issue is to incorporate \"attention\" into graph mining solutions. An attention mechanism allows a method to focus on task-relevant parts of the graph, helping it to make better decisions. In this work, we conduct a comprehensive and focused survey of the literature on the emerging field of graph attention models. We introduce three intuitive taxonomies to group existing work. These are based on problem setting (type of input and output), the type of attention mechanism used, and the task (e.g., graph classification, link prediction, etc.). We motivate our taxonomies through detailed examples and use each to survey competing approaches from a unique standpoint. Finally, we highlight several challenges in the area and discuss promising directions for future work.", "Convolutional neural networks have achieved extraordinary results in many computer vision and pattern recognition applications; however, their adoption in the computer graphics and geometry processing communities is limited due to the non-Euclidean structure of their data. In this paper, we propose Anisotropic Convolutional Neural Network (ACNN), a generalization of classical CNNs to non-Euclidean domains, where classical convolutions are replaced by projections over a set of oriented anisotropic diffusion kernels. We use ACNNs to effectively learn intrinsic dense correspondences between deformable shapes, a fundamental problem in geometry processing, arising in a wide variety of applications. We tested ACNNs performance in very challenging settings, achieving state-of-the-art results on some of the most difficult recent correspondence benchmarks.", "Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs---a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DiffPool, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DiffPool learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DiffPool yields an average improvement of 5-10 accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets.", "Many scientific fields study data with an underlying structure that is non-Euclidean. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. 
In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural-language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure and in cases where the invariances of these structures are built into networks used to model them.", "Lots of learning tasks require dealing with graph data which contains rich relation information among elements. Modeling physics system, learning molecular fingerprints, predicting protein interface, and classifying diseases require a model to learn from graph inputs. In other domains such as learning from non-structural data like texts and images, reasoning on extracted structures, like the dependency tree of sentences and the scene graph of images, is an important research topic which also needs graph reasoning models. Graph neural networks (GNNs) are connectionist models that capture the dependence of graphs via message passing between the nodes of graphs. Unlike standard neural networks, graph neural networks retain a state that can represent information from its neighborhood with arbitrary depth. Although the primitive GNNs have been found difficult to train for a fixed point, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful learning with them. In recent years, systems based on variants of graph neural networks such as graph convolutional network (GCN), graph attention network (GAT), gated graph neural network (GGNN) have demonstrated ground-breaking performance on many tasks mentioned above. In this survey, we provide a detailed review over existing graph neural network models, systematically categorize the applications, and propose four open problems for future research.", "Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most of deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, or computer graphics. In this paper, we propose a unified framework allowing to generalize CNN architectures to non-Euclidean domains (graphs and manifolds) and learn local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. 
We test the proposed method on standard tasks from the realms of image-, graph-and 3D shape analysis and show that it consistently outperforms previous approaches.", "Artificial intelligence (AI) has undergone a renaissance recently, making major progress in key domains such as vision, language, control, and decision-making. This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. However, many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches. In particular, generalizing beyond one's experiences--a hallmark of human intelligence from infancy--remains a formidable challenge for modern AI. The following is part position paper, part review, and part unification. We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between \"hand-engineering\" and \"end-to-end\" learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias--the graph network--which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. As a companion to this paper, we have released an open-source software library for building graph networks, with demonstrations of how to use them in practice.", "Modeling and generating graphs is fundamental for studying networks in biology, engineering, and social sciences. However, modeling complex distributions over graphs and then efficiently sampling from these distributions is challenging due to the non-unique, high-dimensional nature of graphs and the complex, non-local dependencies that exist between edges in a given graph. Here we propose GraphRNN, a deep autoregressive model that addresses the above challenges and approximates any distribution of graphs with minimal assumptions about their structure. GraphRNN learns to generate graphs by training on a representative set of graphs and decomposes the graph generation process into a sequence of node and edge formations, conditioned on the graph structure generated so far. In order to quantitatively evaluate the performance of GraphRNN, we introduce a benchmark suite of datasets, baselines and novel evaluation metrics based on Maximum Mean Discrepancy, which measure distances between sets of graphs. Our experiments show that GraphRNN significantly outperforms all baselines, learning to generate diverse graphs that match the structural characteristics of a target set, while also scaling to graphs 50 times larger than previous deep models." ] }
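As an illustrative aside, among the directions listed in this entry's related-work paragraph, the graph attention mechanism is easy to make concrete. The sketch below computes masked, softmax-normalised attention coefficients for a single head over each node's neighbourhood; the LeakyReLU slope, shapes and toy graph are assumptions.

```python
# Illustrative sketch of a single graph-attention head.
# Assumes a dense 0/1 adjacency A that already contains self-loops.
import numpy as np

def gat_attention_layer(X, A, W, a):
    """e_ij = LeakyReLU(a^T [W x_i || W x_j]) for neighbours j of i,
    alpha_ij = softmax_j(e_ij), output_i = sum_j alpha_ij * (W x_j)."""
    H = X @ W                                    # (N, d_out) projected features
    N = H.shape[0]
    scores = np.full((N, N), -np.inf)            # -inf masks non-neighbours
    for i in range(N):
        for j in range(N):
            if A[i, j]:
                e = float(a @ np.concatenate([H[i], H[j]]))
                scores[i, j] = e if e > 0 else 0.2 * e       # LeakyReLU
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)         # masked softmax
    return alpha @ H                             # attention-weighted aggregation

# toy usage: 3 fully connected nodes with self-loops
rng = np.random.default_rng(0)
out = gat_attention_layer(np.eye(3), np.ones((3, 3)),
                          rng.normal(size=(3, 4)), rng.normal(size=8))
print(out.shape)                                 # (3, 4)
```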
1901.08150
2911251106
Recently, graph neural networks have attracted great attention and achieved prominent performance in various research fields. Most of those algorithms have assumed pairwise relationships of objects of interest. However, in many real applications, the relationships between objects are in higher-order, beyond a pairwise formulation. To efficiently learn deep embeddings on the high-order graph-structured data, we introduce two end-to-end trainable operators to the family of graph neural networks, i.e., hypergraph convolution and hypergraph attention. Whilst hypergraph convolution defines the basic formulation of performing convolution on a hypergraph, hypergraph attention further enhances the capacity of representation learning by leveraging an attention module. With the two operators, a graph neural network is readily extended to a more flexible model and applied to diverse applications where non-pairwise relationships are observed. Extensive experimental results with semi-supervised node classification demonstrate the effectiveness of hypergraph convolution and hypergraph attention.
As analyzed above, most existing variants of GNN assume pairwise relationships between objects, while our work operates on a high-order hypergraph @cite_33 @cite_6 where the between-object relationships are beyond pairwise. Hypergraph learning methods differ in the structure of the hypergraph, e.g., clique expansion and star expansion @cite_27 , and in the definition of hypergraph Laplacians @cite_46 @cite_26 @cite_12 . Following @cite_35 , @cite_3 propose a hypergraph neural network using a Chebyshev expansion of the graph Laplacian. By analyzing the incidence structure of a hypergraph, our work directly defines two differentiable operators, i.e., hypergraph convolution and hypergraph attention, which are intuitive and flexible in learning more discriminative deep embeddings.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_33", "@cite_6", "@cite_3", "@cite_27", "@cite_46", "@cite_12" ], "mid": [ "2964321699", "2086109639", "", "2962935106", "2892880750", "2148070710", "2034204728", "2170057991" ], "abstract": [ "In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.", "We use the generalization of the Laplacian matrix to hypergraphs to obtain several spectral-like results on hypergraphs. For instance, we obtain upper bounds on the eccentricity and the excess of any vertex of hypergraphs. We extend to the case of hypergraphs the concepts of walk regularity and spectral regularity, showing that all walk-regular hypergraphs are spectrally-regular. Finally, we obtain an upper bound on the mean distance of walk-regular hypergraphs that involves all the Laplacian spectrum.", "", "Hypergraph partitioning is an important problem in machine learning, computer vision and network analytics. A widely used method for hypergraph partitioning relies on minimizing a normalized sum of the costs of partitioning hyperedges across clusters. Algorithmic solutions based on this approach assume that different partitions of a hyperedge incur the same cost. However, this assumption fails to leverage the fact that different subsets of vertices within the same hyperedge may have different structural importance. We hence propose a new hypergraph clustering technique, termed inhomogeneous hypergraph partitioning, which assigns different costs to different hyperedge cuts. We prove that inhomogeneous partitioning produces a quadratic approximation to the optimal solution if the inhomogeneous costs satisfy submodularity constraints. Moreover, we demonstrate that inhomogenous partitioning offers significant performance improvements in applications such as structure learning of rankings, subspace segmentation and motif clustering.", "In this paper, we present a hypergraph neural networks (HGNN) framework for data representation learning, which can encode high-order data correlation in a hypergraph structure. Confronting the challenges of learning representation for complex data in real practice, we propose to incorporate such data structure in a hypergraph, which is more flexible on data modeling, especially when dealing with complex data. In this method, a hyperedge convolution operation is designed to handle the data correlation during representation learning. In this way, traditional hypergraph learning procedure can be conducted using hyperedge convolution operations efficiently. HGNN is able to learn the hidden layer representation considering the high-order data structure, which is a general framework considering the complex data correlations. 
We have conducted experiments on citation network classification and visual object recognition tasks and compared HGNN with graph convolutional networks and other traditional methods. Experimental results demonstrate that the proposed HGNN method outperforms recent state-of-theart methods. We can also reveal from the results that the proposed HGNN is superior when dealing with multi-modal data compared with existing methods.", "This paper presents a new spectral partitioning formulation which directly incorporates vertex size information by modifying the Laplacian of the graph. Modifying the Laplacian produces a generalized eigenvalue problem, which is reduced to the standard eigenvalue problem. Experiments show that the scaled ratio-cut costs of results on benchmarks with arbitrary vertex size improve by 22 when the eigenvectors of the Laplacian in the spectral partitioner KP are replaced by the eigenvectors of our modified Laplacian. The inability to handle vertex sizes in the spectral partitioning formulation has been a limitation in applying spectral partitioning in a multilevel setting. We investigate whether our new formulation effectively removes this limitation by combining it with a simple multilevel bottom-up clustering algorithm and an iterative improvement algorithm for partition refinement. Experiments show that in a multilevel setting where the spectral partitioner KP provides the initial partitions of the most contracted graph, using the modified Laplacian in place of the standard Laplacian is more efficient and more effective in the partitioning of graphs with arbitrary-size and unit-size vertices; average improvements of 17 and 18 are observed for graphs with arbitrary-size and unit-size vertices, respectively. Comparisons with other ratio-cut based partitioners on hypergraphs with unit-size as well as arbitrary-size vertices, show that the multilevel spectral partitioner produces either better results or almost identical results more efficiently.", "Abstract We would like to classify the vertices of a hypergraph in the way that ‘similar’ vertices (those having many incident edges in common) belong to the same cluster. The problem is formulated as follows: given a connected hypergraph on n vertices and fixing the integer k (1 k ⩽ n ), we are looking for k -partition of the set of vertices such that the edges of the corresponding cut-set be as few as possible. We introduce some combinatorial measures characterizing this structural property and give upper and lower bounds for them by means of the k smallest eigenvalues of the hypergraph. For this purpose the notion of spectra of hypergraphs — which is the generalization of C -spectra of graphs — is also introduced together with k-dimensional Euclidean representations . We shall that the existence of k 'small' eigenvalues is a necessary but not sufficient condition for the existence of a good clustering. In addition the representatives of the vertices in an optimal k -dimensional Euclidean representation of the hypergraph should be well separated by means of their Euclidean distances. In this case the k -partition giving the optimal clustering is also obtained by this classification method.", "We usually endow the investigated objects with pairwise relationships, which can be illustrated as graphs. In many real-world problems, however, relationships among the objects of our interest are more complex than pair-wise. 
Naively squeezing the complex relationships into pairwise ones will inevitably lead to loss of information which can be expected valuable for our learning tasks however. Therefore we consider using hypergraphs instead to completely represent complex relationships among the objects of our interest, and thus the problem of learning with hypergraphs arises. Our main contribution in this paper is to generalize the powerful methodology of spectral clustering which originally operates on undirected graphs to hypergraphs, and further develop algorithms for hypergraph embedding and transductive classification on the basis of the spectral hypergraph clustering approach. Our experiments on a number of benchmarks showed the advantages of hypergraphs over usual graphs." ] }
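As an illustrative aside, the paragraph above contrasts pairwise graph operators with incidence-based hypergraph operators. The sketch below shows one common degree-normalised, incidence-matrix formulation of hypergraph convolution (vertex-to-hyperedge-to-vertex propagation); whether it matches the authors' exact operator is not stated in this excerpt, and all names are illustrative.

```python
# Illustrative sketch of an incidence-matrix hypergraph convolution.
import numpy as np

def hypergraph_conv(X, H, w, Theta):
    """X: (N, d_in) node features; H: (N, E) incidence matrix; w: (E,)
    hyperedge weights; Theta: (d_in, d_out) learnable projection.
    Propagation goes vertex -> hyperedge -> vertex with degree normalisation."""
    d_v = H @ w                                  # weighted vertex degrees
    d_e = H.sum(axis=0)                          # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(d_v))
    P = Dv_inv_sqrt @ H @ np.diag(w) @ np.diag(1.0 / d_e) @ H.T @ Dv_inv_sqrt
    return np.maximum(P @ X @ Theta, 0.0)        # ReLU

# toy usage: 4 vertices, hyperedges {0, 1, 2} and {2, 3}
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
Theta = np.random.default_rng(0).normal(size=(4, 2))
print(hypergraph_conv(np.eye(4), H, np.ones(2), Theta).shape)   # (4, 2)
```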
1901.08128
2915044648
Vision-based deep reinforcement learning (RL) typically obtains performance benefit by using high capacity and relatively large convolutional neural networks (CNN). However, a large network leads to higher inference costs (power, latency, silicon area, MAC count). Many inference optimizations have been developed for CNNs. Some optimization techniques offer theoretical efficiency, such as sparsity, but designing actual hardware to support them is difficult. On the other hand, distillation is a simple general-purpose optimization technique which is broadly applicable for transferring knowledge from a trained, high capacity teacher network to an untrained, low capacity student network. DQN distillation extended the original distillation idea to transfer information stored in a high performance, high capacity teacher Q-function trained via the Deep Q-Learning (DQN) algorithm. Our work adapts the DQN distillation work to the actor-critic Proximal Policy Optimization algorithm. PPO is simple to implement and has much higher performance than the seminal DQN algorithm. We show that a distilled PPO student can attain far higher performance compared to a DQN teacher. We also show that a low capacity distilled student is generally able to outperform a low capacity agent that directly trains in the environment. Finally, we show that distillation, followed by "fine-tuning" in the environment, enables the distilled PPO student to achieve parity with teacher performance. In general, the lessons learned in this work should transfer to other modern actor-critic RL algorithms.
Distillation was proposed in @cite_2 as a method to transfer knowledge from a trained teacher classifier neural network into an untrained student network. There are various techniques to implement distillation, and here we review the version most relevant to RL. Initially, assume a high-capacity teacher classifier network has been trained to high performance, and a smaller network is to be trained with distillation. Additionally, assume access to the training inputs @math used for teacher training, but no access to the class @math training labels @math . In this case, we may derive a loss function for the student network by providing training inputs @math to the teacher network and using its class probability distribution @math as a soft target for the student network's output probability distribution @math , where the student is parameterized by @math . The student's loss is defined as the distance between the distributions @math and @math and may be measured using a standard metric such as the Kullback-Leibler divergence, where @math and @math represent the probability for class @math given input @math . The gradient of @math may then be taken with respect to the student's parameters, which are updated using gradient descent.
{ "cite_N": [ "@cite_2" ], "mid": [ "1821462560" ], "abstract": [ "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel." ] }
1901.08128
2915044648
Vision-based deep reinforcement learning (RL) typically obtains performance benefit by using high capacity and relatively large convolutional neural networks (CNN). However, a large network leads to higher inference costs (power, latency, silicon area, MAC count). Many inference optimizations have been developed for CNNs. Some optimization techniques offer theoretical efficiency, such as sparsity, but designing actual hardware to support them is difficult. On the other hand, distillation is a simple general-purpose optimization technique which is broadly applicable for transferring knowledge from a trained, high capacity teacher network to an untrained, low capacity student network. DQN distillation extended the original distillation idea to transfer information stored in a high performance, high capacity teacher Q-function trained via the Deep Q-Learning (DQN) algorithm. Our work adapts the DQN distillation work to the actor-critic Proximal Policy Optimization algorithm. PPO is simple to implement and has much higher performance than the seminal DQN algorithm. We show that a distilled PPO student can attain far higher performance compared to a DQN teacher. We also show that a low capacity distilled student is generally able to outperform a low capacity agent that directly trains in the environment. Finally, we show that distillation, followed by "fine-tuning" in the environment, enables the distilled PPO student to achieve parity with teacher performance. In general, the lessons learned in this work should transfer to other modern actor-critic RL algorithms.
Distillation has also proven to be useful for neuromorphic hardware design. For example, the benefits of better sample efficiency and higher student performance through distillation were combined in @cite_7 for efficient RL policy development. In this work, a high-capacity policy trained with Double DQN, and represented by a standard convolutional neural network, was distilled into a student policy represented by a low-precision spiking neural network to be executed on IBM's TrueNorth architecture. As TrueNorth has special restrictions, e.g., binary activations and ternary weights, it does not use a standard SGD algorithm. Instead, TrueNorth uses the Energy-Efficient Deep Networks algorithm @cite_8 to train a student to match a teacher's Q-values. Importantly, @cite_7 demonstrates the viability of training a teacher policy once, using one type of algorithm, and distilling that policy into an arbitrary number of student policies, using the best training algorithm for each respective student.
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "2893221988", "2314470091" ], "abstract": [ "Low precision networks in the reinforcement learning (RL) setting are relatively unexplored because of the limitations of binary activations for function approximation. Here, in the discrete action ATARI domain, we demonstrate, for the first time, that low precision policy distillation from a high precision network provides a principled, practical way to train an RL agent. As an application, on 10 different ATARI games, we demonstrate real-time end-to-end game playing on low-power neuromorphic hardware by converting a sequence of game frames into discrete actions.", "Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames s and using between 25 and 275 mW (effectively >6,000 frames s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer." ] }
1901.08360
2913881544
State-of-the-art neural networks are vulnerable to adversarial examples; they can easily misclassify inputs that are imperceptibly different than their training and test data. In this work, we establish that the use of cross-entropy loss function and the low-rank features of the training data have responsibility for the existence of these inputs. Based on this observation, we suggest that addressing adversarial examples requires rethinking the use of cross-entropy loss function and looking for an alternative that is more suited for minimization with low-rank features. In this direction, we present a training scheme called differential training, which uses a loss function defined on the differences between the features of points from opposite classes. We show that differential training can ensure a large margin between the decision boundary of the neural network and the points in the training dataset. This larger margin increases the amount of perturbation needed to flip the prediction of the classifier and makes it harder to find an adversarial example with small perturbations. We test differential training on a binary classification task with CIFAR-10 dataset and demonstrate that it radically reduces the ratio of images for which an adversarial example could be found -- not only in the training dataset, but in the test dataset as well.
Differential training uses the differences between the features of the training points from opposite classes. This training scheme has been intentionally introduced to improve the dynamics of the gradient descent algorithm on the training cost function; since the choice of cost function is critical, we treat it in the sequel as the use of an alternative cost function. However, the procedure could also be considered as using an identical pair of networks in the network architecture, which is closely related to Siamese Networks @cite_12 @cite_3 . These networks were previously shown to perform well if limited data were available from any of the classes in a classification task @cite_14 . Our work shows that this architecture can also provide a large margin between the decision boundary of the classifier and the training points, and consequently be more robust to adversarial examples, if the network is trained with the cost function we suggest in Section .
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_12" ], "mid": [ "", "2157364932", "2171590421" ], "abstract": [ "", "We present a method for training a similarity metric from data. The method can be used for recognition or verification applications where the number of categories is very large and not known during training, and where the number of training samples for a single category is very small. The idea is to learn a function that maps input patterns into a target space such that the L sub 1 norm in the target space approximates the \"semantic\" distance in the input space. The method is applied to a face verification task. The learning process minimizes a discriminative loss function that drives the similarity metric to be small for pairs of faces from the same person, and large for pairs from different persons. The mapping from raw to the target space is a convolutional network whose architecture is designed for robustness to geometric distortions. The system is tested on the Purdue AR face database which has a very high degree of variability in the pose, lighting, expression, position, and artificial occlusions such as dark glasses and obscuring scarves.", "This paper describes the development of an algorithm for verification of signatures written on a touch-sensitive pad. The signature verification algorithm is based on an artificial neural network. The novel network presented here, called a “Siamese” time delay neural network, consists of two identical networks joined at their output. During training the network learns to measure the similarity between pairs of signatures. When used for verification, only one half of the Siamese network is evaluated. The output of this half network is the feature vector for the input signature. Verification consists of comparing this feature vector with a stored feature vector for the signer. Signatures closer than a chosen threshold to this stored representation are accepted, all other signatures are rejected as forgeries. System performance is illustrated with experiments performed in the laboratory." ] }
1901.08215
2914757193
A popular asynchronous protocol for decentralized optimization is randomized gossip where a pair of neighbors concurrently update via pairwise averaging. In practice, this creates deadlocks and is vulnerable to information delays. It can also be problematic if a node is unable to respond or has only access to its private-preserved local dataset. To address these issues simultaneously, this paper proposes an asynchronous decentralized algorithm, i.e. APPG, with directed communication where each node updates asynchronously and independently of any other node. If local functions are strongly-convex with Lipschitz-continuous gradients, each node of APPG converges to the same optimal solution at a rate of @math , where @math and the virtual counter @math increases by 1 no matter which node updates. The superior performance of APPG is validated on a logistic regression problem against state-of-the-art methods in terms of linear speedup and system implementations.
Parallel optimization using a master-slave architecture has been widely adopted to train models @cite_5 . In this architecture, each slave pulls the shared parameters from the master, computes its own gradient or stochastic gradient, and then pushes the gradient back to the master, where gradients from all slaves are aggregated to update parameters. This process can be either synchronous @cite_15 or asynchronous @cite_22 @cite_1 . However, there are two main drawbacks: 1) the bandwidth bottleneck limits its scalability to large-scale networks, and 2) the system stops working if the master breaks down.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_1", "@cite_22" ], "mid": [ "2130062883", "2164278908", "2626580042", "2949585412" ], "abstract": [ "Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the distributed mini-batch algorithm, a method of converting many serial gradient-based online prediction algorithms into distributed algorithms. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our analysis explicitly takes into account communication latencies between nodes in the distributed environment. We show how our method can be used to solve the closely-related distributed stochastic optimization problem, achieving an asymptotically linear speed-up over multiple processors. Finally, we demonstrate the merits of our approach on a web-scale online prediction problem.", "Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.", "With the fast development of deep learning, it has become common to learn big neural networks using massive training data. Asynchronous Stochastic Gradient Descent (ASGD) is widely adopted to fulfill this task for its efficiency, which is, however, known to suffer from the problem of delayed gradients. That is, when a local worker adds its gradient to the global model, the global model may have been updated by other workers and this gradient becomes \"delayed\". We propose a novel technology to compensate this delay, so as to make the optimization behavior of ASGD closer to that of sequential SGD. This is achieved by leveraging Taylor expansion of the gradient function and efficient approximation to the Hessian matrix of the loss function. We call the new algorithm Delay Compensated ASGD (DC-ASGD). 
We evaluated the proposed algorithm on CIFAR-10 and ImageNet datasets, and the experimental results demonstrate that DC-ASGD outperforms both synchronous SGD and asynchronous SGD, and nearly approaches the performance of sequential SGD.", "Mini-batch optimization has proven to be a powerful paradigm for large-scale learning. However, the state of the art parallel mini-batch algorithms assume synchronous operation or cyclic update orders. When worker nodes are heterogeneous (due to different computational capabilities or different communication delays), synchronous and cyclic operations are inefficient since they will leave workers idle waiting for the slower nodes to complete their computations. In this paper, we propose an asynchronous mini-batch algorithm for regularized stochastic optimization problems with smooth loss functions that eliminates idle waiting and allows workers to run at their maximal update rates. We show that by suitably choosing the step-size values, the algorithm achieves a rate of the order @math for general convex regularization functions, and the rate @math for strongly convex regularization functions, where @math is the number of iterations. In both cases, the impact of asynchrony on the convergence rate of our algorithm is asymptotically negligible, and a near-linear speedup in the number of workers can be expected. Theoretical results are confirmed in real implementations on a distributed computing infrastructure." ] }
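As an illustrative aside, the paragraph above describes the master-slave (parameter-server) pattern in which slaves push gradients and the master aggregates them. The sketch below runs one synchronous round per iteration on a toy quadratic problem; the learning rate, local losses and worker count are assumptions.

```python
# Illustrative sketch of one synchronous master-slave (parameter-server) round.
import numpy as np

def synchronous_master_step(params, worker_grads, lr=0.1):
    """Master averages the gradients pushed by all slaves and applies one update."""
    return params - lr * np.mean(worker_grads, axis=0)

# toy usage: 3 workers, local losses f_i(x) = 0.5 * ||x - c_i||^2
c = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
x = np.zeros(2)
for _ in range(100):
    grads = [x - ci for ci in c]             # each slave's local gradient
    x = synchronous_master_step(x, grads)
print(x)                                     # approaches c.mean(axis=0), the global minimiser
```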
1901.08215
2914757193
A popular asynchronous protocol for decentralized optimization is randomized gossip where a pair of neighbors concurrently update via pairwise averaging. In practice, this creates deadlocks and is vulnerable to information delays. It can also be problematic if a node is unable to respond or has only access to its privately preserved local dataset. To address these issues simultaneously, this paper proposes an asynchronous decentralized algorithm, i.e., APPG, with directed communication where each node updates asynchronously and independently of any other node. If local functions are strongly-convex with Lipschitz-continuous gradients, each node of APPG converges to the same optimal solution at a rate of @math , where @math and the virtual counter @math increases by 1 no matter which node updates. The superior performance of APPG is validated on a logistic regression problem against state-of-the-art methods in terms of linear speedup and system implementations.
To overcome these issues, parallel optimization over a peer-to-peer network is an attractive alternative that allows each node to talk with only a subset of nodes. Under a connected graph, each node maintains a local copy of the training model and updates it by using its own gradient or stochastic gradient and the information received from its neighbors, after which the updated model is sent to the neighbors. In general, each node talks with only a small number of neighbors even in a large-scale network, which makes this approach very scalable and robust. It has also been widely studied in the control community, see e.g. for a comprehensive review, and novel algorithms such as DGD @cite_6 , DDA @cite_8 and EXTRA @cite_9 have been developed. Recently, faster convergence has been demonstrated in the decentralized training of machine learning models with stochastic gradients, such as D-PSGD @cite_13 , MSDA , MSPD @cite_21 , COLA @cite_10 and D @math @cite_20 , which are proposed only for networks.
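The neighbor-averaging-plus-local-gradient update described above can be made concrete with a small simulation. The following is a minimal sketch of a synchronous DGD-style iteration on a toy least-squares problem; the ring topology, the doubly stochastic mixing matrix, the step size, and the synthetic data are illustrative assumptions and are not taken from any of the cited papers.

```python
# Minimal sketch of decentralized gradient descent over a fixed undirected ring.
# Assumptions (not from the cited papers): synthetic least-squares losses,
# hand-picked doubly stochastic mixing matrix, constant step size.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 3

# Node i privately holds (A_i, b_i) and the local loss 0.5 * ||A_i x - b_i||^2.
A = [rng.normal(size=(10, dim)) for _ in range(n_nodes)]
b = [rng.normal(size=10) for _ in range(n_nodes)]

# Doubly stochastic mixing matrix for a 4-node ring.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

x = np.zeros((n_nodes, dim))  # one local model copy per node
step = 0.02
for _ in range(500):
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_nodes)])
    # Each node averages its neighbors' copies and takes a local gradient step.
    x = W @ x - step * grads

print("max disagreement across nodes:", np.max(np.abs(x - x.mean(axis=0))))
```

With a doubly stochastic mixing matrix and a diminishing step size, the local copies reach consensus on a minimizer of the sum of the local losses; the constant step used in this sketch only drives them to a neighborhood of it.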
{ "cite_N": [ "@cite_8", "@cite_10", "@cite_9", "@cite_21", "@cite_6", "@cite_13", "@cite_20" ], "mid": [ "2120293976", "2892209803", "1571416372", "2962756315", "2044212084", "2963228337", "2963843010" ], "abstract": [ "The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. We develop and analyze distributed algorithms based on dual averaging of subgradients, and provide sharp bounds on their convergence rates as a function of the network size and topology. Our analysis clearly separates the convergence of the optimization algorithm itself from the effects of communication constraints arising from the network structure. We show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network. The sharpness of this prediction is confirmed both by theoretical lower bounds and simulations for various networks.", "Decentralized machine learning is a promising emerging technique in view of global challenges of data ownership and privacy. We consider learning of linear classification and regression models, in the setting where the training data is decentralized over many user devices, and the learning algorithm must run on-device, on an arbitrary communication network, without a central coordinator. We propose COLA, a new decentralized training algorithm with strong theoretical guarantees and superior practical performance. Our scheme overcomes many limitations of existing methods in the distributed setting, and achieves communication efficiency, scalability, as well as elasticity and resilience to changes in user's data and participating devices.", "Recently, there has been growing interest in solving consensus optimization problems in a multiagent network. In this paper, we develop a decentralized algorithm for the consensus optimization problem @math which is defined over a connected network of @math agents, where each function @math is held privately by agent @math and encodes the agent's data and objective. All the agents shall collaboratively find the minimizer while each agent can only communicate with its neighbors. Such a computation scheme avoids a data fusion center or long-distance communication and offers better load balance to the network. This paper proposes a novel decentralized exact first-order algorithm (abbreviated as EXTRA) to solve the consensus optimization problem. “Exact” means that it can converge to the exact solution. EXTRA uses a fixed, large step size, which can be determined independently of the network size or topology. The local variable of every a...", "In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in O(1 √ t ), the structure of the communication network only impacts a second-order term in O(1 t), where t is time. 
In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a d1 4 multiplicative factor of the optimal convergence rate, where d is the underlying dimension.", "We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.", "Most distributed machine learning systems nowadays, including TensorFlow and CNTK, are built in a centralized fashion. One bottleneck of centralized algorithms lies on high communication cost on the central node. Motivated by this, we ask, can decentralized algorithms be faster than its centralized counterpart? Although decentralized PSGD (D-PSGD) algorithms have been studied by the control community, existing analysis and theory do not show any advantage over centralized PSGD (C-PSGD) algorithms, simply assuming the application scenario where only the decentralized network is available. In this paper, we study a D-PSGD algorithm and provide the first theoretical analysis that indicates a regime in which decentralized algorithms might outperform centralized algorithms for distributed stochastic gradient descent. This is because D-PSGD has comparable total computational complexities to C-PSGD but requires much less communication cost on the busiest node. We further conduct an empirical study to validate our theoretical analysis across multiple frameworks (CNTK and Torch), different network configurations, and computation platforms up to 112 GPUs. On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts.", "" ] }
1901.08215
2914757193
A popular asynchronous protocol for decentralized optimization is randomized gossip where a pair of neighbors concurrently update via pairwise averaging. In practice, this creates deadlocks and is vulnerable to information delays. It can also be problematic if a node is unable to respond or has only access to its privately preserved local dataset. To address these issues simultaneously, this paper proposes an asynchronous decentralized algorithm, i.e., APPG, with directed communication where each node updates asynchronously and independently of any other node. If local functions are strongly-convex with Lipschitz-continuous gradients, each node of APPG converges to the same optimal solution at a rate of @math , where @math and the virtual counter @math increases by 1 no matter which node updates. The superior performance of APPG is validated on a logistic regression problem against state-of-the-art methods in terms of linear speedup and system implementations.
There are some algorithms for directed networks @cite_16 @cite_17 where fast nodes can only start to compute updates after waiting for slow nodes, which results in much idle time and thus makes them less efficient in large networks. The AllReduce-based decentralized algorithms adopt a ring graph instead of a central node to aggregate gradients from @math nodes. At each iteration, a node receives information from its predecessor and sends updated information to its successor. All nodes collect the global information after @math iterations. However, each iteration of AllReduce must be synchronized, so the approach also suffers from relatively poor scalability.
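To make the ring-based aggregation pattern concrete, the toy simulation below circulates each node's gradient around a ring so that after n-1 synchronized rounds every node holds the global sum. This only illustrates the communication pattern; production AllReduce implementations additionally pipeline chunks of the vector, which is omitted here.

```python
# Toy simulation of ring-based gradient aggregation (no pipelining/chunking).
# Each round, every node forwards the message it currently holds to its
# successor and adds the message it receives to its running sum.
import numpy as np

n = 5
local_grads = [np.full(3, float(i)) for i in range(n)]  # illustrative gradients

acc = [g.copy() for g in local_grads]     # running sum at each node
msg = [g.copy() for g in local_grads]     # message currently held by each node
for _ in range(n - 1):
    # Synchronized step: node i receives the message its predecessor held.
    msg = [msg[(i - 1) % n] for i in range(n)]
    for i in range(n):
        acc[i] += msg[i]

assert all(np.allclose(a, sum(local_grads)) for a in acc)
print("every node now holds the global sum:", acc[0])
```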
{ "cite_N": [ "@cite_16", "@cite_17" ], "mid": [ "2137435346", "2794037585" ], "abstract": [ "We consider distributed optimization by a collection of nodes, each having access to its own convex function, whose collective goal is to minimize the sum of the functions. The communications between nodes are described by a time-varying sequence of directed graphs, which is uniformly strongly connected. For such communications, assuming that every node knows its out-degree, we develop a broadcast-based algorithm, termed the subgradient-push, which steers every node to an optimal value under a standard assumption of subgradient boundedness. The subgradient-push requires no knowledge of either the number of agents or the graph sequence to implement. Our analysis shows that the subgradient-push algorithm converges at a rate of O (ln t √t), where the constant depends on the initial values at the nodes, the subgradient norms, and, more interestingly, on both the consensus speed and the imbalances of influence among the nodes.", "In this paper, we focus on solving a distributed convex optimization problem in a network, where each agent has its own convex cost function and the goal is to minimize the sum of the agents' cost functions while obeying the network connectivity structure. In order to minimize the sum of the cost functions, we consider a new distributed gradient-based method where each node maintains two estimates, namely, an estimate of the optimal decision variable and an estimate of the gradient for the average of the agents' objective functions. From the viewpoint of an agent, the information about the decision variable is pushed to the neighbors, while the information about the gradients is pulled from the neighbors (hence giving the name \"push-pull gradient method\"). The method unifies the algorithms with different types of distributed architecture, including decentralized (peer-to-peer), centralized (master-slave), and semi-centralized (leader-follower) architecture. We show that the algorithm converges linearly for strongly convex and smooth objective functions over a directed static network. In our numerical test, the algorithm performs well even for time-varying directed networks." ] }
1901.08215
2914757193
A popular asynchronous protocol for decentralized optimization is randomized gossip where a pair of neighbors concurrently update via pairwise averaging. In practice, this creates deadlocks and is vulnerable to information delays. It can also be problematic if a node is unable to respond or has only access to its privately preserved local dataset. To address these issues simultaneously, this paper proposes an asynchronous decentralized algorithm, i.e., APPG, with directed communication where each node updates asynchronously and independently of any other node. If local functions are strongly-convex with Lipschitz-continuous gradients, each node of APPG converges to the same optimal solution at a rate of @math , where @math and the virtual counter @math increases by 1 no matter which node updates. The superior performance of APPG is validated on a logistic regression problem against state-of-the-art methods in terms of linear speedup and system implementations.
Decentralized parallel optimization solves the problem by breaking the synchronization at each iteration. proposed an asynchronous algorithm for undirected graphs. The seminal work @cite_24 and the recent work @cite_4 focus on asynchronous coordinate descent algorithms. Recently, proposed an algorithm called AD-PSGD using stochastic gradients, which is an asynchronous implementation of D-PSGD @cite_13 . However, it assumes that all workers have access to the whole dataset or that the global dataset can be split according to the update frequencies of the nodes, which is restrictive in practice. Moreover, it also needs an undirected network.
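The pairwise-averaging step that such asynchronous methods build on can be illustrated with a serialized simulation: at each event one worker "fires", averages its model with a random neighbor, and applies a noisy local gradient. The ring topology, quadratic losses, and uniform event schedule below are illustrative assumptions; this is a sketch of the general gossip-SGD idea, not a reproduction of AD-PSGD.

```python
# Serialized simulation of asynchronous gossip SGD on a ring of 4 workers.
# Assumptions for illustration only: worker i minimizes 0.5 * ||x - t_i||^2
# with noisy gradients, and update events are sampled uniformly at random.
import numpy as np

rng = np.random.default_rng(1)
n, dim, step = 4, 3, 0.05
targets = [rng.normal(size=dim) for _ in range(n)]
x = [np.zeros(dim) for _ in range(n)]
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

for _ in range(3000):
    i = int(rng.integers(n))            # worker that completes an update next
    j = int(rng.choice(neighbors[i]))   # randomly chosen gossip partner
    avg = 0.5 * (x[i] + x[j])           # pairwise model averaging
    x[i], x[j] = avg.copy(), avg.copy()
    grad = x[i] - targets[i] + 0.1 * rng.normal(size=dim)  # noisy local gradient
    x[i] -= step * grad

print("average model:", np.mean(x, axis=0))
print("minimizer of the summed losses:", np.mean(targets, axis=0))
```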
{ "cite_N": [ "@cite_24", "@cite_4", "@cite_13" ], "mid": [ "2154834860", "2913431899", "2963228337" ], "abstract": [ "We present a model for asynchronous distributed computation and then proceed to analyze the convergence of natural asynchronous distributed versions of a large class of deterministic and stochastic gradient-like algorithms. We show that such algorithms retain the desirable convergence properties of their centralized counterparts, provided that the time between consecutive interprocessor communications and the communication delays are not too large.", "", "Most distributed machine learning systems nowadays, including TensorFlow and CNTK, are built in a centralized fashion. One bottleneck of centralized algorithms lies on high communication cost on the central node. Motivated by this, we ask, can decentralized algorithms be faster than its centralized counterpart? Although decentralized PSGD (D-PSGD) algorithms have been studied by the control community, existing analysis and theory do not show any advantage over centralized PSGD (C-PSGD) algorithms, simply assuming the application scenario where only the decentralized network is available. In this paper, we study a D-PSGD algorithm and provide the first theoretical analysis that indicates a regime in which decentralized algorithms might outperform centralized algorithms for distributed stochastic gradient descent. This is because D-PSGD has comparable total computational complexities to C-PSGD but requires much less communication cost on the busiest node. We further conduct an empirical study to validate our theoretical analysis across multiple frameworks (CNTK and Torch), different network configurations, and computation platforms up to 112 GPUs. On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts." ] }
1901.08227
2913259183
Recently, researchers have proposed various low-precision gradient compression methods for efficient communication in large-scale distributed optimization. Based on these works, we try to reduce the communication complexity from a new direction. We pursue an ideal bijective mapping between two spaces of gradient distribution, so that the mapped gradient carries greater information entropy after the compression. In our setting, all servers should share a reference gradient in advance, and they communicate via the normalized gradients, which are the subtraction or quotient between the current gradients and the reference. To obtain a reference vector that yields a stronger signal-to-noise ratio, dynamically in each iteration, we extract and fuse information from the past trajectory in hindsight, and search for an optimal reference for compression. We name this the trajectory-based normalized gradient (TNG). It bridges research from different communities, such as coding, optimization, systems, and learning. It is easy to implement and can be universally combined with existing algorithms. Our experiments on hard non-convex benchmark functions and convex problems such as logistic regression demonstrate that TNG is more compression-efficient for the communication of distributed optimization of general functions.
Researchers have proposed protocols from other perspectives to reduce communication. A prevailing method is to average parameters occasionally, but not too frequently @cite_25 @cite_43 , or to perform just one round of averaging over the final parameters @cite_29 . If the problem requires the servers to synchronize frequently, we can use an asynchronous protocol such as parameter servers @cite_28 @cite_6 , where each server requests the latest parameters from the main server or contributes its gradients, passively or aggressively, based on the network condition; the decentralized optimization algorithms @cite_4 @cite_27 @cite_39 treat all servers equally, to avoid communication congestion caused by the main server handling most of the requests. Efficiently using a large batch size @cite_17 @cite_26 @cite_34 @cite_21 or second-order gradient information reduces the overall number of iterations and therefore the communication. The model synchronization can also be formulated as a global consensus problem @cite_18 with a penalty on delay. Besides, the normalization idea has also been used in other areas, such as normalized gradient descent for general convex or quasi-convex optimization @cite_19 @cite_7 ; in a different context, normalization helps to stabilize the feature or gradient distributions in neural networks @cite_44 @cite_30 @cite_0 @cite_2 .
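As a rough illustration of the reference-based normalization idea behind TNG, the sketch below transmits only an 8-bit quantized difference between the current gradient and a reference vector that both sides already share. The uniform quantizer and the way the reference is chosen are assumptions made for the example; the actual TNG scheme builds its reference from the optimization trajectory and is not reproduced here.

```python
# Sketch of reference-based gradient compression: send a coarse quantization of
# (gradient - reference) and reconstruct on the receiving side.
import numpy as np

def quantize(v, bits=8):
    """Uniform quantization of v to 2**bits levels over its own range."""
    lo, hi = float(v.min()), float(v.max())
    scale = (hi - lo) / (2 ** bits - 1) if hi > lo else 1.0
    codes = np.round((v - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def dequantize(codes, lo, scale):
    return codes.astype(np.float64) * scale + lo

rng = np.random.default_rng(0)
reference = rng.normal(size=1000)                     # shared by all workers in advance
gradient = reference + 0.01 * rng.normal(size=1000)   # current gradient, close to reference

codes, lo, scale = quantize(gradient - reference)     # transmitted: 1 byte/entry + 2 floats
recovered = reference + dequantize(codes, lo, scale)
print("max reconstruction error:", np.max(np.abs(recovered - gradient)))
```

The closer the shared reference is to the true gradient, the smaller the range the quantizer has to cover and the smaller the reconstruction error for a fixed bit budget, which is the intuition the trajectory-based reference tries to exploit.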
{ "cite_N": [ "@cite_30", "@cite_29", "@cite_44", "@cite_43", "@cite_2", "@cite_18", "@cite_4", "@cite_21", "@cite_39", "@cite_17", "@cite_26", "@cite_7", "@cite_28", "@cite_6", "@cite_19", "@cite_27", "@cite_34", "@cite_25", "@cite_0" ], "mid": [ "", "2571425027", "1836465849", "", "", "130696423", "1616857247", "", "", "2108948681", "", "", "2132737349", "2127941149", "", "", "", "2123000508", "" ], "abstract": [ "", "We study two communication-efficient algorithms for distributed statistical optimization on large-scale data. The first algorithm is an averaging method that distributes the N data samples evenly to m machines, performs separate minimization on each subset, and then averages the estimates. We provide a sharp analysis of this average mixture algorithm, showing that under a reasonable set of conditions, the combined parameter achieves mean-squared error that decays as O(N-1 + (N m)-2). Whenever m ≤ √N, this guarantee matches the best possible rate achievable by a centralized algorithm having access to all N samples. The second algorithm is a novel method, based on an appropriate form of the bootstrap. Requiring only a single round of communication, it has mean-squared error that decays as O(N-1 + (N m)-3), and so is more robust to the amount of parallelization. We complement our theoretical results with experiments on large-scale problems from the internet search domain. In particular, we show that our methods efficiently solve an advertisement prediction problem from the Chinese SoSo Search Engine, which consists of N ≈ 2.4 × 108 samples and d ≥ 700,000 dimensions.", "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82 top-5 test error, exceeding the accuracy of human raters.", "", "", "Distributed optimization algorithms are highly attractive for solving big data problems. In particular, many machine learning problems can be formulated as the global consensus optimization problem, which can then be solved in a distributed manner by the alternating direction method of multipliers (ADMM) algorithm. However, this suffers from the straggler problem as its updates have to be synchronized. In this paper, we propose an asynchronous ADMM algorithm by using two conditions to control the asynchrony: partial barrier and bounded delay. The proposed algorithm has a simple structure and good convergence guarantees (its convergence rate can be reduced to that of its synchronous counterpart). 
Experiments on different distributed ADMM applications show that asynchrony reduces the time on network waiting, and achieves faster convergence than its synchronous counterpart in terms of the wall clock time.", "Consider the consensus problem of minimizing @math , where @math and each @math is only known to the individual agent @math in a connected network of @math agents. To solve this problem and obtain the solution, all the agents collaborate with their neighbors through information exchange. This type of decentralized computation does not need a fusion center, offers better network load balance, and improves data privacy. This paper studies the decentralized gradient descent method [A. Nedic and A. Ozdaglar, IEEE Trans. Automat. Control, 54 (2009), pp. 48--61], in which each agent @math updates its local variable @math by combining the average of its neighbors' with a local negative-gradient step @math . The method is described by the iteration @math where @math is nonzero only if @math and @math are neighbors or @math and the matrix...", "", "", "Mini-batch algorithms have been proposed as a way to speed-up stochastic convex optimization problems. We study how such algorithms can be improved using accelerated gradient methods. We provide a novel analysis, which shows how standard gradient methods may sometimes be insufficient to obtain a significant speed-up and propose a novel accelerated gradient algorithm, which deals with this deficiency, enjoys a uniformly superior guarantee and works well in practice.", "", "", "We propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared interface for read write access to an ML model's values (parameters and variables), and the SSP model allows distributed workers to read older, stale versions of these values from a local cache, instead of waiting to get them from a central storage. This significantly increases the proportion of time workers spend computing, as opposed to waiting. Furthermore, the SSP model ensures ML algorithm correctness by limiting the maximum age of the stale values. We provide a proof of correctness under SSP, as well as empirical results demonstrating that the SSP model achieves faster algorithm convergence on several different ML problems, compared to fully-synchronous and asynchronous schemes.", "This paper describes a third-generation parameter server framework for distributed machine learning. This framework offers two relaxations to balance system performance and algorithm efficiency. We propose a new algorithm that takes advantage of this framework to solve non-convex non-smooth problems with convergence guarantees. We present an in-depth analysis of two large scale machine learning problems ranging from l1 -regularized logistic regression on CPUs to reconstruction ICA on GPUs, using 636TB of real data with hundreds of billions of samples and dimensions. 
We demonstrate using these examples that the parameter server framework is an effective and straightforward way to scale machine learning to larger problems and systems than have been previously achieved.", "", "", "", "We study the scalability of consensus-based distributed optimization algorithms by considering two questions: How many processors should we use for a given problem, and how often should they communicate when communication is not free? Central to our analysis is a problem-specific value r which quantifies the communication computation tradeoff. We show that organizing the communication among nodes as a k-regular expander graph [1] yields speedups, while when all pairs of nodes communicate (as in a complete graph), there is an optimal number of processors that depends on r. Surprisingly, a speedup can be obtained, in terms of the time to reach a fixed level of accuracy, by communicating less and less frequently as the computation progresses. Experiments on a real cluster solving metric learning and non-smooth convex minimization tasks demonstrate strong agreement between theory and practice.", "" ] }
1901.08280
2912135812
Time series forecasting is a crucial component of many important applications, ranging from forecasting the stock markets to energy load prediction. The high-dimensionality, velocity and variety of the data collected in these applications pose significant and unique challenges that must be carefully addressed for each of them. In this work, a novel Temporal Logistic Neural Bag-of-Features approach, that can be used to tackle these challenges, is proposed. The proposed method can be effectively combined with deep neural networks, leading to powerful deep learning models for time series analysis. However, combining existing BoF formulations with deep feature extractors poses significant challenges: the distribution of the input features is not stationary, tuning the hyper-parameters of the model can be especially difficult and the normalizations involved in the BoF model can cause significant instabilities during the training process. The proposed method is capable of overcoming these limitations by employing a novel adaptive scaling mechanism and replacing the classical Gaussian-based density estimation involved in the regular BoF model with a logistic kernel. The effectiveness of the proposed approach is demonstrated using extensive experiments on a large-scale financial time series dataset that consists of more than 4 million limit orders.
This work is mainly related to time series analysis using the BoF model. An increasing number of recent works employ variants of the Bag-of-Features model to perform time series analysis, e.g., forecasting, retrieval, etc. In @cite_34 , a BoF-based method was proposed for extracting discriminative representations by employing a discriminative objective for optimizing the codebook. A dictionary learning method for the BoF model was also utilized in @cite_36 , in order to learn retrieval-oriented representations. A discriminant BoF approach for learning representations for action recognition was proposed in @cite_4 , while a dynemes-based one was introduced in @cite_30 . Other more recent approaches further adapt the procedure toward time series analysis, e.g., time series segments of various lengths were used in @cite_41 to allow for efficiently handling warping, while an approach that employs temporal modeling was proposed in @cite_13 . Quite recently, a neural formulation of the BoF model was used to perform time series analysis @cite_15 , while an extension of this method, which allows for better capturing the temporal dynamics of time series, was introduced in @cite_3 .
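For readers unfamiliar with the model underlying these works, the following is a minimal sketch of a soft Bag-of-Features representation of a time series: feature vectors are softly assigned to codewords via an RBF kernel and the assignments are averaged into a fixed-length histogram. The random codebook and the Gaussian kernel are illustrative assumptions; in the cited neural formulations the codebook corresponds to an RBF layer trained end-to-end.

```python
# Minimal sketch of a soft Bag-of-Features representation for one time series.
import numpy as np

def bof_histogram(features, codebook, sigma=1.0):
    # features: (T, d) feature vectors extracted from one time series
    # codebook: (K, d) codewords
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K)
    sim = np.exp(-d2 / (2.0 * sigma ** 2))                             # RBF similarities
    membership = sim / sim.sum(axis=1, keepdims=True)                  # soft assignment per step
    return membership.mean(axis=0)                                     # accumulation layer

rng = np.random.default_rng(0)
series_features = rng.normal(size=(200, 8))   # e.g., windowed features of one series
codebook = rng.normal(size=(16, 8))           # random codebook, for illustration only
print(bof_histogram(series_features, codebook))   # 16-dim representation, sums to 1
```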
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_36", "@cite_41", "@cite_3", "@cite_15", "@cite_34", "@cite_13" ], "mid": [ "2024362543", "2078474587", "2319521425", "1975257359", "2896419919", "2766117736", "2029651301", "2229602245" ], "abstract": [ "In this paper, we propose a novel method that performs dynamic action classification by exploiting the effectiveness of the Extreme Learning Machine (ELM) algorithm for single hidden layer feedforward neural networks training. It involves data grouping and ELM based data projection in multiple levels. Given a test action instance, a neural network is trained by using labeled action instances forming the groups that reside to the test sample's neighborhood. The action instances involved in this procedure are, subsequently, mapped to a new feature space, determined by the trained network outputs. This procedure is performed multiple times, which are determined by the test action instance at hand, until only a single class is retained. Experimental results denote the effectiveness of the dynamic classification approach, compared to the static one, as well as the effectiveness of the ELM in the proposed dynamic classification setting.", "Human action recognition based on Bag of Words representation.Discriminant codebook learning for better action class discrimination.Unified framework for the determination of both the optimized codebook and linear data projections. In this paper we propose a novel framework for human action recognition based on Bag of Words (BoWs) action representation, that unifies discriminative codebook generation and discriminant subspace learning. The proposed framework is able to, naturally, incorporate several (linear or non-linear) discrimination criteria for discriminant BoWs-based action representation. An iterative optimization scheme is proposed for sequential discriminant BoWs-based action representation and codebook adaptation based on action discrimination in a reduced dimensionality feature space where action classes are better discriminated. Experiments on five publicly available data sets aiming at different application scenarios demonstrate that the proposed unified approach increases the codebook discriminative ability providing enhanced action classification performance.", "In this paper, we present a supervised dictionary learning method for optimizing the feature-based Bag-of-Words (BoW) representation towards Information Retrieval. Following the cluster hypothesis, which states that points in the same cluster are likely to fulfill the same information need, we propose the use of an entropy-based optimization criterion that is better suited for retrieval instead of classification. We demonstrate the ability of the proposed method, abbreviated as EO-BoW, to improve the retrieval performance by providing extensive experiments on two multi-class image datasets. The BoW model can be applied to other domains as well, so we also evaluate our approach using a collection of 45 time-series datasets, a text dataset, and a video dataset. The gains are three-fold since the EO-BoW can improve the mean Average Precision, while reducing the encoding time and the database storage requirements. Finally, we provide evidence that the EO-BoW maintains its representation ability even when used to retrieve objects from classes that were not seen during the training.", "Time series classification is an important task with many challenging applications. 
A nearest neighbor (NN) classifier with dynamic time warping (DTW) distance is a strong solution in this context. On the other hand, feature-based approaches have been proposed as both classifiers and to provide insight into the series, but these approaches have problems handling translations and dilations in local patterns. Considering these shortcomings, we present a framework to classify time series based on a bag-of-features representation (TSBF). Multiple subsequences selected from random locations and of random lengths are partitioned into shorter intervals to capture the local information. Consequently, features computed from these subsequences measure properties at different locations and dilations when viewed from the original series. This provides a feature-based approach that can handle warping (although differently from DTW). Moreover, a supervised learner (that handles mixed data types, different units, etc.) integrates location information into a compact codebook through class probability estimates. Additionally, relevant global features can easily supplement the codebook. TSBF is compared to NN classifiers and other alternatives (bag-of-words strategies, sparse spatial sample kernels, shapelets). Our experimental results show that TSBF provides better results than competitive methods on benchmark datasets from the UCR time series database.", "Time-series forecasting has various applications in a wide range of domains, e.g., forecasting stock markets using limit order book data. Limit order book data provide much richer information about the behavior of stocks than its price alone, but also bear several challenges, such as dealing with multiple price depths and processing very large amounts of data of high dimensionality, velocity, and variety. A well-known approach for efficiently handling large amounts of high-dimensional data is the bag-of-features (BoF) model. However, the BoF method was designed to handle multimedia data such as images. In this paper, a novel temporal-aware neural BoF model is proposed tailored to the needs of time-series forecasting using high frequency limit order book data. Two separate sets of radial basis function and accumulation layers are used in the temporal BoF to capture both the short-term behavior and the long-term dynamics of time series. This allows for modeling complex temporal phenomena that occur in time-series data and further increase the forecasting ability of the model. Any other neural layer, such as feature transformation layers, or classifiers, such as multilayer perceptrons, can be combined with the proposed deep learning approach, which can be trained end-to-end using the back-propagation algorithm. The effectiveness of the proposed method is validated using a large-scale limit order book dataset, containing over 4.5 million limit orders, and it is demonstrated that it greatly outperforms all the other evaluated methods.", "Classification of time-series data is a challenging problem with many real-world applications, ranging from identifying medical conditions from electroencephalography (EEG) measurements to forecasting the stock market. The well known Bag-of-Features (BoF) model was recently adapted towards time-series representation. In this work, a neural generalization of the BoF model, composed of an RBF layer and an accumulation layer, is proposed as a neural layer that receives the features extracted from a time-series and gradually builds its representation. 
The proposed method can be combined with any other layer or classifier, such as fully connected layers or feature transformation layers, to form deep neural networks for time-series classification. The resulting networks are end-to-end differentiable and they can be trained using regular back-propagation. It is demonstrated, using two time-series datasets, including a large-scale financial dataset, that the proposed approach can significantly increase the classification metrics over other baseline and state-of-the-art techniques.", "In this paper, we present a novel method aiming at multidimensional sequence classification. We propose a novel sequence representation, based on its fuzzy distances from optimal representative signal instances, called statemes. We also propose a novel modified clustering discriminant analysis algorithm minimizing the adopted criterion with respect to both the data projection matrix and the class representation, leading to the optimal discriminant sequence class representation in a low-dimensional space, respectively. Based on this representation, simple classification algorithms, such as the nearest subclass centroid, provide high classification accuracy. A three step iterative optimization procedure for choosing statemes, optimal discriminant subspace and optimal sequence class representation in the final decision space is proposed. The classification procedure is fast and accurate. The proposed method has been tested on a wide variety of multidimensional sequence classification problems, including handwritten character recognition, time series classification and human activity recognition, providing very satisfactory classification results.", "Time series classification is an application of particular interest with the increase of data to monitor. Classical techniques for time series classification rely on point-to-point distances. Recently, Bag-of-Words approaches have been used in this context. Words are quantized versions of simple features extracted from sliding windows. The SIFT framework has proved efficient for image classification. In this paper, we design a time series classification scheme that builds on the SIFT framework adapted to time series to feed a Bag-of-Words. Experimental results show competitive performance with respect to classical techniques." ] }
1901.08280
2912135812
Time series forecasting is a crucial component of many important applications, ranging from forecasting the stock markets to energy load prediction. The high-dimensionality, velocity and variety of the data collected in these applications pose significant and unique challenges that must be carefully addressed for each of them. In this work, a novel Temporal Logistic Neural Bag-of-Features approach, that can be used to tackle these challenges, is proposed. The proposed method can be effectively combined with deep neural networks, leading to powerful deep learning models for time series analysis. However, combining existing BoF formulations with deep feature extractors poses significant challenges: the distribution of the input features is not stationary, tuning the hyper-parameters of the model can be especially difficult and the normalizations involved in the BoF model can cause significant instabilities during the training process. The proposed method is capable of overcoming these limitations by employing a novel adaptive scaling mechanism and replacing the classical Gaussian-based density estimation involved in the regular BoF model with a logistic kernel. The effectiveness of the proposed approach is demonstrated using extensive experiments on a large-scale financial time series dataset that consists of more than 4 million limit orders.
In contrast with @cite_3 , in this work a logistic Neural BoF formulation is used. This allows for training temporal BoF models without using any sophisticated initialization schemes and/or carefully tuning any hyper-parameter, e.g., the initial scaling factor of the kernel function that was employed in @cite_3 . Furthermore, in this work we studied the behavior of the BoF model when combined with deep feature extractors and we appropriately designed an adaptive scaling method that allows for the smooth flow of information in deep BoF-based architectures. To the best of our knowledge, this is the first work in which a deep temporal formulation of the BoF model is used with deep feature extraction layers, after appropriately adapting it to the needs of the specific application, demonstrating that it is indeed possible to learn powerful deep learning models for time series analysis that outperform other competitive state-of-the-art methods.
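To indicate where the change relative to the Gaussian-based formulation lives, the snippet below swaps the RBF similarity of the earlier BoF sketch for a logistic-type similarity. The exact kernel and the adaptive scaling rule of the proposed method are not reproduced here; the function below is only an assumed stand-in used for illustration.

```python
# Illustrative logistic-type codeword similarity (assumed form, shown only as a
# contrast to the Gaussian/RBF similarity of the standard BoF sketch above).
import numpy as np

def logistic_similarity(distances, scale=1.0):
    # Bounded similarity that decays smoothly with distance.
    return 1.0 / (1.0 + np.exp(distances / scale))

d = np.linspace(0.0, 5.0, 6)
print(logistic_similarity(d))
```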
{ "cite_N": [ "@cite_3" ], "mid": [ "2896419919" ], "abstract": [ "Time-series forecasting has various applications in a wide range of domains, e.g., forecasting stock markets using limit order book data. Limit order book data provide much richer information about the behavior of stocks than its price alone, but also bear several challenges, such as dealing with multiple price depths and processing very large amounts of data of high dimensionality, velocity, and variety. A well-known approach for efficiently handling large amounts of high-dimensional data is the bag-of-features (BoF) model. However, the BoF method was designed to handle multimedia data such as images. In this paper, a novel temporal-aware neural BoF model is proposed tailored to the needs of time-series forecasting using high frequency limit order book data. Two separate sets of radial basis function and accumulation layers are used in the temporal BoF to capture both the short-term behavior and the long-term dynamics of time series. This allows for modeling complex temporal phenomena that occur in time-series data and further increase the forecasting ability of the model. Any other neural layer, such as feature transformation layers, or classifiers, such as multilayer perceptrons, can be combined with the proposed deep learning approach, which can be trained end-to-end using the back-propagation algorithm. The effectiveness of the proposed method is validated using a large-scale limit order book dataset, containing over 4.5 million limit orders, and it is demonstrated that it greatly outperforms all the other evaluated methods." ] }
1907.09014
2962851388
Sudden changes in the dynamics of robotic tasks, such as contact with an object or the latching of a door, are often viewed as inconvenient discontinuities that make manipulation difficult. However, when these transitions are well-understood, they can be leveraged to reduce uncertainty or aid manipulation---for example, wiggling a screw to determine if it is fully inserted or not. Current model-free reinforcement learning approaches require large amounts of data to learn to leverage such dynamics, scale poorly as problem complexity grows, and do not transfer well to significantly different problems. By contrast, hierarchical planning-based methods scale well via plan decomposition and work well on a wide variety of problems, but often rely on precise hand-specified models and task decompositions. To combine the advantages of these opposing paradigms, we propose a new method, Act-CHAMP, which (1) learns hybrid kinematics models of objects from unsegmented data, (2) leverages actions, in addition to states, to outperform a state-of-the-art observation-only inference method, and (3) does so in a manner that is compatible with efficient, hierarchical POMDP planning. Beyond simply coping with challenging dynamics, we show that our end-to-end system leverages the learned kinematics to reduce uncertainty, plan efficiently, and use objects in novel ways not encountered during training.
Learning object kinematics and dynamics models directly from raw visual data is a promising direction for learning object motion models. The Embed to Control (E2C) method proposed by @cite_3 uses a novel deep probabilistic generative model to convert raw image pixels into a low-dimensional latent space, in which stochastic optimal control can be applied. developed SE3-nets @cite_12 and SE3-Pose-Nets @cite_10 to learn predictive dynamics models of object motion in a scene from input point-cloud data and applied action vectors, which can then be used to perform robot visuomotor control directly from point-cloud input. While deep neural network-based approaches have shown much potential, the biggest hurdle in applying such approaches to a wide variety of real-world robotics tasks is the need for a vast amount of training data, which is often not readily available. Also, these approaches tend to transfer poorly to new tasks. In this work, we combine model learning with generalizable planning under uncertainty to address these challenges, though deep learning methods may be useful in future work in place of our simplified perception system.
{ "cite_N": [ "@cite_10", "@cite_12", "@cite_3" ], "mid": [ "2890290306", "2963149945", "2963430173" ], "abstract": [ "", "We introduce SE3-Nets which are deep neural networks designed to model and learn rigid body motion from raw point cloud data. Based only on sequences of depth images along with action vectors and point wise data associations, SE3-Nets learn to segment effected object parts and predict their motion resulting from the applied force. Rather than learning point wise flow vectors, SE3-Nets predict SE(3) transformations for different parts of the scene. Using simulated depth data of a table top scene and a robot manipulator, we show that the structure underlying SE3-Nets enables them to generate a far more consistent prediction of object motion than traditional flow based networks. Additional experiments with a depth camera observing a Baxter robot pushing objects on a table show that SE3-Nets also work well on real data.", "We introduce Embed to Control (E2C), a method for model learning and control of non-linear dynamical systems from raw pixel images. E2C consists of a deep generative model, belonging to the family of variational autoencoders, that learns to generate image trajectories from a latent space in which the dynamics is constrained to be locally linear. Our model is derived directly from an optimal control formulation in latent space, supports long-term prediction of image sequences and exhibits strong performance on a variety of complex control problems." ] }
1907.09014
2962851388
Sudden changes in the dynamics of robotic tasks, such as contact with an object or the latching of a door, are often viewed as inconvenient discontinuities that make manipulation difficult. However, when these transitions are well-understood, they can be leveraged to reduce uncertainty or aid manipulation---for example, wiggling a screw to determine if it is fully inserted or not. Current model-free reinforcement learning approaches require large amounts of data to learn to leverage such dynamics, scale poorly as problem complexity grows, and do not transfer well to significantly different problems. By contrast, hierarchical planning-based methods scale well via plan decomposition and work well on a wide variety of problems, but often rely on precise hand-specified models and task decompositions. To combine the advantages of these opposing paradigms, we propose a new method, Act-CHAMP, which (1) learns hybrid kinematics models of objects from unsegmented data, (2) leverages actions, in addition to states, to outperform a state-of-the-art observation-only inference method, and (3) does so in a manner that is compatible with efficient, hierarchical POMDP planning. Beyond simply coping with challenging dynamics, we show that our end-to-end system leverages the learned kinematics to reduce uncertainty, plan efficiently, and use objects in novel ways not encountered during training.
Articulation motion models can also be seen as geometric constraints imposed on two or more rigid bodies. @cite_5 proposed a method, C-LEARN, to learn geometric constraints encountered in a manipulation task from non-expert human demonstrations. @cite_7 @cite_4 developed an approach to learn geometric constraints governing the relative motion between objects from human demonstrations. Their proposed approach can successfully learn geometric constraints even from noisy demonstrations. However, the use of custom force-sensitive hand-held tools to record human demonstrations restricts the generalizability of the approach to a wider set of tasks.
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_7" ], "mid": [ "2738190501", "", "2785710154" ], "abstract": [ "Learning from demonstrations has been shown to be a successful method for non-experts to teach manipulation tasks to robots. These methods typically build generative models from demonstrations and then use regression to reproduce skills. However, this approach has limitations to capture hard geometric constraints imposed by the task. On the other hand, while sampling and optimization-based motion planners exist that reason about geometric constraints, these are typically carefully hand-crafted by an expert. To address this technical gap, we contribute with C-LEARN, a method that learns multi-step manipulation tasks from demonstrations as a sequence of keyframes and a set of geometric constraints. The system builds a knowledge base for reaching and grasping objects, which is then leveraged to learn multi-step tasks from a single demonstration. C-LEARN supports multi-step tasks with multiple end effectors; reasons about SE(3) volumetric and CAD constraints, such as the need for two axes to be parallel; and offers a principled way to transfer skills between robots with different kinematics. We embed the execution of the learned tasks within a shared autonomy framework, and evaluate our approach by analyzing the success rate when performing physical tasks with a dual-arm Optimas robot, comparing the contribution of different constraints models, and demonstrating the ability of C-LEARN to transfer learned tasks by performing them with a legged dual-arm Atlas robot in simulation.", "", "This letter introduces a method for recognizing geometric constraints from human demonstrations using both position and force measurements. Our key idea is that position information alone is insufficient to determine that a constraint is active and reaction forces must also be considered to correctly distinguish constraints from movements that just happen to follow a particular geometric shape. Our techniques can detect multiple plane, arc, and line constraints in a single demonstration. Our method uses the principle of virtual work to determine reaction forces from force and position data. It fits geometric constraints locally and clusters these over the whole motion for global constraint recognition. Experimental evaluations compare our force and position constraint inference technique with a similar position-only technique and conclude that force measurements are essential in eliminating false positive detections of constraints in free space." ] }
1907.08914
2963176143
We consider the problem of locating a single facility on a vertex in a given graph based on agents' preferences, where the domain of the preferences is either single-peaked or single-dipped. Our main interest is the existence of deterministic social choice functions (SCFs) that are Pareto efficient and false-name-proof, i.e., resistant to fake votes. We show that regardless of whether preferences are single-peaked or single-dipped, such an SCF exists (i) for any tree graph, and (ii) for a cycle graph if and only if its length is less than six. We also show that when the preferences are single-peaked, such an SCF exists for any ladder (i.e., 2-by-m grid) graph, and does not exist for any larger hypergrid.
Moulin @cite_9 proposed generalized median voter schemes, which are the only deterministic, truthful, PE, and anonymous SCFs. Procaccia and Tennenholtz @cite_10 proposed a general framework of approximate mechanism design, which evaluates the worst-case performance of truthful SCFs from the perspective of the competitive ratio. Recently, some models for locating multiple heterogeneous facilities have also been studied @cite_16 @cite_20 @cite_1 . @cite_5 considered agents who dynamically arrive and depart. Some research has also considered facility location on grids @cite_28 @cite_7 and cycles @cite_13 @cite_14 @cite_6 . @cite_11 provides an overview of applications in practical decision making.
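As a concrete illustration of the generalized median voter schemes mentioned above, in the classical single-peaked setting on a line, the sketch below outputs the median of the reported peaks together with a fixed set of phantom ballots (fewer phantoms than agents). The specific phantom positions are arbitrary illustrative choices, and the graph-based settings studied in this paper are not covered by this sketch.

```python
# Minimal sketch of a Moulin-style generalized median rule on a line:
# the outcome is the median of the agents' reported peaks plus fixed
# "phantom" ballots chosen in advance (fewer phantoms than agents).
import statistics

def generalized_median(peaks, phantoms):
    assert len(phantoms) < len(peaks)          # as in Moulin's characterization
    return statistics.median(list(peaks) + list(phantoms))

reported_peaks = [0.1, 0.4, 0.9]   # agents' most-preferred locations on [0, 1]
phantoms = [0.25, 0.75]            # n - 1 phantoms, so the median is unique
print(generalized_median(reported_peaks, phantoms))   # -> 0.4
```

Because the phantom ballots are fixed in advance, no agent can move the chosen location toward its own peak by misreporting, which is the intuition behind strategy-proofness of this family of rules on single-peaked domains.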
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_7", "@cite_28", "@cite_9", "@cite_1", "@cite_6", "@cite_5", "@cite_16", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2155975756", "2015555360", "1918562889", "318376482", "2083397025", "2808656711", "2106615115", "2807889088", "2408343736", "2108957189", "", "2116725038" ], "abstract": [ "We consider the problem of locating a facility on a network represented by a graph. A set of strategic agents have different ideal locations for the facility; the cost of an agent is the distance between its ideal location and the facility. A mechanism maps the locations reported by the agents to the location of the facility. We wish to design mechanisms that are strategyproof (SP) in the sense that agents can never benefit by lying and, at the same time, provide a small approximation ratio with respect to the minimax measure. We design a novel “hybrid” strategyproof randomized mechanism that provides a tight approximation ratio of 3 2 when the network is a circle (known as a ring in the case of computer networks). Furthermore, we show that no randomized SP mechanism can provide an approximation ratio better than 2 - o (1), even when the network is a tree, thereby matching a trivial upper bound of two.", "Consider the unit circle S^1 with distance function d measured along the circle. We show that for every selection of 2n points x\"1,...,x\"n,y\"1,...,y\"[email protected]?S^1 there exists [email protected]? 1,...,n such that @?\"k\"=\"1^nd(x\"i,x\"k)@[email protected]?\"k\"=\"1^nd(x\"i,y\"k). We also discuss a game theoretic interpretation of this result.", "This paper is devoted to the location of public facilities in a metric space. Selfish agents are located in this metric space, and their aim is to minimize their own cost, which is the distance from their location to the nearest facility. A central authority has to locate the facilities in the space, but she is ignorant of the true locations of the agents. The agents will therefore report their locations, but they may lie if they have an incentive to do it. We consider two social costs in this paper: the sum of the distances of the agents to their nearest facility, or the maximal distance of an agent to her nearest facility. We are interested in designing strategy-proof mechanisms that have a small approximation ratio for the considered social cost. A mechanism is strategy-proof if no agent has an incentive to report false information. In this paper, we design strategyproof mechanisms to locate n - 1 facilities for n agents. We study this problem in the general metric and in the tree metric spaces. We provide lower and upper bounds on the approximation ratio of deterministic and randomized strategy-proof mechanisms.", "We consider the mechanism design problem for agents with single-peaked preferences over multi-dimensional domains when multiple alternatives can be chosen. Facility location and committee selection are classic embodiments of this problem. We propose a class of percentile mechanisms, a form of generalized median mechanisms, that are strategy-proof, and derive worst-case approximation ratios for social cost and maximum load for L1 and L2 cost models. More importantly, we propose a sample-based framework for optimizing the choice of percentiles relative to any prior distribution over preferences, while maintaining strategy-proofness. 
Our empirical investigations, using social cost and maximum load as objectives, demonstrate the viability of this approach and the value of such optimized mechanisms vis-a-vis mechanisms derived through worst-case analysis.", "This paper investigates one of the possible weakening of the (too demanding) assumptions of the Gibbard-Satterthwaite theorem. Namely we deal with a class of voting schemes where at the same time the domain of possible preference preordering of any agent is limited to single-peaked preferences, and the message that this agent sends to the central authority is simply its ‘peak’ — his best preferred alternative. In this context we have shown that strategic considerations justify the central role given to the Condorcet procedure which amounts to elect the ‘median’ peak: namely all strategy-proof anonymous and efficient voting schemes can be derived from the Condorcet procedure by simply adding some fixed ballots to the agent's ballots (with the only restriction that the number of fixed ballots is strictly less than the number of agents).", "We study heterogeneous k -facility location games on a real line segment. In this model there are k facilities to be placed on a line segment where each facility serves a different purpose. Thus, the preferences of the agents over the facilities can vary arbitrarily. Our goal is to design strategy proof mechanisms that locate the facilities in a way to maximize the minimum utility among the agents. For @math , if the agents' locations are known, we prove that the mechanism that locates the facility on an optimal location is strategy proof. For @math , we prove that there is no optimal strategy proof mechanism, deterministic or randomized, even when @math and there are only two agents with known locations. We derive inapproximability bounds for deterministic and randomized strategy proof mechanisms. Finally, we provide strategy proof mechanisms that achieve constant approximation. All of our mechanisms are simple and communication efficient. As a byproduct we show that some of our mechanisms can be used to achieve constant factor approximations for other objectives as the social welfare and the happiness.", "We study strategyproof (SP) mechanisms for the location of a facility on a discrete graph. We give a full characterization of SP mechanisms on lines and on sufficiently large cycles. Interestingly, the characterization deviates from the one given by Schummer and Vohra (2004) for the continuous case. In particular, it is shown that an SP mechanism on a cycle is close to dictatorial, but all agents can affect the outcome, in contrast to the continuous case. Our characterization is also used to derive a lower bound on the approximation ratio with respect to the social cost that can be achieved by an SP mechanism on certain graphs. Finally, we show how the representation of such graphs as subsets of the binary cube reveals common properties of SP mechanisms and enables one to extend the lower bound to related domains.", "Facility location is a well-studied problem in social choice literature, where agents' preferences are restricted to be single-peaked. When the number of agents is treated as a variable (e.g., not observable a priori), a social choice function must be defined so that it can accept any possible number of preferences as input. Furthermore, there exist cases where multiple choices must be made continuously while agents dynamically arrive leave. 
Under such variable and dynamic populations, a social choice function needs to give each agent an incentive to sincerely report her existence. In this paper we investigate facility location models with variable and dynamic populations. For a static, i.e., one-shot, variable population model, we provide a necessary and sufficient condition for a social choice function to satisfy participation, as well as truthfulness, anonymity, and Pareto efficiency. The condition is given as a further restriction on the well-known median voter schemes. For a dynamic model, we first propose an online social choice function, which is optimal for the total sum of the distances between the choices in the previous and current periods, among any Pareto efficient functions. We then define a generalized class of online social choice functions and compare their performances both theoretically and experimentally.", "The study of facility location in the presence of self-interested agents has recently emerged as the benchmark problem in the research on mechanism design without money. Here we study the related problem of heterogeneous 2-facility location, that features more realistic assumptions such as: (i) multiple heterogeneous facilities have to be located, (ii) agents' locations are common knowledge and (iii) agents bid for the set of facilities they are interested in. We study the approximation ratio of both deterministic and randomized truthful algorithms when the underlying network is a line. We devise an (n - 1)-approximate deterministic truthful mechanism and prove a constant approximation lower bound. Furthermore, we devise an optimal and truthful (in expectation) randomized algorithm.", "The literature on algorithmic mechanism design is mostly concerned with game-theoretic versions of optimization problems to which standard economic money-based mechanisms cannot be applied efficiently. Recent years have seen the design of various truthful approximation mechanisms that rely on payments. In this article, we advocate the reconsideration of highly structured optimization problems in the context of mechanism design. We explicitly argue for the first time that, in such domains, approximation can be leveraged to obtain truthfulness without resorting to payments. This stands in contrast to previous work where payments are almost ubiquitous and (more often than not) approximation is a necessary evil that is required to circumvent computational complexity. We present a case study in approximate mechanism design without money. In our basic setting, agents are located on the real line and the mechanism must select the location of a public facility; the cost of an agent is its distance to the facility. We establish tight upper and lower bounds for the approximation ratio given by strategyproof mechanisms without payments, with respect to both deterministic and randomized mechanisms, under two objective functions: the social cost and the maximum cost. We then extend our results in two natural directions: a domain where two facilities must be located and a domain where each agent controls multiple locations.", "", "Facility location decisions play a critical role in the strategic design of supply chain networks. In this paper, a literature review of facility location models in the context of supply chain management is given. We identify basic features that such models must capture to support decision-making involved in strategic supply chain planning. 
In particular, the integration of location decisions with other decisions relevant to the design of a supply chain network is discussed. Furthermore, aspects related to the structure of the supply chain network, including those specific to reverse logistics, are also addressed. Significant contributions to the current state-of-the-art are surveyed taking into account numerous factors. Supply chain performance measures and optimization techniques are also reviewed. Applications of facility location models to supply chain network design ranging across various industries are presented. Finally, a list of issues requiring further research are highlighted." ] }
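The cited abstracts above repeatedly rely on the median (and generalized median / percentile) rule for locating a facility given single-peaked preferences on a line. As a minimal background sketch only — the agent peaks and helper names below are illustrative assumptions, not taken from any of the cited papers — the following Python snippet implements the plain median rule and checks, on one example, that an agent cannot pull the facility closer to its true peak by misreporting. Generalized median / percentile mechanisms extend this idea by inserting fixed "phantom" peaks before taking the median.

# Minimal sketch: the median rule for single-peaked preferences on a line.
# With an odd number of agents, placing the facility at the median of the
# reported peaks is strategy-proof.

def median_rule(reported_peaks):
    """Return the facility location chosen by the median rule."""
    ordered = sorted(reported_peaks)
    return ordered[len(ordered) // 2]  # exact median when the number of agents is odd

def cost(agent_peak, facility):
    """An agent's cost is its distance to the facility."""
    return abs(agent_peak - facility)

true_peaks = [0.1, 0.4, 0.9]            # illustrative, assumed peak locations
honest = median_rule(true_peaks)        # facility placed at 0.4

# Agent 0 (true peak 0.1) tries to pull the facility toward itself by lying.
manipulated = median_rule([0.0, 0.4, 0.9])
assert cost(0.1, manipulated) >= cost(0.1, honest)  # the misreport does not help
print("honest location:", honest, "after misreport:", manipulated)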
1907.08914
2963176143
We consider the problem of locating a single facility on a vertex in a given graph based on agents' preferences, where the domain of the preferences is either single-peaked or single-dipped. Our main interest is the existence of deterministic social choice functions (SCFs) that are Pareto efficient and false-name-proof, i.e., resistant to fake votes. We show that regardless of whether preferences are single-peaked or single-dipped, such an SCF exists (i) for any tree graph, and (ii) for a cycle graph if and only if its length is less than six. We also show that when the preferences are single-peaked, such an SCF exists for any ladder (i.e., 2-by-m grid) graph, and does not exist for any larger hypergrid.
Over the last decade, false-name-proofness has been scrutinized in various mechanism design problems @cite_27 @cite_18 @cite_25 @cite_8 , as a refinement of truthfulness for open and anonymous environments such as the internet. Bu @cite_19 clarified a connection between false-name-proofness and population monotonicity in general social choice. @cite_2 also addressed FNP SCFs that are associated with monetary compensation. @cite_21 considered the case of locating two homogeneous facilities. @cite_4 studied some discrete structures, but focused on randomized SCFs. One of the works most similar to this paper is @cite_17 , which also clarified the network structures under which FNP and PE SCFs exist for single-peaked preferences. One clear difference from ours is that their paper proposed a new class of graphs, called ZV-line graphs, as a generalization of path graphs, whereas in our paper we investigate well-known existing structures, namely tree, hypergrid, and cycle graphs. ZV-line graphs contain any tree and any ladder (i.e., @math -grid for arbitrary @math ), but do not cover the other graphs considered in this paper, such as larger (hyper-)grid graphs and cycle graphs of length greater than three.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_8", "@cite_21", "@cite_19", "@cite_27", "@cite_2", "@cite_25", "@cite_17" ], "mid": [ "1788052381", "2763993194", "2128020465", "2575015897", "2140361507", "2131402800", "2251176627", "", "2963817462" ], "abstract": [ "An important aspect of mechanism design in social choice protocols and multiagent systems is to discourage insincere and manipulative behaviour. We examine the computational complexity of false-name manipulation in weighted voting games which are an important class of coalitional voting games. Weighted voting games have received increased interest in the multiagent community due to their compact representation and ability to model coalitional formation scenarios. Bachrach and Elkind in their AAMAS 2008 paper examined divide and conquer false-name manipulation in weighted voting games from the point of view of Shapley-Shubik index. We analyse the corresponding case of the Banzhaf index and check how much the Banzhaf index of a player increases or decreases if it splits up into sub-players. A pseudo-polynomial algorithm to find the optimal split is also provided. Bachrach and Elkind also mentioned manipulation via merging as an open problem. In the paper, we examine the cases where a player annexes other players or merges with them to increase their Banzhaf index or Shapley-Shubik index payoff. We characterize the computational complexity of such manipulations and provide limits to the manipulation. The annexation non-monotonicity paradox is also discovered in the case of the Banzhaf index. The results give insight into coalition formation and manipulation.", "We consider the problem of locating facilities on a discrete acyclic graph, where agents’ locations are publicly known and the agents are requested to report their demands, i.e., which facilities they want to access. In this paper, we study the effect of manipulations by agents that utilize vacant vertices. Such manipulations are called rename or false-name manipulations in game theory and mechanism design literature. For locating one facility on a path, we carefully compare our model with traditional ones and clarify their differences by pointing out that some existing results in the traditional model do not carry over to our model. For locating two facilities, we analyze the existing and new mechanisms from a perspective of approximation ratio and provide non-trivial lower bounds. Finally, we introduce a new mechanism design model where richer information is available to the mechanism designer and show that under the new model false-name-proofness does not always imply population monotonicity.", "Cake cutting has been recognized as a fundamental model in fair division and several envy-free cake cutting algorithms have been proposed Recent works from the computer science field proposed novel mechanisms for cake cutting, whose approaches are based on the theory of mechanism design; these mechanisms are strategy-proof, i.e., no agent has any incentive to misrepresent her utility function, as well as envy-free. We consider a different type of manipulations; each agent might create fake identities to cheat the mechanism. Such manipulation have been called Sybils or false-name manipulations, and designing robust mechanisms against them, i.e., false-name-proof, is a challenging problem in mechanism design literature. 
We first show that no randomized false-name-proof cake cutting mechanism simultaneously satisfies ex-post envy-freeness and Pareto efficiency We then propose a new randomized mechanism that is optimal in terms of worst-case loss among those that satisfy false-name-proofness, ex-post envy-freeness, and a new weaker efficiency property. However, it reduces the amount of allocations for an agent exponentially with respect to the number of agents. To overcome this negative result, we provide another new cake cutting mechanism that satisfies a weaker notion of false-name-proofness, as well as ex-post envy freeness and Pareto efficiency.", "This paper considers a mechanism design problem for locating two identical facilities on an interval, in which an agent can pretend to be multiple agents. A mechanism selects a pair of locations on the interval according to the declared single-peaked preferences of agents. An agent's utility is determined by the location of the better one (typically the closer to her ideal point). This model can represent various application domains. For example, assume a company is going to release two models of its product line and performs a questionnaire survey in an online forum to determine their detailed specs. Typically, a customer will buy only one model, but she can answer multiple times by logging onto the forum under several email accounts. We first characterize possible outcomes of mechanisms that satisfy false-name-proofness, as well as some mild conditions. By extending the result, we completely characterize the class of false-name-proof mechanisms when locating two facilities on a circle. We then clarify the approximation ratios of the false-name-proof mechanisms on a line metric for the social and maximum costs.", "Matching a set of agents to a set of objects has many real applications. One well-studied framework is that of priority-based matching, in which each object is assumed to have a priority order over the agents. The Deferred Acceptance (DA) and Top-Trading-Cycle (TTC) mechanisms are the best-known strategy-proof mechanisms. However, in highly anonymous environments, the set of agents is not known a priori, and it is more natural for objects to instead have priorities over characteristics (e.g., the student's GPA or home address). In this paper, we extend the model so that each agent reports not only its preferences over objects, but also its characteristic. We derive results for various notions of strategy-proofness and false-name-proofness, corresponding to whether agents can only report weaker characteristics or also incomparable or stronger ones, and whether agents can only claim objects allocated to their true accounts or also those allocated to their fake accounts. Among other results, we show that DA and TTC satisfy a weak version of false-name-proofness. Furthermore, DA also satisfies a strong version of false-name-proofness, while TTC fails to satisfy it without an acyclicity assumption on priorities.", "We examine the effect of false-name bids on combinatorial auction protocols. False-name bids are bids submitted by a single bidder using multiple identifiers such as multiple e-mail addresses. 
The obtained results are summarized as follows: (1) the Vickrey–Clarke–Groves (VCG) mechanism, which is strategy-proof and Pareto efficient when there exists no false-name bid, is not falsename-proof; (2) there exists no false-name-proof combinatorial auction protocol that satisfies Pareto efficiency; (3) one sufficient condition where the VCG mechanism is false-name-proof is identified, i.e., the concavity of a surplus function over bidders.", "The class of Groves mechanisms has been attracting much attention in called social welfare maximization) and dominant strategy incentive compatibility. However, when strategic agents can create multiple fake identities and reveal more than one preference under them, a refined characteristic called false-name-proofness is required. Utilitarian efficiency and false-name-proofness are incompatible in combinatorial auctions, if we also have individual rationality as a desired condition. However, although individual rationality is strongly desirable, if participation is mandatory due to social norms or reputations, a mechanism without individual rationality can be sustained. In this paper we investigate the relationship between utilitarian efficiency and false-name-proofness in a social choice environment with monetary transfers. We show that in our modelization no mechanism simultaneously satisfies utilitarian efficiency, false-name-proofness, and individual rationality. Considering this fact, we ignore individual rationality and design various mechanisms that simultaneously satisfy the other two properties. We also compare our different mechanisms in terms of the distance to individual rationality. Finally we illustrate our mechanisms on a facility location problem.", "", "In many real-life scenarios, a group of agents needs to agree on a common action, e.g., on a location for a public facility, while there is some consistency between their preferences, e.g., all preferences are derived from a common metric space. The facility location problem models such scenarios and it is a well-studied problem in social choice. We study mechanisms for facility location on unweighted undirected graphs, which are resistant to manipulations (strategyproof, abstention-proof, and false-name-proof ) by both individuals and coalitions and are efficient (Pareto optimal). We define a family of graphs, ZV -line graphs, and show a general facility location mechanism for these graphs which satisfies all these desired properties. Moreover, we show that this mechanism can be computed in polynomial time, the mechanism is anonymous, and it can equivalently be defined as the first Pareto optimal location according to some predefined order. Our main result, the ZV -line graphs family and the mechanism we present for it, unifies the few current works in the literature of false-name-proof facility location on discrete graphs, including the preliminary (unpublished) works we are aware of. Finally, we discuss some generalizations and limitations of our result for problems of facility location on other structures." ] }
1907.08914
2963176143
We consider the problem of locating a single facility on a vertex in a given graph based on agents' preferences, where the domain of the preferences is either single-peaked or single-dipped. Our main interest is the existence of deterministic social choice functions (SCFs) that are Pareto efficient and false-name-proof, i.e., resistant to fake votes. We show that regardless of whether preferences are single-peaked or single-dipped, such an SCF exists (i) for any tree graph, and (ii) for a cycle graph if and only if its length is less than six. We also show that when the preferences are single-peaked, such an SCF exists for any ladder (i.e., 2-by-m grid) graph, and does not exist for any larger hypergrid.
Locating a public bad has also been widely studied in both the economics and computer science fields. Manjunath @cite_24 characterized truthful SCFs on an interval. @cite_23 studied the model for locating two public bads. Feigenbaum and Sethuraman @cite_29 considered the cases where single-peaked and single-dipped preferences coexist. Nevertheless, all of these works focused only on truthful SCFs. To the best of our knowledge, this paper is the first work on FNP facility location with single-dipped preferences.
{ "cite_N": [ "@cite_24", "@cite_29", "@cite_23" ], "mid": [ "1965412835", "", "2478262890" ], "abstract": [ "We study the problem of locating a single public good along a segment when agents have single-dipped preferences. We ask whether there are unanimous and strategy-proof rules for this model. The answer is positive and we characterize all such rules. We generalize our model to allow the set of alternatives to be unbounded. If the set of alternatives does not have a maximal and a minimal element, there is no meaningful notion of efficiency. However, we show that the range of every strategy-proof rule has a maximal and a minimal element. We then characterize all strategy-proof rules. Copyright Springer-Verlag Berlin Heidelberg 2014", "", "We consider the joint decision of placing public bads in each of two neighboring countries, modeled by two adjacent line segments. Residents of the two countries have single-dipped preferences, determined by the distance of their dips to the nearer public bad (myopic preferences) or, lexicographically, by the distance to the nearer and the other public bad (lexmin preferences). A (social choice) rule takes a profile of reported preferences as input and assigns the location of the public bad in each country. For the case of myopic preferences, all rules satisfying strategy-proofness, country-wise Pareto optimality, non-corruptibility, and the far away condition are characterized. These rules pick only border locations. The same holds for lexmin preferences under strategy-proofness and country-wise Pareto optimality alone." ] }
1907.08937
2963694998
We introduce a conceptually simple and effective method to quantify the similarity between relations in knowledge bases. Specifically, our approach is based on the divergence between the conditional probability distributions over entity pairs. In this paper, these distributions are parameterized by a very simple neural network. Although computing the exact similarity is in-tractable, we provide a sampling-based method to get a good approximation. We empirically show the outputs of our approach significantly correlate with human judgments. By applying our method to various tasks, we also find that (1) our approach could effectively detect redundant relations extracted by open information extraction (Open IE) models, that (2) even the most competitive models for relational classification still make mistakes among very similar relations, and that (3) our approach could be incorporated into negative sampling and softmax classification to alleviate these mistakes. The source code and experiment details of this paper can be obtained from this https URL.
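The abstract above describes measuring relation similarity by the divergence between conditional distributions over entity pairs, approximated by sampling. As a hedged sketch of that general idea only — the toy distributions, the choice of KL divergence, and the function names are our assumptions, not the paper's actual neural parameterization — a Monte Carlo estimate of KL(P||Q) from samples of P can be written as follows.

import math, random

def kl_monte_carlo(p_sample, p_prob, q_prob, n=10000):
    """Estimate KL(P || Q) = E_{x~P}[log P(x) - log Q(x)] by sampling from P.
    p_sample() draws one item from P; p_prob / q_prob are probability mass functions."""
    total = 0.0
    for _ in range(n):
        x = p_sample()
        total += math.log(p_prob(x)) - math.log(q_prob(x))
    return total / n

# Toy conditional distributions over three "entity pairs" a, b, c (illustrative only).
P = {"a": 0.7, "b": 0.2, "c": 0.1}
Q = {"a": 0.5, "b": 0.3, "c": 0.2}
sample_p = lambda: random.choices(list(P), weights=P.values())[0]
estimate = kl_monte_carlo(sample_p, P.get, Q.get)
exact = sum(p * math.log(p / Q[k]) for k, p in P.items())
print(f"estimate {estimate:.3f} vs exact {exact:.3f}")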
In many early works in psychology and linguistics, especially those exploring semantic similarity @cite_3 @cite_25 , researchers empirically found that semantic relations among words and contexts fall into a variety of categories. To promote research on these different semantic relations, later efforts explicitly defined them and systematically organized the rich semantic relations between words in databases. To identify the correlations and distinctions between different semantic relations, and thereby support learning semantic similarity, various methods have attempted to measure relational similarity @cite_1 @cite_17 @cite_48 @cite_41 @cite_34 @cite_19 @cite_40 .
{ "cite_N": [ "@cite_41", "@cite_48", "@cite_1", "@cite_3", "@cite_19", "@cite_40", "@cite_34", "@cite_25", "@cite_17" ], "mid": [ "35945896", "620279967", "1654905138", "2103318667", "2153579005", "1614298861", "36087583", "1659833910", "2109830295" ], "abstract": [ "This paper describes the Duluth systems that participated in Task 2 of SemEval-2012. These systems were unsupervised and relied on variations of the Gloss Vector measure found in the freely available software package WordNet:: Similarity. This method was moderately successful for the Class-Inclusion, Similar, Contrast, and Non-Attribute categories of semantic relations, but mimicked a random baseline for the other six categories.", "In this work, we study the problem of measuring relational similarity between two word pairs (e.g., silverware:fork and clothing:shirt). Due to the large number of possible relations, we argue that it is important to combine multiple models based on heterogeneous information sources. Our overall system consists of two novel general-purpose relational similarity models and three specific word relation models. When evaluated in the setting of a recently proposed SemEval-2012 task, our approach outperforms the previous best system substantially, achieving a 54.1 relative increase in Spearman’s rank correlation.", "This paper introduces Latent Relational Analysis (LRA), a method for measuring semantic similarity. LRA measures similarity in the semantic relations between two pairs of words. When two pairs have a high degree of relational similarity, they are analogous. For example, the pair cat:meow is analogous to the pair dog:bark. There is evidence from cognitive science that relational similarity is fundamental to many cognitive and linguistic tasks (e.g., analogical reasoning). In the Vector Space Model (VSM) approach to measuring relational similarity, the similarity between two pairs is calculated by the cosine of the angle between the vectors that represent the two pairs. The elements in the vectors are based on the frequencies of manually constructed patterns in a large corpus. LRA extends the VSM approach in three ways: (1) patterns are derived automatically from the corpus, (2) Singular Value Decomposition is used to smooth the frequency data, and (3) synonyms are used to reformulate word pairs. This paper describes the LRA algorithm and experimentally compares LRA to VSM on two tasks, answering college-level multiple-choice word analogy questions and classifying semantic relations in noun-modifier expressions. LRA achieves state-of-the-art results, reaching human-level performance on the analogy questions and significantly exceeding VSM performance on both tasks.", "Abstract The relationship between semantic and contextual similarity is investigated for pairs of nouns that vary from high to low semantic similarity. Semantic similarity is estimated by subjective ratings; contextual similarity is estimated by the method of sorting sentential contexts. The results show an inverse linear relationship between similarity of meaning and the discriminability of contexts. This relation, is obtained for two separate corpora of sentence contexts. 
It is concluded that, on average, for words in the same language drawn from the same syntactic and semantic categories, the more often two words can be substituted into the same contexts the more similar in meaning they are judged to be.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "", "In this paper we present our approach for assigning degrees of relational similarity to pairs of words in the SemEval-2012 Task 2. To measure relational similarity we employed lexical patterns that can match against word pairs within a large corpus of 12 million documents. Patterns are weighted by obtaining statistically estimated lower bounds on their precision for extracting word pairs from a given relation. Finally, word pairs are ranked based on a model predicting the probability that they belong to the relation of interest. This approach achieved the best results on the SemEval 2012 Task 2, obtaining a Spearman correlation of 0.229 and an accuracy on reproducing human answers to MaxDiff questions of 39.4 .", "This article presents a measure of semantic similarity in an IS-A taxonomy based on the notion of shared information content. Experimental evaluation against a benchmark set of human similarity judgments demonstrates that the measure performs better than the traditional edge-counting approach. The article presents algorithms that take advantage of taxonomic similarity in resolving syntactic and semantic ambiguity, along with experimental results demonstrating their effectiveness.", "There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This article introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47 on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. 
LRA extends the VSM approach in three ways: (1) The patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56 on the 374 analogy questions, statistically equivalent to the average human score of 57 . On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM." ] }
1907.08823
2963452950
This paper augments the reward received by a reinforcement learning agent with potential functions in order to help the agent learn (possibly stochastic) optimal policies. We show that a potential-based reward shaping scheme is able to preserve optimality of stochastic policies, and demonstrate that the ability of an agent to learn an optimal policy is not affected when this scheme is augmented to soft Q-learning. We propose a method to impart potential based advice schemes to policy gradient algorithms. An algorithm that considers an advantage actor-critic architecture augmented with this scheme is proposed, and we give guarantees on its convergence. Finally, we evaluate our approach on a puddle-jump grid world with indistinguishable states, and the continuous state and action mountain car environment from classical control. Our results indicate that these schemes allow the agent to learn a stochastic optimal policy faster and obtain a higher average reward.
Entropy regularization as a way to encourage exploration of policies during the early stages of learning was studied in @cite_14 and @cite_20 . It was also used to lead a policy towards states with a high reward in guided policy search (levine2013guided) and in @cite_26 .
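As a minimal illustration of the entropy regularization mentioned here — the coefficient beta and the example probabilities are illustrative assumptions, not values from the cited works — an entropy bonus added to a one-step policy-gradient objective for a categorical policy looks roughly like this. A larger beta pushes the policy toward higher-entropy (more exploratory) action distributions.

import math

def entropy(probs):
    """Shannon entropy of a categorical action distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def pg_objective(log_prob_taken, advantage, probs, beta=0.01):
    """Entropy-regularized policy-gradient objective for one transition:
    maximize  advantage * log pi(a|s)  +  beta * H(pi(.|s))."""
    return advantage * log_prob_taken + beta * entropy(probs)

# Illustrative only: a near-deterministic policy earns a smaller entropy bonus
# than a more exploratory one.
peaked = [0.97, 0.02, 0.01]
spread = [0.4, 0.3, 0.3]
print(entropy(peaked), "<", entropy(spread))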
{ "cite_N": [ "@cite_14", "@cite_20", "@cite_26" ], "mid": [ "1993411524", "2964043796", "2964161785" ], "abstract": [ "Any non-associative reinforcement learning algorithm can be viewed as a method for performing function optimization through (possibly noise-corrupted) sampling of function values. We describe the results of simulations in which the optima of several deterministic functions studied by Ackley were sought using variants of REINFORCE algorithms. Some of the algorithms used here incorporated additional heuristic features resembling certain aspects of some of the algorithms used in Ackley's studies. Differing levels of performance were achieved by the various algorithms investigated, but a number of them performed at a level comparable to the best found in Ackley's studies on a number of the tasks, in spite of their simplicity. One of these variants, called REINFORCE MENT, represents a novel but principled approach to reinforcement learning in nontrivial networks which incorporates an entropy maximization strategy. This was found to perform especially well on more hierarchically organized tasks.", "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.", "Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods." ] }
1907.08823
2963452950
This paper augments the reward received by a reinforcement learning agent with potential functions in order to help the agent learn (possibly stochastic) optimal policies. We show that a potential-based reward shaping scheme is able to preserve optimality of stochastic policies, and demonstrate that the ability of an agent to learn an optimal policy is not affected when this scheme is augmented to soft Q-learning. We propose a method to impart potential based advice schemes to policy gradient algorithms. An algorithm that considers an advantage actor-critic architecture augmented with this scheme is proposed, and we give guarantees on its convergence. Finally, we evaluate our approach on a puddle-jump grid world with indistinguishable states, and the continuous state and action mountain car environment from classical control. Our results indicate that these schemes allow the agent to learn a stochastic optimal policy faster and obtain a higher average reward.
Static potential-based functions were shown to preserve the optimality of deterministic policies in @cite_15 . This property was extended to dynamic potential-based functions in @cite_8 . The authors of @cite_24 showed that when an agent learned a policy using Q-learning, applying PBRS at each training step was equivalent to initializing the Q-function with the potentials. They studied value-based methods, but restricted their focus to learning deterministic policies. The authors of @cite_36 demonstrated a method to transform a reward function into a potential-based function during training. The potential function in PBA was obtained using an 'experience filter' in @cite_0 .
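For reference, the potential-based shaping term that these results revolve around adds F(s, s') = gamma * Phi(s') - Phi(s) to the environment reward. The sketch below is a generic illustration of that standard form; the one-dimensional toy state and the particular potential function are our own assumptions, not taken from @cite_15 or @cite_0 .

def shaped_reward(r, phi, s, s_next, gamma=0.99, done=False):
    """Potential-based reward shaping: add gamma * phi(s') - phi(s) to the reward.
    Using phi(terminal) = 0 keeps the shaping well defined at episode ends."""
    next_potential = 0.0 if done else phi(s_next)
    return r + gamma * next_potential - phi(s)

# Toy example: a potential that increases as an assumed 1-D state approaches a goal at 10.
phi = lambda state: -abs(state - 10)
print(shaped_reward(r=0.0, phi=phi, s=3, s_next=4))   # positive shaping: moved closer
print(shaped_reward(r=0.0, phi=phi, s=4, s_next=3))   # negative shaping: moved away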
{ "cite_N": [ "@cite_8", "@cite_36", "@cite_24", "@cite_0", "@cite_15" ], "mid": [ "2151382427", "2202549229", "2130750514", "2808546214", "1777239053" ], "abstract": [ "Potential-based reward shaping can significantly improve the time needed to learn an optimal policy and, in multi-agent systems, the performance of the final joint-policy. It has been proven to not alter the optimal policy of an agent learning alone or the Nash equilibria of multiple agents learning together. However, a limitation of existing proofs is the assumption that the potential of a state does not change dynamically during the learning. This assumption often is broken, especially if the reward-shaping function is generated automatically. In this paper we prove and demonstrate a method of extending potential-based reward shaping to allow dynamic shaping and maintain the guarantees of policy invariance in the single-agent case and consistent Nash equilibria in the multi-agent case.", "Effectively incorporating external advice is an important problem in reinforcement learning, especially as it moves into the real world. Potential-based reward shaping is a way to provide the agent with a specific form of additional reward, with the guarantee of policy invariance. In this work we give a novel way to incorporate an arbitrary reward function with the same guarantee, by implicitly translating it into the specific form of dynamic advice potentials, which are maintained as an auxiliary value function learnt at the same time. We show that advice provided in this way captures the input reward function in expectation, and demonstrate its efficacy empirically.", "Shaping has proven to be a powerful but precarious means of improving reinforcement learning performance. Ng, Harada, and Russell (1999) proposed the potential-based shaping algorithm for adding shaping rewards in a way that guarantees the learner will learn optimal behavior. In this note, we prove certain similarities between this shaping algorithm and the initialization step required for several reinforcement learning algorithms. More specifically, we prove that a reinforcement learner with initial Q-values based on the shaping algorithm's potential function make the same updates throughout learning as a learner receiving potential-based shaping rewards. We further prove that under a broad category of policies, the behavior of these two learners are indistinguishable. The comparison provides intuition on the theoretical properties of the shaping algorithm as well as a suggestion for a simpler method for capturing the algorithm's benefit. In addition, the equivalence raises previously unaddressed issues concerning the efficiency of learning with potential-based shaping.", "Reinforcement learning is a paradigm to model how an autonomous agent learns to maximise its cumulative reward by interacting with the environment. One challenge faced by reinforcement learning is that in many environments the reward signal is sparse, leading to slow improvement of the agent's performance in early learning episodes. Potential-based reward shaping is a technique to resolve the aforementioned issue of sparse reward by incorporating an expert's domain knowledge in the learning via a potential function. Past work on reinforcement learning from demonstration directly mapped (sub-optimal) human expert demonstration to a potential function, which can speed up reinforcement learning. 
In this paper we propose an introspective reinforcement learning agent that significantly speeds up the learning further. An introspective Reinforcement learning agent records its state-action decisions and experience during learning in a priority queue. Good quality decisions will be kept in the queue, while poorer decisions will be rejected. The queue is then used as demonstration to speed up reinforcement learning via reward shaping. A human expert's demonstration can be used to initialise the priority queue before the learning process starts. Experimental validations in the 4-dimensional CartPole domain and the 27-dimensional Super Mario AI domain show that our approach significantly outperforms state-of-the-art approaches to reinforcement learning from demonstration in both domains.", "" ] }
1907.08823
2963452950
This paper augments the reward received by a reinforcement learning agent with potential functions in order to help the agent learn (possibly stochastic) optimal policies. We show that a potential-based reward shaping scheme is able to preserve optimality of stochastic policies, and demonstrate that the ability of an agent to learn an optimal policy is not affected when this scheme is augmented to soft Q-learning. We propose a method to impart potential based advice schemes to policy gradient algorithms. An algorithm that considers an advantage actor-critic architecture augmented with this scheme is proposed, and we give guarantees on its convergence. Finally, we evaluate our approach on a puddle-jump grid world with indistinguishable states, and the continuous state and action mountain car environment from classical control. Our results indicate that these schemes allow the agent to learn a stochastic optimal policy faster and obtain a higher average reward.
The use of PBRS in model-based RL was studied in @cite_6 , and for episodic RL in @cite_4 . PBRS was extended to planning in partially observable domains in @cite_28 . However, these papers only considered the finite-horizon case. In comparison, we consider the infinite horizon, discounted cost setting in this paper.
{ "cite_N": [ "@cite_28", "@cite_4", "@cite_6" ], "mid": [ "2061902782", "2620974420", "" ], "abstract": [ "In this paper, we address the problem of suboptimal behavior during online partially observable Markov decision process (POMDP) planning caused by time constraints on planning. Taking inspiration from the related field of reinforcement learning (RL), our solution is to shape the agent's reward function in order to lead the agent to large future rewards without having to spend as much time explicitly estimating cumulative future rewards, enabling the agent to save time to improve the breadth planning and build higher quality plans. Specifically, we extend potential-based reward shaping (PBRS) from RL to online POMDP planning. In our extension, information about belief states is added to the function optimized by the agent during planning. This information provides hints of where the agent might find high future rewards beyond its planning horizon, and thus achieve greater cumulative rewards. We develop novel potential functions measuring information useful to agent metareasoning in POMDPs (reflecting on agent knowledge and or histories of experience with the environment), theoretically prove several important properties and benefits of using PBRS for online POMDP planning, and empirically demonstrate these results in a range of classic benchmark POMDP planning problems.", "Recent advancements in reinforcement learning confirm that reinforcement learning techniques can solve large scale problems leading to high quality autonomous decision making. It is a matter of time until we will see large scale applications of reinforcement learning in various sectors, such as healthcare and cyber-security, among others. However, reinforcement learning can be time-consuming because the learning algorithms have to determine the long term consequences of their actions using delayed feedback or rewards. Reward shaping is a method of incorporating domain knowledge into reinforcement learning so that the algorithms are guided faster towards more promising solutions. Under an overarching theme of episodic reinforcement learning, this paper shows a unifying analysis of potential-based reward shaping which leads to new theoretical insights into reward shaping in both model-free and model-based algorithms, as well as in multi-agent reinforcement learning.", "" ] }
1907.08906
2963652799
In this paper, we consider the colorful @math -center problem, which is a generalization of the well-known @math -center problem. Here, we are given red and blue points in a metric space, and a coverage requirement for each color. The goal is to find the smallest radius @math , such that with @math balls of radius @math , the desired number of points of each color can be covered. We obtain a constant approximation for this problem in the Euclidean plane. We obtain this result by combining a "pseudo-approximation" algorithm that works in any metric space, and an approximation algorithm that works for a special class of instances in the plane. The latter algorithm uses a novel connection to a certain matching problem in graphs.
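For background only: the classic greedy (farthest-first) 2-approximation for the plain, uncolored k-center problem is sketched below in Python. It is not the pseudo-approximation algorithm of this paper, and the sample points are illustrative assumptions; the sketch is included just to make the flavor of k-center concrete before the outlier and colorful variants discussed next.

import math

def farthest_first_kcenter(points, k):
    """Greedy 2-approximation for plain k-center: repeatedly open the point
    farthest from the centers chosen so far, then report the covering radius."""
    dist = lambda p, q: math.dist(p, q)
    centers = [points[0]]                      # arbitrary first center
    while len(centers) < k:
        farthest = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(farthest)
    radius = max(min(dist(p, c) for c in centers) for p in points)
    return centers, radius

pts = [(0, 0), (1, 0), (0, 1), (10, 10), (10, 11), (20, 0)]  # illustrative points
centers, r = farthest_first_kcenter(pts, k=3)
print(centers, r)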
The @math -means and @math -median are classic NP-hard clustering problems that are closely related to the @math -center problem. Like the @math -center problem, these problems have been extensively studied, resulting in different approaches guaranteeing constant factor approximations. More recently, the outlier versions of these problems were also studied; constant factor approximations were obtained for @math -median with outliers @cite_18 @cite_13 and @math -means with outliers @cite_13 . A polynomial time bicriteria @math -approximation using at most @math centers for any @math is known in low-dimensional Euclidean spaces and in metric spaces with constant doubling dimension @cite_21 .
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_13" ], "mid": [ "2048974928", "2734881792", "2767218854" ], "abstract": [ "We consider the k-median clustering with outliers problem: Given a finite point set in a metric space and parameters k and m, we want to remove m points (called outliers), such that the cost of the optimal k-median clustering of the remaining points is minimized. We present the first polynomial time constant factor approximation algorithm for this problem.", "Clustering problems are well-studied in a variety of fields such as data science, operations research, and computer science. Such problems include variants of centre location problems, @math -median, and @math -means to name a few. In some cases, not all data points need to be clustered; some may be discarded for various reasons. We study clustering problems with outliers. More specifically, we look at Uncapacitated Facility Location (UFL), @math -Median, and @math -Means. In UFL with outliers, we have to open some centres, discard up to @math points of @math and assign every other point to the nearest open centre, minimizing the total assignment cost plus centre opening costs. In @math -Median and @math -Means, we have to open up to @math centres but there are no opening costs. In @math -Means, the cost of assigning @math to @math is @math . We present several results. Our main focus is on cases where @math is a doubling metric or is the shortest path metrics of graphs from a minor-closed family of graphs. For uniform-cost UFL with outliers on such metrics we show that a multiswap simple local search heuristic yields a PTAS. With a bit more work, we extend this to bicriteria approximations for the @math -Median and @math -Means problems in the same metrics where, for any constant @math , we can find a solution using @math centres whose cost is at most a @math -factor of the optimum and uses at most @math outliers. We also show that natural local search heuristics that do not violate the number of clusters and outliers for @math -Median (or @math -Means) will have unbounded gap even in Euclidean metrics. Furthermore, we show how our analysis can be extended to general metrics for @math -Means with outliers to obtain a @math bicriteria.", "In this paper, we present a new iterative rounding framework for many clustering problems. Using this, we obtain an (α1 + є ≤ 7.081 + є)-approximation algorithm for k-median with outliers, greatly improving upon the large implicit constant approximation ratio of Chen. For k-means with outliers, we give an (α2+є ≤ 53.002 + є)-approximation, which is the first O(1)-approximation for this problem. The iterative algorithm framework is very versatile; we show how it can be used to give α1- and (α1 + є)-approximation algorithms for matroid and knapsack median problems respectively, improving upon the previous best approximations ratios of 8 due to Swamy and 17.46 due to The natural LP relaxation for the k-median k-means with outliers problem has an unbounded integrality gap. In spite of this negative result, our iterative rounding framework shows that we can round an LP solution to an almost-integral solution of small cost, in which we have at most two fractionally open facilities. Thus, the LP integrality gap arises due to the gap between almost-integral and fully-integral solutions. Then, using a pre-processing procedure, we show how to convert an almost-integral solution to a fully-integral solution losing only a constant-factor in the approximation ratio. 
By further using a sparsification technique, the additive factor loss incurred by the conversion can be reduced to any є > 0." ] }
1907.08906
2963652799
In this paper, we consider the colorful @math -center problem, which is a generalization of the well-known @math -center problem. Here, we are given red and blue points in a metric space, and a coverage requirement for each color. The goal is to find the smallest radius @math , such that with @math balls of radius @math , the desired number of points of each color can be covered. We obtain a constant approximation for this problem in the Euclidean plane. We obtain this result by combining a "pseudo-approximation" algorithm that works in any metric space, and an approximation algorithm that works for a special class of instances in the plane. The latter algorithm uses a novel connection to a certain matching problem in graphs.
Facility location with outliers, referred to as Robust Facility Location, is a generalization of the uncapacitated Facility Location problem; various constant factor approximations are known for the latter problem. The Robust Facility Location problem was introduced by @cite_11 , who gave a @math -approximation. The approximation guarantee was later improved to @math by @cite_7 .
{ "cite_N": [ "@cite_7", "@cite_11" ], "mid": [ "2052494364", "2003719999" ], "abstract": [ "In this article, we will formalize the method of dual fitting and the idea of factor-revealing LP. This combination is used to design and analyze two greedy algorithms for the metric uncapacitated facility location problem. Their approximation factors are 1.861 and 1.61, with running times of O(m log m) and O(n3), respectively, where n is the total number of vertices and m is the number of edges in the underlying complete bipartite graph between cities and facilities. The algorithms are used to improve recent results for several variants of the problem.", "Facility location problems are traditionally investigated with the assumption that all the clients are to be provided service. A significant shortcoming of this formulation is that a few very distant clients, called outliers, can exert a disproportionately strong influence over the final solution. In this paper we explore a generalization of various facility location problems (K-center, K-median, uncapacitated facility location etc) to the case when only a specified fraction of the customers are to be served. What makes the problems harder is that we have to also select the subset that should get service. We provide generalizations of various approximation algorithms to deal with this added constraint." ] }
1907.08906
2963652799
In this paper, we consider the colorful @math -center problem, which is a generalization of the well-known @math -center problem. Here, we are given red and blue points in a metric space, and a coverage requirement for each color. The goal is to find the smallest radius @math , such that with @math balls of radius @math , the desired number of points of each color can be covered. We obtain a constant approximation for this problem in the Euclidean plane. We obtain this result by combining a "pseudo-approximation" algorithm that works in any metric space, and an approximation algorithm that works for a special class of instances in the plane. The latter algorithm uses a novel connection to a certain matching problem in graphs.
A colorful version of vertex cover is studied in @cite_16 , and colorful versions of the Set Cover and Facility Location-type problems were considered in @cite_19 . In these problems, the cardinality of the cover (or its weight) shows up in the objective function, unlike in @math -center, where the number of centers (balls) @math is a ``hard restriction''. These problems therefore have a different flavor.
{ "cite_N": [ "@cite_19", "@cite_16" ], "mid": [ "2890196309", "2172398637" ], "abstract": [ "Several algorithms with an approximation guarantee of @math are known for the Set Cover problem, where @math is the number of elements. We study a generalization of the Set Cover problem, called the Partition Set Cover problem. Here, the elements are partitioned into @math , and we are required to cover at least @math elements from each color class @math , using the minimum number of sets. We give a randomized LP-rounding algorithm that is an @math approximation for the Partition Set Cover problem. Here @math denotes the approximation guarantee for a related Set Cover instance obtained by rounding the standard LP. As a corollary, we obtain improved approximation guarantees for various set systems for which @math is known to be sublogarithmic in @math . We also extend the LP rounding algorithm to obtain @math approximations for similar generalizations of the Facility Location type problems. Finally, we show that many of these results are essentially tight, by showing that it is NP-hard to obtain an @math -approximation for any of these problems.", "We consider a natural generalization of the Partial Vertex Cover problem. Here an instance consists of a graph G = (V,E), a cost function c: V → ℤ + , a partition P 1, …, P r of the edge set E, and a parameter k i for each partition P i . The goal is to find a minimum cost set of vertices which cover at least k i edges from the partition P i . We call this the Partition-VC problem. In this paper, we give matching upper and lower bound on the approximability of this problem. Our algorithm is based on a novel LP relaxation for this problem. This LP relaxation is obtained by adding knapsack cover inequalities to a natural LP relaxation of the problem. We show that this LP has integrality gap of O(logr), where r is the number of sets in the partition of the edge set. We also extend our result to more general settings." ] }
1907.08906
2963652799
In this paper, we consider the colorful @math -center problem, which is a generalization of the well-known @math -center problem. Here, we are given red and blue points in a metric space, and a coverage requirement for each color. The goal is to find the smallest radius @math , such that with @math balls of radius @math , the desired number of points of each color can be covered. We obtain a constant approximation for this problem in the Euclidean plane. We obtain this result by combining a "pseudo-approximation" algorithm that works in any metric space, and an approximation algorithm that works for a special class of instances in the plane. The latter algorithm uses a novel connection to a certain matching problem in graphs.
Finally, @math -center and @math -median have been generalized in an orthogonal direction, where there are additional constraints on the centers @cite_20 @cite_0 @cite_3 . Again, the issues studied in these generalizations tend to be quite different from the ones we confront here.
{ "cite_N": [ "@cite_0", "@cite_3", "@cite_20" ], "mid": [ "2105773506", "", "1558598144" ], "abstract": [ "In the classic k-center problem, we are given a metric graph, and the objective is to open k nodes as centers such that the maximum distance from any vertex to its closest center is minimized. In this paper, we consider two important generalizations of k-center, the matroid center problem and the knapsack center problem. Both problems are motivated by recent content distribution network applications. Our contributions can be summarized as follows: 1 We consider the matroid center problem in which the centers are required to form an independent set of a given matroid. We show this problem is NP-hard even on a line. We present a 3-approximation algorithm for the problem on general metrics. We also consider the outlier version of the problem where a given number of vertices can be excluded as the outliers from the solution. We present a 7-approximation for the outlier version. 2 We consider the (multi-)knapsack center problem in which the centers are required to satisfy one (or more) knapsack constraint(s). It is known that the knapsack center problem with a single knapsack constraint admits a 3-approximation. However, when there are at least two knapsack constraints, we show this problem is not approximable at all. To complement the hardness result, we present a polynomial time algorithm that gives a 3-approximate solution such that one knapsack constraint is satisfied and the others may be violated by at most a factor of 1+e. We also obtain a 3-approximation for the outlier version that may violate the knapsack constraint by 1+e.", "", "In a Content Distribution Network application, we have a set of servers and a set of clients to be connected to the servers. Often there are a few server types and a hard budget constraint on the number of deployed servers of each type. The simplest goal here is to deploy a set of servers subject to these budget constraints in order to minimize the sum of client connection costs. These connection costs often satisfy metricity, since they are typically proportional to the distance between a client and a server within a single autonomous system. A special case of the problem where there is only one server type is the well-studied k-median problem. In this paper, we consider the problem with two server types and call it the budgeted red-blue median problem. We show, somewhat surprisingly, that running a single-swap local search for each server type simultaneously, yields a constant factor approximation for this case. Its analysis is however quite non-trivial compared to that of the k-median problem (, 2004; Gupta and Tangwongsan, 2008). Later we show that the same algorithm yields a constant approximation for the prize-collecting version of the budgeted red-blue median problem where each client can potentially be served with an alternative cost via a different vendor. In the process, we also improve the approximation factor for the prize-collecting k-median problem from 4 (, 2001) to 3+e, which matches the current best approximation factor for the k-median problem." ] }
1907.08736
2962812134
Most of privacy protection studies for textual data focus on removing explicit sensitive identifiers. However, personal writing style, as a strong indicator of the authorship, is often neglected. Recent studies on writing style anonymization can only output numeric vectors which are difficult for the recipients to interpret. We propose a novel text generation model with the exponential mechanism for authorship anonymization. By augmenting the semantic information through a REINFORCE training reward function, the model can generate differentially-private text that has a close semantic and similar grammatical structure to the original text while removing personal traits of the writing style. It does not assume any conditioned labels or paralleled text data for training. We evaluate the performance of the proposed model on the real-life peer reviews dataset and the Yelp review dataset. The result suggests that our model outperforms the state-of-the-art on semantic preservation, authorship obfuscation, and stylometric transformation.
Recently, differential privacy has received considerable attention in the machine learning community. The differentially-private deep learning model @cite_38 and the deep private auto-encoder @cite_26 are designed to preserve the privacy of the training data: their purpose is to guarantee that publishing the trained model does not reveal information about individual training records. Our purpose is different. We publish the differentially-private data generated by the model, rather than the model itself. Most existing models for differentially-private data release, such as @cite_13 @cite_18, focus on data types other than text. One recent work @cite_36 aims to protect privacy in text data using the exponential mechanism; however, it releases term frequency vectors instead of readable text, which limits the utility of the published data to applications that use term frequencies as features. In contrast, our goal is to generate differentially-private text in natural language without compromising individual privacy.
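The exponential mechanism referred to above is a standard differential-privacy primitive: given a utility function u with sensitivity Delta_u, a candidate r is sampled with probability proportional to exp(epsilon * u(r) / (2 * Delta_u)). The following is only a minimal illustrative sketch of that selection rule in Python, not the implementation of any cited work; the toy candidate set and the similarity-based utility are assumed for the example.

```python
import math
import random


def exponential_mechanism(candidates, utility, epsilon, sensitivity):
    """Sample one candidate with probability proportional to
    exp(epsilon * utility / (2 * sensitivity)) -- the exponential
    mechanism of McSherry and Talwar."""
    scores = [utility(c) for c in candidates]
    # Shift by the max score for numerical stability; this does not
    # change the normalized sampling probabilities.
    max_score = max(scores)
    weights = [math.exp(epsilon * (s - max_score) / (2.0 * sensitivity))
               for s in scores]
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for candidate, weight in zip(candidates, weights):
        acc += weight
        if r <= acc:
            return candidate
    return candidates[-1]


# Toy usage: privately pick a replacement word, scoring each candidate by a
# hypothetical semantic-similarity value to the original word.
if __name__ == "__main__":
    words = ["good", "great", "fine", "bad"]
    similarity = {"good": 1.0, "great": 0.9, "fine": 0.7, "bad": 0.1}
    print(exponential_mechanism(words,
                                utility=lambda w: similarity[w],
                                epsilon=1.0,
                                sensitivity=1.0))
```

Applying such a private selection repeatedly, e.g., per word or per sentence, is one way text-oriented approaches spend their privacy budget, with the overall epsilon accumulating under composition.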
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_26", "@cite_36", "@cite_13" ], "mid": [ "2473418344", "2131621068", "2520442116", "2798768357", "2085472312" ], "abstract": [ "Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.", "Differential privacy has gained a lot of attention in recent years as a general model for the protection of personal information when used and disclosed for secondary purposes. It has also been proposed as an appropriate model for health data. In this paper we review the current literature on differential privacy and highlight important general limitations to the model and the proposed mechanisms. We then examine some practical challenges to the application of differential privacy to health data. The review concludes by identifying areas that researchers and practitioners in this area need to address to increase the adoption of differential privacy for health data.", "In recent years, deep learning has spread beyond both academia and industry with many exciting real-world applications. The development of deep learning has presented obvious privacy issues. However, there has been lack of scientific study about privacy preservation in deep learning. In this paper, we concentrate on the auto-encoder, a fundamental component in deep learning, and propose the deep private auto-encoder (dPA). Our main idea is to enforce e-differential privacy by perturbing the objective functions of the traditional deep auto-encoder, rather than its results. We apply the dPA to human behavior prediction in a health social network. Theoretical analysis and thorough experimental evaluations show that the dPA is highly effective and efficient, and it significantly outperforms existing solutions.", "Text mining and information retrieval techniques have been developed to assist us with analyzing, organizing and retrieving documents with the help of computers. In many cases, it is desirable that the authors of such documents remain anonymous: Search logs can reveal sensitive details about a user, critical articles or messages about a company or government might have severe or fatal consequences for a critic, and negative feedback in customer surveys might negatively impact business relations if they are identified. Simply removing personally identifying information from a document is, however, insufficient to protect the writer's identity: Given some reference texts of suspect authors, so-called authorship attribution methods can reidentfy the author from the text itself. One of the most prominent models to represent documents in many common text mining and information retrieval tasks is the vector space model where each document is represented as a vector, typically containing its term frequencies or related quantities. 
We therefore propose an automated text anonymization approach that produces synthetic term frequency vectors for the input documents that can be used in lieu of the original vectors. We evaluate our method on an exemplary text classification task and demonstrate that it only has a low impact on its accuracy. In contrast, we show that our method strongly affects authorship attribution techniques to the level that they become infeasible with a much stronger decline in accuracy. Other than previous authorship obfuscation methods, our approach is the first that fulfills differential privacy and hence comes with a provable plausible deniability guarantee.", "With the increasing prevalence of information networks, research on privacy-preserving network data publishing has received substantial attention recently. There are two streams of relevant research, targeting different privacy requirements. A large body of existing works focus on preventing node re-identification against adversaries with structural background knowledge, while some other studies aim to thwart edge disclosure. In general, the line of research on preventing edge disclosure is less fruitful, largely due to lack of a formal privacy model. The recent emergence of differential privacy has shown great promise for rigorous prevention of edge disclosure. Yet recent research indicates that differential privacy is vulnerable to data correlation, which hinders its application to network data that may be inherently correlated. In this paper, we show that differential privacy could be tuned to provide provable privacy guarantees even in the correlated setting by introducing an extra parameter, which measures the extent of correlation. We subsequently provide a holistic solution for non-interactive network data publication. First, we generate a private vertex labeling for a given network dataset to make the corresponding adjacency matrix form dense clusters. Next, we adaptively identify dense regions of the adjacency matrix by a data-dependent partitioning process. Finally, we reconstruct a noisy adjacency matrix by a novel use of the exponential mechanism. To our best knowledge, this is the first work providing a practical solution for publishing real-life network data via differential privacy. Extensive experiments demonstrate that our approach performs well on different types of real-life network datasets." ] }
1907.08736
2962812134
Most privacy protection studies for textual data focus on removing explicit sensitive identifiers. However, personal writing style, a strong indicator of authorship, is often neglected. Recent studies on writing style anonymization can only output numeric vectors, which are difficult for recipients to interpret. We propose a novel text generation model with the exponential mechanism for authorship anonymization. By augmenting the semantic information through a REINFORCE training reward function, the model can generate differentially-private text that has close semantics and a similar grammatical structure to the original text while removing personal traits of the writing style. It does not assume any conditioning labels or parallel text data for training. We evaluate the performance of the proposed model on the real-life peer review dataset and the Yelp review dataset. The results suggest that our model outperforms the state of the art on semantic preservation, authorship obfuscation, and stylometric transformation.
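The abstract above describes augmenting semantic information through a REINFORCE training reward. Purely as an illustration, and not as the cited paper's actual objective, the sketch below shows a single policy-gradient step in which a sampled rewrite is rewarded by its embedding similarity to the source sentence; the generator and encoder interfaces are hypothetical and assumed only for this example.

```python
import torch
import torch.nn.functional as F


def reinforce_step(generator, encoder, src_tokens, optimizer, baseline=0.0):
    """One REINFORCE update rewarding semantic similarity between the source
    sentence and a sampled rewrite.

    Assumed (hypothetical) interfaces:
      generator.sample(src_tokens) -> (sampled_ids, log_probs), where
          log_probs is a 1-D tensor of per-token log-probabilities with grad.
      encoder.embed(token_ids) -> 1-D sentence-embedding tensor.
    """
    sampled_ids, log_probs = generator.sample(src_tokens)

    # Sentence-level reward: cosine similarity of the two sentence embeddings.
    # Computed without gradients -- the reward only scales the policy gradient.
    with torch.no_grad():
        src_vec = encoder.embed(src_tokens)
        out_vec = encoder.embed(sampled_ids)
        reward = F.cosine_similarity(src_vec.unsqueeze(0),
                                     out_vec.unsqueeze(0)).item()

    # REINFORCE loss: negative log-likelihood of the sampled sequence,
    # weighted by the baseline-adjusted reward.
    loss = -(reward - baseline) * log_probs.sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```

Because the reward is computed on the complete sampled sequence, it can incorporate non-differentiable, sentence-level signals such as semantic similarity, which is the usual motivation for a REINFORCE-style objective in this setting.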
Text generation is a trending topic in machine learning; it aims to generate text samples with altered attributes. propose a model to change the degree of politeness while generating text. combine the with to generate a sentence with different sentiment and tense. A4NT @cite_4 is able to control the gender and age attributes of the generated text through a GAN model. Most of the literature in this direction refers to such attributes, e.g., sentiment and tense, as style. However, these attributes relate more to the content itself than to the personal writing style. Our focus is different.
{ "cite_N": [ "@cite_4" ], "mid": [ "2767368215" ], "abstract": [ "Text-based analysis methods allow to reveal privacy relevant author attributes such as gender, age and identify of the text's author. Such methods can compromise the privacy of an anonymous author even when the author tries to remove privacy sensitive content. In this paper, we propose an automatic method, called Adversarial Author Attribute Anonymity Neural Translation ( @math ), to combat such text-based adversaries. We combine sequence-to-sequence language models used in machine translation and generative adversarial networks to obfuscate author attributes. Unlike machine translation techniques which need paired data, our method can be trained on unpaired corpora of text containing different authors. Importantly, we propose and evaluate techniques to impose constraints on our @math to preserve the semantics of the input text. @math learns to make minimal changes to the input text to successfully fool author attribute classifiers, while aiming to maintain the meaning of the input. We show through experiments on two different datasets and three settings that our proposed method is effective in fooling the author attribute classifiers and thereby improving the anonymity of authors." ] }