Dataset columns:
aid: string (length 9 to 15)
mid: string (length 7 to 10)
abstract: string (length 78 to 2.56k)
related_work: string (length 92 to 1.77k)
ref_abstract: dict
1907.08302
2964151964
With the demand to process ever-growing data volumes, a variety of new data stream processing frameworks have been developed. Moving an implementation from one such system to another, e.g., for performance reasons, requires adapting existing applications to new interfaces. Apache Beam addresses these high substitution costs by providing an abstraction layer that enables executing programs on any of the supported streaming frameworks. In this paper, we present a novel benchmark architecture for comparing the performance impact of using Apache Beam on three streaming frameworks: Apache Spark Streaming, Apache Flink, and Apache Apex. We find significant performance penalties when using Apache Beam for application development in the surveyed systems. Overall, usage of Apache Beam for the examined streaming applications caused a high variance of query execution times with a slowdown of up to a factor of 58 compared to queries developed without the abstraction layer. All developed benchmark artifacts are publicly available to ensure reproducible results.
The aforementioned data sender emits data to the DSPS, most of which consists of car position reports. Depending on the overall situation on the expressways, a car position report may or may not require the DSPS to produce an output. Besides car position reports, the remaining input data represent explicit queries that always require an answer. Linear Road defines four distinct queries; the last query presented in @cite_42 was skipped in the two presented implementations for complexity reasons.
{ "cite_N": [ "@cite_42" ], "mid": [ "2112215401" ], "abstract": [ "This paper specifies the Linear Road Benchmark for Stream Data Management Systems (SDMS). Stream Data Management Systems process streaming data by executing continuous and historical queries while producing query results in real-time. This benchmark makes it possible to compare the performance characteristics of SDMS' relative to each other and to alternative (e.g., Relational Database) systems. Linear Road has been endorsed as an SDMS benchmark by the developers of both the Aurora [1] (out of Brandeis University, Brown University and MIT) and STREAM [8] (out of Stanford University) stream systems. Linear Road simulates a toll system for the motor vehicle expressways of a large metropolitan area. The tolling system uses \"variable tolling\" [6, 11, 9]: an increasingly prevalent tolling technique that uses such dynamic factors as traffic congestion and accident proximity to calculate toll charges. Linear Road specifies a variable tolling system for a fictional urban area including such features as accident detection and alerts, traffic congestion measurements, toll calculations and historical queries. After specifying the benchmark, we describe experimental results involving two implementations: one using a commercially available Relational Database and the other using Aurora. Our results show that a dedicated Stream Data Management System can outperform a Relational Database by at least a factor of 5 on streaming data applications." ] }
1907.08302
2964151964
With the demand to process ever-growing data volumes, a variety of new data stream processing frameworks have been developed. Moving an implementation from one such system to another, e.g., for performance reasons, requires adapting existing applications to new interfaces. Apache Beam addresses these high substitution costs by providing an abstraction layer that enables executing programs on any of the supported streaming frameworks. In this paper, we present a novel benchmark architecture for comparing the performance impact of using Apache Beam on three streaming frameworks: Apache Spark Streaming, Apache Flink, and Apache Apex. We find significant performance penalties when using Apache Beam for application development in the surveyed systems. Overall, usage of Apache Beam for the examined streaming applications caused a high variance of query execution times with a slowdown of up to a factor of 58 compared to queries developed without the abstraction layer. All developed benchmark artifacts are publicly available to ensure reproducible results.
The benchmark result for a system is summarized as a so-called L-rating. This metric, defined by Linear Road, expresses how many expressways the system can handle while meeting the defined response time requirements for each query. A higher number of expressways corresponds to a higher data input rate for the SUT, and the number of expressways can be configured when generating data. The Linear Road benchmark was applied to the DSPS Aurora @cite_24 and to a commercial relational database; the results are presented in the paper.
{ "cite_N": [ "@cite_24" ], "mid": [ "2149576945" ], "abstract": [ "Abstract.This paper describes the basic processing model and architecture of Aurora, a new system to manage data streams for monitoring applications. Monitoring applications differ substantially from conventional business data processing. The fact that a software system must process and react to continual inputs from many sources (e.g., sensors) rather than from human operators requires one to rethink the fundamental architecture of a DBMS for this application area. In this paper, we present Aurora, a new DBMS currently under construction at Brandeis University, Brown University, and M.I.T. We first provide an overview of the basic Aurora model and architecture and then describe in detail a stream-oriented set of operators." ] }
1907.08302
2964151964
With the demand to process ever-growing data volumes, a variety of new data stream processing frameworks have been developed. Moving an implementation from one such system to another, e.g., for performance reasons, requires adapting existing applications to new interfaces. Apache Beam addresses these high substitution costs by providing an abstraction layer that enables executing programs on any of the supported streaming frameworks. In this paper, we present a novel benchmark architecture for comparing the performance impact of using Apache Beam on three streaming frameworks: Apache Spark Streaming, Apache Flink, and Apache Apex. We find significant performance penalties when using Apache Beam for application development in the surveyed systems. Overall, usage of Apache Beam for the examined streaming applications caused a high variance of query execution times with a slowdown of up to a factor of 58 compared to queries developed without the abstraction layer. All developed benchmark artifacts are publicly available to ensure reproducible results.
The work presented in @cite_11 compares Apache Flink and Apache Spark. The conducted measurements include several queries, among them a grep query. One analyzed focus area is the scaling behavior with respect to different numbers of nodes in the cluster. However, studying both systems from a data stream processing point of view is out of scope for the performed measurements.
{ "cite_N": [ "@cite_11" ], "mid": [ "2498111289" ], "abstract": [ "Big Data analytics has recently gained increasing popularity as a tool to process large amounts of data on-demand. Spark and Flink are two Apache-hosted data analytics frameworks that facilitate the development of multi-step data pipelines using directly acyclic graph patterns. Making the most out of these frameworks is challenging because efficient executions strongly rely on complex parameter configurations and on an in-depth understanding of the underlyingarchitectural choices. Although extensive research has been devoted to improving and evaluating the performance of such analytics frameworks, most of them benchmarkthe platforms against Hadoop, as a baseline, a rather unfair comparison consideringthe fundamentally different design principles. This paper aims to bring some justice in this respect, by directly evaluating the performance of Sparkand Flink. Our goal is to identify and explain the impact of the different architecturalchoices and the parameter configurations on the perceived end-to-end performance. To this end, we develop a methodology for correlating the parameter settings and the operators execution plan with the resource usage. We use this methodologyto dissect the performance of Spark and Flink with several representative batchand iterative workloads on up to 100 nodes. Our key finding is that there none of the two framework outperforms the other for all data types, sizes and job patterns. This paper performs a fine characterization of the cases when each framework is superior, and we highlight how this performance correlates to operators, to resource usage and to the specifics of the internal framework design." ] }
1907.08302
2964151964
With the demand to process ever-growing data volumes, a variety of new data stream processing frameworks have been developed. Moving an implementation from one such system to another, e.g., for performance reasons, requires adapting existing applications to new interfaces. Apache Beam addresses these high substitution costs by providing an abstraction layer that enables executing programs on any of the supported streaming frameworks. In this paper, we present a novel benchmark architecture for comparing the performance impact of using Apache Beam on three streaming frameworks: Apache Spark Streaming, Apache Flink, and Apache Apex. We find significant performance penalties when using Apache Beam for application development in the surveyed systems. Overall, usage of Apache Beam for the examined streaming applications caused a high variance of query execution times with a slowdown of up to a factor of 58 compared to queries developed without the abstraction layer. All developed benchmark artifacts are publicly available to ensure reproducible results.
@cite_20 compare Apache Storm, Apache Flink, and Apache Spark Streaming in their paper. Besides describing the architecture of these three systems, they study performance in a network traffic analysis scenario. Additionally, the behavior in case of a node failure is investigated.
{ "cite_N": [ "@cite_20" ], "mid": [ "2586025740" ], "abstract": [ "Distributed stream processing platforms is a new class of real-time monitoring systems that analyze and extracts knowledge from large continuous streams of data. This type of systems is crucial for providing high throughput and low latency required by Big Data or Internet of Things monitoring applications. This paper describes and analyzes three main open-source distributed stream- processing platforms: Storm Flink, and Spark Streaming. We analyze the system architectures and we compare their main features. We carry out two experiments concerning anomaly detection on network traffic to evaluate the throughput efficiency and the resilience to node failures. Results show that the performance of native stream processing systems, Storm and Flink, is up to 15 times higher than the micro-batch processing system, Spark Streaming. On the other hand, Spark Streaming is more robust to node failures and provides recovery without losses." ] }
1902.00126
2915013032
We propose a stochastic gradient framework for solving stochastic composite convex optimization problems with a (possibly) infinite number of linear inclusion constraints that need to be satisfied almost surely. We use smoothing and homotopy techniques to handle constraints without the need for matrix-valued projections. We show for our stochastic gradient algorithm a @math convergence rate for general convex objectives and a @math convergence rate for restricted strongly convex objectives. These rates are known to be optimal up to logarithmic factors, even without constraints. We demonstrate the performance of our algorithm with numerical experiments on basis pursuit, hard margin support vector machines, and portfolio optimization, and show that our algorithm achieves state-of-the-art practical performance.
The most prominent approach for stochastic optimization problems is stochastic gradient descent (SGD) @cite_17 @cite_13 @cite_5 . Even though SGD is very well studied, it only applies when there are no constraints in the problem template. For the case of simple constraints @math @math and in the absence of almost sure constraints, projected SGD can be used @cite_17 . However, it requires @math to be a projectable set, which does not hold for the general template, where the almost sure constraints enter the definition of @math . In the case where @math is a nonsmooth proximable function, @cite_22 studied the convergence of the stochastic proximal gradient (SPG) method, which utilizes stochastic gradients of @math in addition to the proximal operator of @math . This method generalizes projected SGD; however, it cannot handle the infinitely many constraints that we consider, since it is not possible to project onto their intersection in general. (A minimal sketch contrasting projected SGD with SPG follows this row's reference abstracts.)
{ "cite_N": [ "@cite_5", "@cite_13", "@cite_22", "@cite_17" ], "mid": [ "2086161653", "2156779765", "1658113598", "1992208280" ], "abstract": [ "A new recursive algorithm of stochastic approximation type with the averaging of trajectories is investigated. Convergence with probability one is proved for a variety of classical optimization and identification problems. It is also demonstrated for these problems that the proposed algorithm achieves the highest possible rate of convergence.", "We consider the minimization of a convex objective function defined on a Hilbert space, which is only available through unbiased estimates of its gradients. This problem includes standard machine learning algorithms such as kernel logistic regression and least-squares regression, and is commonly referred to as a stochastic approximation problem in the operations research community. We provide a non-asymptotic analysis of the convergence of two well-known algorithms, stochastic gradient descent (a.k.a. Robbins-Monro algorithm) as well as a simple modification where iterates are averaged (a.k.a. Polyak-Ruppert averaging). Our analysis suggests that a learning rate proportional to the inverse of the number of iterations, while leading to the optimal convergence rate in the strongly convex case, is not robust to the lack of strong convexity or the setting of the proportionality constant. This situation is remedied when using slower decays together with averaging, robustly leading to the optimal rate of convergence. We illustrate our theoretical results with simulations on synthetic and standard datasets.", "We prove novel convergence results for a stochastic proximal gradient algorithm suitable for solving a large class of convex optimization problems, where a convex objective function is given by the sum of a smooth and a possibly non-smooth component. We consider the iterates convergence and derive @math non asymptotic bounds in expectation in the strongly convex case, as well as almost sure convergence results under weaker assumptions. Our approach allows to avoid averaging and weaken boundedness assumptions which are often considered in theoretical studies and might not be satisfied in practice.", "In this paper we consider optimization problems where the objective function is given in a form of the expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely, the stochastic approximation (SA) and the sample average approximation (SAA) methods. Both approaches, the SA and SAA methods, have a long history. Current opinion is that the SAA method can efficiently use a specific (say, linear) structure of the considered problem, while the SA approach is a crude subgradient method, which often performs poorly in practice. We intend to demonstrate that a properly modified SA approach can be competitive and even significantly outperform the SAA method for a certain class of convex stochastic problems. We extend the analysis to the case of convex-concave stochastic saddle point problems and present (in our opinion highly encouraging) results of numerical experiments." ] }
1902.00126
2915013032
We propose a stochastic gradient framework for solving stochastic composite convex optimization problems with a (possibly) infinite number of linear inclusion constraints that need to be satisfied almost surely. We use smoothing and homotopy techniques to handle constraints without the need for matrix-valued projections. We show for our stochastic gradient algorithm a @math convergence rate for general convex objectives and a @math convergence rate for restricted strongly convex objectives. These rates are known to be optimal up to logarithmic factors, even without constraints. We demonstrate the performance of our algorithm with numerical experiments on basis pursuit, hard margin support vector machines, and portfolio optimization, and show that our algorithm achieves state-of-the-art practical performance.
A line of work known as alternating projections focuses on applying random projections to solve problems involving the intersection of an infinite number of sets. In particular, these methods address the following template. Here, the feasible set @math consists of the intersection of a possibly infinite number of convex sets. The case when @math , which corresponds to the convex feasibility problem, is studied in @cite_19 . For this particular setting, the authors combine the smoothing technique with minibatch SGD, leading to a stochastic alternating projection algorithm with linear convergence. (A toy randomized-projection sketch follows this row's reference abstracts.)
{ "cite_N": [ "@cite_19" ], "mid": [ "2783403261" ], "abstract": [ "Finding a point in the intersection of a collection of closed convex sets, that is the convex feasibility problem, represents the main modeling strategy for many computational problems. In this paper we analyze new stochastic reformulations of the convex feasibility problem in order to facilitate the development of new algorithmic schemes. We also analyze the conditioning problem parameters using certain (linear) regularity assumptions on the individual convex sets. Then, we introduce a general random projection algorithmic framework, which extends to the random settings many existing projection schemes, designed for the general convex feasibility problem. Our general random projection algorithm allows to project simultaneously on several sets, thus providing great flexibility in matching the implementation of the algorithm on the parallel architecture at hand. Based on the conditioning parameters, besides the asymptotic convergence results, we also derive explicit sublinear and linear convergence rates for this general algorithmic framework." ] }
1902.00126
2915013032
We propose a stochastic gradient framework for solving stochastic composite convex optimization problems with a (possibly) infinite number of linear inclusion constraints that need to be satisfied almost surely. We use smoothing and homotopy techniques to handle constraints without the need for matrix-valued projections. We show for our stochastic gradient algorithm a @math convergence rate for general convex objectives and a @math convergence rate for restricted strongly convex objectives. These rates are known to be optimal up to logarithmic factors, even without constraints. We demonstrate the performance of our algorithm with numerical experiments on basis pursuit, hard margin support vector machines, and portfolio optimization, and show that our algorithm achieves state-of-the-art practical performance.
Stochastic forward-backward algorithms can also be applied to solve the problem considered here. However, the papers introducing those very general algorithms focused on proving convergence and did not present convergence rates @cite_0 @cite_29 @cite_20 . There are other works @cite_2 @cite_10 @cite_25 in which the authors assume that the number of constraints is finite, which is more restrictive than our setting.
{ "cite_N": [ "@cite_29", "@cite_0", "@cite_2", "@cite_10", "@cite_25", "@cite_20" ], "mid": [ "2727154576", "2345322282", "2197814125", "2160374150", "2749843017", "2903572344" ], "abstract": [ "A stochastic Forward-Backward algorithm with a constant step is studied. At each time step, this algorithm involves an independent copy of a couple of random maximal monotone operators. Defining a mean operator as a selection integral, the differential inclusion built from the sum of the two mean operators is considered. As a first result, it is shown that the interpolated process obtained from the iterates converges narrowly in the small step regime to the solution of this differential inclusion. In order to control the long term behavior of the iterates, a stability result is needed in addition. To this end, the sequence of the iterates is seen as a homogeneous Feller Markov chain whose transition kernel is parameterized by the algorithm step size. The cluster points of the Markov chains invariant measures in the small step regime are invariant for the semiflow induced by the differential inclusion. Conclusions regarding the long run behavior of the iterates for small steps are drawn. It is shown that when the sum of the mean operators is demipositive, the probabilities that the iterates are away from the set of zeros of this sum are small in Ces aro mean. The ergodic behavior of these iterates is studied as well. Applications of the proposed algorithm are considered. In particular, a detailed analysis of the random proximal gradient algorithm with constant step is performed.", "Maximal monotone operators are set-valued mappings which extend (but are not limited to) the notion of subdifferential of a convex function. The proximal point algorithm is a method for finding a zero of a maximal monotone operator. The algorithm consists in fixed point iterations of a mapping called the resolvent which depends on the maximal monotone operator of interest. The paper investigates a stochastic version of the algorithm where the resolvent used at iteration k is associated to one realization of a random maximal monotone operator. We establish the almost sure ergodic convergence of the iterates to a zero of the expectation (in the Aumann sense) of the latter random operator. Application to constrained stochastic optimization is considered.", "Consider convex optimization problems subject to a large number of constraints. We focus on stochastic problems in which the objective takes the form of expected values and the feasible set is the intersection of a large number of convex sets. We propose a class of algorithms that perform both stochastic gradient descent and random feasibility updates simultaneously. At every iteration, the algorithms sample a number of projection points onto a randomly selected small subsets of all constraints. Three feasibility update schemes are considered: averaging over random projected points, projecting onto the most distant sample, projecting onto a special polyhedral set constructed based on sample points. We prove the almost sure convergence of these algorithms, and analyze the iterates' feasibility error and optimality error, respectively. We provide new convergence rate benchmarks for stochastic first-order optimization with many constraints. 
The rate analysis and numerical experiments reveal that the algorithm using the polyhedral-set projection scheme is the most efficient one within known algorithms.", "In this paper, we are interested in the development of efficient algorithms for convex optimization problems in the simultaneous presence of multiple objectives and stochasticity in the first-order information. We cast the stochastic multiple objective optimization problem into a constrained optimization problem by choosing one function as the objective and try to bound other objectives by appropriate thresholds. We first examine a two stages exploration-exploitation based algorithm which first approximates the stochastic objectives by sampling and then solves a constrained stochastic optimization problem by projected gradient method. This method attains a suboptimal convergence rate even under strong assumption on the objectives. Our second approach is an efficient primal-dual stochastic algorithm. It leverages on the theory of Lagrangian method in constrained optimization and attains the optimal convergence rate of O(1/√T) in high probability for general Lipschitz continuous objectives.", "This paper considers online convex optimization (OCO) with stochastic constraints, which generalizes Zinkevich's OCO over a known simple fixed set by introducing multiple stochastic functional constraints that are i.i.d. generated at each round and are disclosed to the decision maker only after the decision is made. This formulation arises naturally when decisions are restricted by stochastic environments or deterministic environments with noisy observations. It also includes many important problems as special cases, such as OCO with long term constraints, stochastic constrained convex optimization, and deterministic constrained convex optimization. To solve this problem, this paper proposes a new algorithm that achieves @math expected regret and constraint violations and @math high probability regret and constraint violations. Experiments on a real-world data center scheduling problem further verify the performance of the new algorithm.", "This thesis is essentially devoted to the study of optimization algorithms. The programming problems arising in machine learning or signal processing are in many cases composite, that is, they are constrained or regularized by non-smooth terms. Proximal methods are a very efficient class of algorithms for solving such problems. However, in modern data science applications, the functions to be minimized are often expressed as a mathematical expectation, which is difficult or impossible to evaluate. This is the case in online learning problems, in problems involving a large number of data points, or in distributed computing problems. To address these, this thesis studies stochastic proximal methods, which adapt proximal algorithms to the case of functions written as an expectation. Stochastic proximal methods are first studied with a constant step size, using stochastic approximation techniques. More precisely, the Ordinary Differential Equation method is adapted to the case of differential inclusions. In order to establish the asymptotic behavior of the algorithms, the stability of the sequences of iterates (viewed as Markov chains) is studied. Then, generalizations of the stochastic proximal gradient algorithm with decreasing step sizes are developed to solve composite problems. All the quantities used to describe the problems to be solved are expressed as expectations. This includes a primal-dual algorithm for regularized and linearly constrained problems as well as an optimization algorithm over large graphs." ] }
1902.00126
2915013032
We propose a stochastic gradient framework for solving stochastic composite convex optimization problems with a (possibly) infinite number of linear inclusion constraints that need to be satisfied almost surely. We use smoothing and homotopy techniques to handle constraints without the need for matrix-valued projections. We show for our stochastic gradient algorithm a @math convergence rate for general convex objectives and a @math convergence rate for restricted strongly convex objectives. These rates are known to be optimal up to logarithmic factors, even without constraints. We demonstrate the performance of our algorithm with numerical experiments on basis pursuit, hard margin support vector machines, and portfolio optimization, and show that our algorithm achieves state-of-the-art practical performance.
Another related work is @cite_15 , where the authors apply Nesterov's smoothing. However, that work does not apply to our setting, due to the Lipschitz continuity assumption on @math . Note that in our main template @math , which is not Lipschitz continuous. (A standard statement of Nesterov's smoothing is recalled after this row's reference abstracts.)
{ "cite_N": [ "@cite_15" ], "mid": [ "1807994917" ], "abstract": [ "In this work we consider the stochastic minimization of nonsmooth convex loss functions, a central problem in machine learning. We propose a novel algorithm called Accelerated Nonsmooth Stochastic Gradient Descent (ANSGD), which exploits the structure of common nonsmooth loss functions to achieve optimal convergence rates for a class of problems including SVMs. It is the first stochastic algorithm that can achieve the optimal O(1 t) rate for minimizing nonsmooth loss functions (with strong convexity). The fast rates are confirmed by empirical comparisons, in which ANSGD significantly outperforms previous subgradient descent algorithms including SGD." ] }
1902.00297
2914814133
Higher inductive-inductive types (HIITs) generalize inductive types of dependent type theories in two ways. On the one hand they allow the simultaneous definition of multiple sorts that can be indexed over each other. On the other hand they support equality constructors, thus generalizing higher inductive types of homotopy type theory. Examples that make use of both features are the Cauchy real numbers and the well-typed syntax of type theory where conversion rules are given as equality constructors. In this paper we propose a general definition of HIITs using a small type theory, named the theory of signatures. A context in this theory encodes a HIIT by listing the constructors. We also compute notions of induction and recursion for HIITs, by using variants of syntactic logical relation translations. Building full categorical semantics and constructing initial algebras is left for future work. The theory of HIIT signatures was formalised in Agda together with the syntactic translations. We also provide a Haskell implementation, which takes signatures as input and outputs translation results as valid Agda code.
The article of @cite_12 gives a specification and semantics of QIITs in a set-truncated setting. Signatures are given as lists of functors that can be interpreted as complete categories of algebras, and completeness is used to talk about notions of induction and recursion. However, no strict positivity restriction is given, nor is a construction of initial algebras.
{ "cite_N": [ "@cite_12" ], "mid": [ "2963983532" ], "abstract": [ "Higher inductive types (HITs) in Homotopy Type Theory allow the definition of datatypes which have constructors for equalities over the defined type. HITs generalise quotient types, and allow to define types with non-trivial higher equality types, such as spheres, suspensions and the torus. However, there are also interesting uses of HITs to define types satisfying uniqueness of equality proofs, such as the Cauchy reals, the partiality monad, and the well-typed syntax of type theory. In each of these examples we define several types that depend on each other mutually, i.e. they are inductive-inductive definitions. We call those HITs quotient inductive-inductive types (QIITs). Although there has been recent progress on a general theory of HITs, there is not yet a theoretical foundation for the combination of equality constructors and induction-induction, despite many interesting applications. In the present paper we present a first step towards a semantic definition of QIITs. In particular, we give an initial-algebra semantics. We further derive a section induction principle, stating that every algebra morphism into the algebra in question has a section, which is close to the intuitively expected elimination rules." ] }
1902.00297
2914814133
Higher inductive-inductive types (HIITs) generalize inductive types of dependent type theories in two ways. On the one hand they allow the simultaneous definition of multiple sorts that can be indexed over each other. On the other hand they support equality constructors, thus generalizing higher inductive types of homotopy type theory. Examples that make use of both features are the Cauchy real numbers and the well-typed syntax of type theory where conversion rules are given as equality constructors. In this paper we propose a general definition of HIITs using a small type theory, named the theory of signatures. A context in this theory encodes a HIIT by listing the constructors. We also compute notions of induction and recursion for HIITs, by using variants of syntactic logical relation translations. Building full categorical semantics and constructing initial algebras is left for future work. The theory of HIIT signatures was formalised in Agda together with the syntactic translations. We also provide a Haskell implementation, which takes signatures as input and outputs translation results as valid Agda code.
Closely related to the current work is the paper by the current authors and Altenkirch @cite_1 , which also concerns QIITs. There, signatures for QIITs are essentially a restriction of the signatures given here, but in contrast to the current work, the restricted quotient setting enables building initial algebras and detailed categorical semantics.
{ "cite_N": [ "@cite_1" ], "mid": [ "2900009437" ], "abstract": [ "Quotient inductive-inductive types (QIITs) generalise inductive types in two ways: a QIIT can have more than one sort and the later sorts can be indexed over the previous ones. In addition, equality constructors are also allowed. We work in a setting with uniqueness of identity proofs, hence we use the term QIIT instead of higher inductive-inductive type. An example of a QIIT is the well-typed (intrinsic) syntax of type theory quotiented by conversion. In this paper first we specify finitary QIITs using a domain-specific type theory which we call the theory of signatures. The syntax of the theory of signatures is given by a QIIT as well. Then, using this syntax we show that all specified QIITs exist and they have a dependent elimination principle. We also show that algebras of a signature form a category with families (CwF) and use the internal language of this CwF to show that dependent elimination is equivalent to initiality." ] }
1902.00297
2914814133
Higher inductive-inductive types (HIITs) generalize inductive types of dependent type theories in two ways. On the one hand they allow the simultaneous definition of multiple sorts that can be indexed over each other. On the other hand they support equality constructors, thus generalizing higher inductive types of homotopy type theory. Examples that make use of both features are the Cauchy real numbers and the well-typed syntax of type theory where conversion rules are given as equality constructors. In this paper we propose a general definition of HIITs using a small type theory, named the theory of signatures. A context in this theory encodes a HIIT by listing the constructors. We also compute notions of induction and recursion for HIITs, by using variants of syntactic logical relation translations. Building full categorical semantics and constructing initial algebras is left for future work. The theory of HIIT signatures was formalised in Agda together with the syntactic translations. We also provide a Haskell implementation, which takes signatures as input and outputs translation results as valid Agda code.
The logical predicate syntactic translation was introduced by @cite_13 . The idea that a context can be seen as a signature and that the logical predicate translation can be used to derive the types of induction motives and methods was described in [Section 5.3] ttintt . Logical relations are used to derive the computation rules in [Section 4.3] kaposi-phd , but only for closed QIITs. Syntactic translations in the context of the calculus of inductive constructions are discussed in @cite_16 . Logical relations and parametricity can also be used to justify the existence of inductive types in a type theory with an impredicative universe @cite_38 . (A hedged sketch of the unary logical predicate translation is given after this row's reference abstracts.)
{ "cite_N": [ "@cite_38", "@cite_16", "@cite_13" ], "mid": [ "2042204873", "2565502105", "2019626268" ], "abstract": [ "Reynolds' theory of relational parametricity captures the invariance of polymorphically typed programs under change of data representation. Reynolds' original work exploited the typing discipline of the polymorphically typed lambda-calculus System F, but there is now considerable interest in extending relational parametricity to type systems that are richer and more expressive than that of System F. This paper constructs parametric models of predicative and impredicative dependent type theory. The significance of our models is twofold. Firstly, in the impredicative variant we are able to deduce the existence of initial algebras for all indexed=functors. To our knowledge, ours is the first account of parametricity for dependent types that is able to lift the useful deduction of the existence of initial algebras in parametric models of System F to the dependently typed setting. Secondly, our models offer conceptual clarity by uniformly expressing relational parametricity for dependent types in terms of reflexive graphs, which allows us to unify the interpretations of types and kinds, instead of taking the relational interpretation of types as a primitive notion. Expressing our model in terms of reflexive graphs ensures that it has canonical choices for the interpretations of the standard type constructors of dependent type theory, except for the interpretation of the universe of small types, where we formulate a refined interpretation tailored for relational parametricity. Moreover, our reflexive graph model opens the door to generalisations of relational parametricity, for example to higher-dimensional relational parametricity.", "A family of syntactic models for the calculus of construction with universes (CCω) is described, all of them preserving conversion of the calculus definitionally, and thus giving rise directly to a program transformation of CCω into itself. Those models are based on the remark that negative type constructors (e.g. dependent product, coinductive types or universes) are underspecified in type theory-which leaves some freedom on extra intensional specifications. The model construction can be seen as a compilation phase from a complex type theory into a simpler type theory. Such models can be used to derive (the negative part of) independence results with respect to CCω, such as functional extensionality, propositional extensionality, univalence or the fact that bisimulation on a coinductive type may not coincide with equality. They can also be used to add new principles to the theory, which we illustrate by defining a version of CCω with ad-hoc polymorphism that shows in particular that parametricity is not an implicit requirement of type theory. The correctness of some of the models program transformations have been checked in the Coq proof assistant and have been instrumented as a Coq plugin.", "Reynolds' abstraction theorem shows how a typing judgement in System F can be translated into a relational statement (in second order predicate logic) about inhabitants of the type. We (in second order predicate logic) about inhabitants of the type. We obtain a similar result for a single lambda calculus (a pure type system), in which terms, types and their relations are expressed. Working within a single system dispenses with the need for an interpretation layer, allowing for an unusually simple presentation. 
While the unification puts some constraints on the type system (which we spell out), the result applies to many interesting cases, including dependently-typed ones." ] }
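Since the passage above leans on the logical predicate translation, the following LaTeX block sketches the generic unary (parametricity-style) shape of that translation. It is a schematic reminder under standard conventions, not the precise theory-of-signatures translation defined in the cited works.

```latex
% Generic shape of the unary logical predicate translation (standard
% parametricity-style rules, given for orientation only).
% A closed type $A$ is sent to a predicate $A^{\mathsf{P}} : A \to \mathsf{U}$, with, e.g.,
\[
  (\Pi(x{:}A).\,B)^{\mathsf{P}}\; f \;\equiv\;
  \Pi(x{:}A)(x^{\mathsf{P}} : A^{\mathsf{P}}\,x).\; B^{\mathsf{P}}\,(f\,x),
\]
% while a context (read as a signature) $\Gamma = x_1{:}A_1,\ldots,x_n{:}A_n$ is sent to
% $\Gamma$ extended with witnesses $x_i^{\mathsf{P}} : A_i^{\mathsf{P}}\,x_i$; these extra
% components are what yield the types of induction motives and methods.
```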
1902.00113
2913620370
Domain generalization (DG) is the challenging and topical problem of learning models that generalize to novel testing domains with different statistics than a set of known training domains. The simple approach of aggregating data from all source domains and training a single deep neural network end-to-end on all the data provides a surprisingly strong baseline that surpasses many prior published methods. In this paper we build on this strong baseline by designing an episodic training procedure that trains a single deep network in a way that exposes it to the domain shift that characterises a novel domain at runtime. Specifically, we decompose a deep network into feature extractor and classifier components, and then train each component by simulating it interacting with a partner who is badly tuned for the current domain. This makes both components more robust, ultimately leading to our networks producing state-of-the-art performance on three DG benchmarks. Furthermore, we consider the pervasive workflow of using an ImageNet trained CNN as a fixed feature extractor for downstream recognition tasks. Using the Visual Decathlon benchmark, we demonstrate that our episodic-DG training improves the performance of such a general purpose feature extractor by explicitly training a feature for robustness to novel problems. This shows that DG training can benefit standard practice in computer vision.
Multi-Domain Learning (MDL). MDL aims to learn several domains simultaneously using a single model @cite_7 @cite_32 @cite_27 @cite_30 . Depending on the problem, how much data is available per domain, and how similar the domains are, multi-domain learning can improve @cite_30 -- or sometimes worsen @cite_7 @cite_32 @cite_27 -- performance compared to a single model per domain. MDL is related to DG because the typical DG setting assumes a similar setup in that multiple source domains are provided, but now the goal is to learn how to extract a domain-agnostic or domain-robust model from all those source domains. The most rigorous benchmark for MDL is the Visual Decathlon (VD) @cite_32 . We repurpose this benchmark for DG by training a CNN on a subset of the VD domains and then evaluating its performance as a feature extractor on an unseen, disjoint subset of them (a hedged sketch of this evaluation protocol follows this row's reference abstracts). We are the first to demonstrate DG at this scale, and in the heterogeneous label setting required by VD.
{ "cite_N": [ "@cite_30", "@cite_27", "@cite_32", "@cite_7" ], "mid": [ "2964344823", "2962945654", "2963211188", "2581955877" ], "abstract": [ "Abstract: In this paper, we provide a new neural-network based perspective on multi-task learning (MTL) and multi-domain learning (MDL). By introducing the concept of a semantic descriptor, this framework unifies MDL and MTL as well as encompassing various classic and recent MTL MDL algorithms by interpreting them as different ways of constructing semantic descriptors. Our interpretation provides an alternative pipeline for zero-shot learning (ZSL), where a model for a novel class can be constructed without training data. Moreover, it leads to a new and practically relevant problem setting of zero-shot domain adaptation (ZSDA), which is the analogous to ZSL but for novel domains: A model for an unseen domain can be generated by its semantic descriptor. Experiments across this range of problems demonstrate that our framework outperforms a variety of alternatives.", "A practical limitation of deep neural networks is their high degree of specialization to a single task and visual domain. Recently, inspired by the successes of transfer learning, several authors have proposed to learn instead universal feature extractors that, used as the first stage of any deep network, work well for several tasks and domains simultaneously. Nevertheless, such universal features are still somewhat inferior to specialized networks. To overcome this limitation, in this paper we propose to consider instead universal parametric families of neural networks, which still contain specialized problem-specific models, but differing only by a small number of parameters. We study different designs for such parametrizations, including series and parallel residual adapters, joint adapter compression, and parameter allocations, and empirically identify the ones that yield the highest compression. We show that, in order to maximize performance, it is necessary to adapt both shallow and deep layers of a deep network, but the required changes are very small. We also show that these universal parametrization are very effective for transfer learning, where they outperform traditional fine-tuning techniques.", "There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.", "With the advent of large labelled datasets and high-capacity models, the performance of machine vision systems has been improving rapidly. However, the technology has still major limitations, starting from the fact that different vision problems are still solved by different models, trained from scratch or fine-tuned on the target data. 
The human visual system, in stark contrast, learns a universal representation for vision in the early life of an individual. This representation works well for an enormous variety of vision problems, with little or no change, with the major advantage of requiring little training data to solve any of them." ] }
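The related-work passage above describes repurposing Visual Decathlon for DG by freezing a trained CNN and evaluating it as a feature extractor on unseen domains. The sketch below illustrates that evaluation protocol in spirit only: the synthetic domain, the placeholder backbone, and the nearest-centroid readout are assumptions for illustration, not the paper's actual pipeline.

```python
# Illustrative only: evaluating a frozen feature extractor on an unseen domain.
import numpy as np

rng = np.random.default_rng(2)

def frozen_backbone(images):
    """Stand-in for a CNN feature extractor trained on the *seen* domains."""
    return images.reshape(len(images), -1)  # placeholder: flatten pixels

def eval_on_unseen_domain(train_x, train_y, test_x, test_y):
    """Fit a lightweight readout (nearest class centroid) on frozen features."""
    f_tr, f_te = frozen_backbone(train_x), frozen_backbone(test_x)
    classes = np.unique(train_y)
    centroids = np.stack([f_tr[train_y == c].mean(axis=0) for c in classes])
    dists = ((f_te[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    preds = classes[np.argmin(dists, axis=1)]
    return (preds == test_y).mean()

# Toy "unseen domain" with 3 classes of 8x8 images.
x = rng.normal(size=(300, 8, 8)) + np.repeat(np.arange(3), 100)[:, None, None]
y = np.repeat(np.arange(3), 100)
acc = eval_on_unseen_domain(x[::2], y[::2], x[1::2], y[1::2])
print(f"unseen-domain accuracy with frozen features: {acc:.2f}")
```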
1902.00113
2913620370
Domain generalization (DG) is the challenging and topical problem of learning models that generalize to novel testing domains with different statistics than a set of known training domains. The simple approach of aggregating data from all source domains and training a single deep neural network end-to-end on all the data provides a surprisingly strong baseline that surpasses many prior published methods. In this paper we build on this strong baseline by designing an episodic training procedure that trains a single deep network in a way that exposes it to the domain shift that characterises a novel domain at runtime. Specifically, we decompose a deep network into feature extractor and classifier components, and then train each component by simulating it interacting with a partner who is badly tuned for the current domain. This makes both components more robust, ultimately leading to our networks producing state-of-the-art performance on three DG benchmarks. Furthermore, we consider the pervasive workflow of using an ImageNet trained CNN as a fixed feature extractor for downstream recognition tasks. Using the Visual Decathlon benchmark, we demonstrate that our episodic-DG training improves the performance of such a general purpose feature extractor by explicitly training a feature for robustness to novel problems. This shows that DG training can benefit standard practice in computer vision.
Neural Network Meta-Learning. Learning-to-learn and meta-learning methods have resurged recently, in particular for few-shot recognition @cite_16 @cite_39 @cite_8 and learning-to-optimize @cite_34 tasks. Despite significant differences in motivation and methodological formalisation, a common feature of these methods is the episodic training strategy. In the case of few-shot learning, the intuition is that while a lot of source tasks and data may be available, these should be used for training in a way that closely simulates the testing conditions. Therefore, at each learning iteration, a random subset of source tasks and instances is sampled to generate a training episode, defined by a random few-shot learning task of similar data volume and cardinality to what the model is expected to be tested on at runtime. Thus the model eventually 'sees' all the training data in aggregate, but in any given iteration it is evaluated in a condition similar to a real 'testing' condition (a toy episode-sampling sketch follows this row's reference abstracts). In this paper we aim to develop an episodic training strategy to improve domain robustness, rather than learning-to-learn. While the high-level idea of an episodic strategy is the same, the DG problem and associated episode construction details are completely different.
{ "cite_N": [ "@cite_8", "@cite_16", "@cite_34", "@cite_39" ], "mid": [ "", "2604763608", "2753160622", "2601450892" ], "abstract": [ "", "We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.", "Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.", "A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics test. This approach relies on a complicated fine-tuning procedure and an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size. We propose a more streamlined approach, prototypical networks, that learns a metric space in which few-shot classification can be performed by computing Euclidean distances to prototype representations of each class, rather than individual points. Our method is competitive with state-of-the-art one-shot classification approaches while being much simpler and more scalable with the size of the support set. We empirically demonstrate the performance of our approach on the Omniglot and mini-ImageNet datasets. We further demonstrate that a similar idea can be used for zero-shot learning, where each class is described by a set of attributes, and achieve state-of-the-art results on the Caltech UCSD bird dataset." ] }
1902.00197
2913476238
Monte Carlo (MC) permutation test is considered the gold standard for statistical hypothesis testing, especially when standard parametric assumptions are not clear or likely to fail. However, in modern data science settings where a large number of hypothesis tests need to be performed simultaneously, it is rarely used due to its prohibitive computational cost. In genome-wide association studies, for example, the number of hypothesis tests @math is around @math while the number of MC samples @math for each test could be greater than @math , totaling more than @math = @math samples. In this paper, we propose Adaptive MC multiple Testing (AMT) to estimate MC p-values and control false discovery rate in multiple testing. The algorithm outputs the same result as the standard full MC approach with high probability while requiring only @math samples. This sample complexity is shown to be optimal. On a Parkinson GWAS dataset, the algorithm reduces the running time from 2 months for full MC to an hour. The AMT algorithm is derived based on the theory of multi-armed bandits.
The problem of multiple testing with MC p-values has been studied in the broader statistical literature. Interesting heuristic adaptive algorithms were proposed without a formal FDR guarantee @cite_29 @cite_41 ; the latter was developed by modifying Thompson sampling, another MAB algorithm. Asymptotic results were provided showing that the output of the adaptive algorithms converges to the desired set of discoveries @cite_35 @cite_46 @cite_4 . Specifically, the most recent work @cite_4 provided a general result that covers virtually all popular multiple testing procedures. However, none of the above works provides a standard FDR control guarantee (e.g., @math ) or an analysis of the MC sample complexity; the MC sample complexity was analyzed in another work only for the case of the Bonferroni procedure @cite_32 . In the present work, a standard FDR control guarantee is provided, as well as both upper and lower bounds on the MC sample complexity, establishing the optimality of AMT (a toy sketch of the full-MC baseline with BH is given after this row's reference abstracts).
{ "cite_N": [ "@cite_35", "@cite_4", "@cite_41", "@cite_29", "@cite_32", "@cite_46" ], "mid": [ "2002917158", "1933042805", "1555806843", "2099082142", "1859329799", "2610835111" ], "abstract": [ "It is a common practice to use resampling methods such as the bootstrap for calculating the p-value for each test when performing large scale multiple testing. The precision of the bootstrap p-values and that of the false discovery rate (FDR) relies on the number of bootstraps used for testing each hypothesis. Clearly, the larger the number of bootstraps the better the precision. However, the required number of bootstraps can be computationally burdensome, and it multiplies the number of tests to be performed. Further adding to the computational challenge is that in some applications the calculation of the test statistic itself may require considerable computation time. As technology improves one can expect the dimension of the problem to increase as well. For instance, during the early days of microarray technology, the number of probes on a cDNA chip was less than 10,000. Now the Affymetrix chips come with over 50,000 probes per chip. Motivated by this important need, we developed a simple adaptive bootstrap methodology for large scale multiple testing, which reduces the total number of bootstrap calculations while ensuring the control of the FDR. The proposed algorithm results in a substantial reduction in the number of bootstrap samples. Based on a simulation study we found that, relative to the number of bootstraps required for the Benjamini-Hochberg (BH) procedure, the standard FDR methodology which was the proposed methodology achieved a very substantial reduction in the number of bootstraps. In some cases the new algorithm required as little as 1 6th the number of bootstraps as the conventional BH procedure. Thus, if the conventional BH procedure used 1,000 bootstraps, then the proposed method required only 160 bootstraps. This methodology has been implemented for time-course dose-response data in our software, ORIOGEN, which is available from the authors upon request.", "We are concerned with a situation in which we would like to test multiple hypotheses with tests whose p-values cannot be computed explicitly but can be approximated using Monte Carlo simulation. This scenario occurs widely in practice. We are interested in obtaining the same rejections and non-rejections as the ones obtained if the p-values for all hypotheses had been available. The present article introduces a framework for this scenario by providing a generic algorithm for a general multiple testing procedure. We establish conditions that guarantee that the rejections and non-rejections obtained through Monte Carlo simulations are identical to the ones obtained with the p-values. Our framework is applicable to a general class of step-up and step-down procedures, which includes many established multiple testing corrections such as the ones of Bonferroni, Holm, Sidak, Hochberg or Benjamini–Hochberg. Moreover, we show how to use our framework to improve algorithms available in the literature in such a way as to yield theoretical guarantees on their results. These modifications can easily be implemented in practice and lead to a particular way of reporting multiple testing results as three sets together with an error bound on their correctness, demonstrated exemplarily using a real biological dataset.", "Multiple hypothesis testing is widely used to evaluate scientific studies involving statistical tests. 
However, for many of these tests, p values are not available and are thus often approximated using Monte Carlo tests such as permutation tests or bootstrap tests. This article presents a simple algorithm based on Thompson Sampling to test multiple hypotheses. It works with arbitrary multiple testing procedures, in particular with step-up and step-down procedures. Its main feature is to sequentially allocate Monte Carlo effort, generating more Monte Carlo samples for tests whose decisions are so far less certain. A simulation study demonstrates that for a low computational effort, the new approach yields a higher power and a higher degree of reproducibility of its results than previously suggested methods.", "Motivation: In molecular biology, as in many other scientific fields, the scale of analyses is ever increasing. Often, complex Monte Carlo simulation is required, sometimes within a large-scale multiple testing setting. The resulting computational costs may be prohibitively high. Results: We here present MCFDR, a simple, novel algorithm for false discovery rate (FDR) modulated sequential Monte Carlo (MC) multiple hypothesis testing. The algorithm iterates between adding MC samples across tests and calculating intermediate FDR values for the collection of tests. MC sampling is stopped either by sequential MC or based on a threshold on FDR. An essential property of the algorithm is that it limits the total number of MC samples whatever the number of true null hypotheses. We show on both real and simulated data that the proposed algorithm provides large gains in computational efficiency. Availability: MCFDR is implemented in the Genomic HyperBrowser ( http: hyperbrowser.uio.no mcfdr), a web-based system for genome analysis. All input data and results are available and can be reproduced through a Galaxy Pages document at: http: hyperbrowser.uio.no mcfdr u sandve p mcfdr. Contact: geirksa@ifi.uio.no", "Multiple testing is often carried out in practice using approximated p-values obtained, for instance, via bootstrap or permutation tests. We are interested in allocating a pre-specified total number of samples (that is draws from a bootstrap distribution or permutations) to all hypotheses in order to approximate their p-values in an optimal way, in the sense that the allocation minimizes the total expected number of misclassified hypotheses. By a misclassified hypothesis we refer to a decision on single hypotheses which differs from the one obtained if all p-values were known analytically. Neither using a constant number of samples per p-value estimate nor more sophisticated approaches available in the literature guarantee the computation of an optimal allocation in the above sense. This article derives the optimal allocation of a finite total number of samples to a finite number of hypotheses tested using the Bonferroni correction. Simulation studies show that a simple sampling algorithm based on Thompson Sampling asympotically mimics this optimal allocation.", "Software packages usually report the results of statistical tests using p-values. Users often interpret these by comparing them to standard thresholds, e.g. 0.1 , 1 and 5 , which is sometimes reinforced by a star rating (***, **, *). In this article, we consider an arbitrary statistical test whose p-value p is not available explicitly, but can be approximated by Monte Carlo samples, e.g. by bootstrap or permutation tests. The standard implementation of such tests usually draws a fixed number of samples to approximate p. 
However, the probability that the exact and the approximated p-value lie on different sides of a threshold (the resampling risk) can be high, particularly for p-values close to a threshold. We present a method to overcome this. We consider a finite set of user-specified intervals which cover [0,1] and which can be overlapping. We call these p-value buckets. We present algorithms that, with arbitrarily high probability, return a p-value bucket containing p. We prove that for both a bounded resampling risk and a finite runtime, overlapping buckets need to be employed, and that our methods both bound the resampling risk and guarantee a finite runtime for such overlapping buckets. To interpret decisions with overlapping buckets, we propose an extension of the star rating system. We demonstrate that our methods are suitable for use in standard software, including for low p-values occurring in multiple testing settings, and that they can be computationally more efficient than standard implementations." ] }
1902.00197
2913476238
The Monte Carlo (MC) permutation test is considered the gold standard for statistical hypothesis testing, especially when standard parametric assumptions are not clear or likely to fail. However, in modern data science settings where a large number of hypothesis tests need to be performed simultaneously, it is rarely used due to its prohibitive computational cost. In genome-wide association studies, for example, the number of hypothesis tests @math is around @math while the number of MC samples @math for each test could be greater than @math, totaling more than @math = @math samples. In this paper, we propose Adaptive MC multiple Testing (AMT) to estimate MC p-values and control the false discovery rate in multiple testing. The algorithm outputs the same result as the standard full MC approach with high probability while requiring only @math samples. This sample complexity is shown to be optimal. On a Parkinson GWAS dataset, the algorithm reduces the running time from 2 months for full MC to an hour. The AMT algorithm is derived based on the theory of multi-armed bandits.
There have been works on fast permutation tests for GWAS @cite_8 @cite_11 @cite_10 @cite_17; they pursue a different goal, namely accelerating the computation of each MC p-value separately. In contrast, AMT accelerates the entire workflow of both computing the MC p-values and applying BH on them, where the decision for each hypothesis also depends globally on the others. The state-of-the-art method is the sequential Monte Carlo procedure (sMC) implemented in the popular GWAS package PLINK @cite_49 @cite_44 @cite_39. For each hypothesis, it keeps MC sampling until it has observed @math extreme events or hit the sampling cap @math. Then BH is applied on the set of sMC p-values. Here we note that the sMC p-values are conservative, so this procedure controls FDR. sMC is discussed and thoroughly compared against in the rest of the paper.
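As an illustration of the sMC-plus-BH workflow just described (a minimal sketch, not the AMT algorithm; the threshold on extreme events, the sampling cap, the toy test statistics, and the conservative (exceedances+1)/(draws+1) estimator are assumptions made here for exposition):

import numpy as np

def smc_p_value(test_stat, null_sampler, h=10, cap=100000, rng=None):
    """Sequential MC p-value: sample until h exceedances or the cap is hit.
    Returns the conservative estimate (exceedances + 1) / (draws + 1)."""
    if rng is None:
        rng = np.random.default_rng()
    exceed, draws = 0, 0
    while exceed < h and draws < cap:
        if null_sampler(rng) >= test_stat:
            exceed += 1
        draws += 1
    return (exceed + 1) / (draws + 1)

def benjamini_hochberg(p_values, alpha=0.1):
    """Return indices of hypotheses rejected by the BH procedure."""
    p = np.asarray(p_values)
    n = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, n + 1) / n
    below = p[order] <= thresh
    if not below.any():
        return np.array([], dtype=int)
    k = np.max(np.nonzero(below)[0])   # largest rank meeting the BH condition
    return order[: k + 1]

# Toy usage: 50 hypotheses, N(0,1) null, a few shifted alternatives.
rng = np.random.default_rng(0)
stats = np.concatenate([rng.normal(3.0, 1.0, 5), rng.normal(0.0, 1.0, 45)])
pvals = [smc_p_value(s, lambda r: r.normal(), h=10, cap=20000, rng=rng) for s in stats]
print(benjamini_hochberg(pvals, alpha=0.1))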
{ "cite_N": [ "@cite_8", "@cite_17", "@cite_39", "@cite_44", "@cite_49", "@cite_10", "@cite_11" ], "mid": [ "2106077937", "1987440976", "2129707399", "2161633633", "", "2153902319", "2155177033" ], "abstract": [ "Motivation: In genome-wide association studies (GWAS) examining hundreds of thousands of genetic markers, the potentially high number of false positive findings requires statistical correction for multiple testing. Permutation tests are considered the gold standard for multiple testing correction in GWAS, because they simultaneously provide unbiased type I error control and high power. At the same time, they demand heavy computational effort, especially with large-scale datasets of modern GWAS. In recent years, the computational problem has been circumvented by using approximations to permutation tests, which, however, may be biased. Results: We have tackled the original computational problem of permutation testing in GWAS and herein present a permutation test algorithm one or more orders of magnitude faster than existing implementations, which enables efficient permutation testing on a genome-wide scale. Our algorithm does not rely on any kind of approximation and hence produces unbiased results identical to a standard permutation test. A noteworthy feature of our algorithm is a particularly effective performance when analyzing high-density marker sets. Availability: Freely available on the web at http: www.permory.org", "Resampling-based methods for multiple hypothesis testing often lead to long run times when the number of tests is large. This paper presents a simple rule that substantially reduces computation by allowing resampling to terminate early on a subset of tests. We prove that the method has a low probability of obtaining a set of rejected hypotheses different from those rejected without early stopping, and obtain error bounds for multiple hypothesis testing. Simulation shows that our approach saves more computation than other available procedures. Copyright 2012, Oxford University Press.", "Background Permutation testing is a robust and popular approach for significance testing in genomic research, which has the broad advantage of estimating significance non-parametrically, thereby safe guarding against inflated type I error rates. However, the computational efficiency remains a challenging issue that limits its wide application, particularly in genome-wide association studies (GWAS). Because of this, adaptive permutation strategies can be employed to make permutation approaches feasible. While these approaches have been used in practice, there is little research into the statistical properties of these approaches, and little guidance into the proper application of such a strategy for accurate p-value estimation at the GWAS level.", "Whole-genome association studies (WGAS) bring new computational, as well as analytic, challenges to researchers. Many existing genetic-analysis tools are not designed to handle such large data sets in a convenient manner and do not necessarily exploit the new opportunities that whole-genome data bring. To address these issues, we developed PLINK, an open-source C C++ WGAS tool set. With PLINK, large data sets comprising hundreds of thousands of markers genotyped for thousands of individuals can be rapidly manipulated and analyzed in their entirety. 
As well as providing tools to make the basic analytic steps computationally efficient, PLINK also supports some novel approaches to whole-genome data that take advantage of whole-genome coverage. We introduce PLINK and describe the five main domains of function: data management, summary statistics, population stratification, association analysis, and identity-by-descent estimation. In particular, we focus on the estimation and use of identity-by-state and identity-by-descent information in the context of population-based whole-genome studies. This information can be used to detect and correct for population stratification and to identify extended chromosomal segments that are shared identical by descent between very distantly related individuals. Analysis of the patterns of segmental sharing has the potential to map disease loci that contain multiple rare variants in a population-based linkage analysis.", "", "Background Large-scale genetic association studies can test hundreds of thousands of genetic markers for association with a trait. Since the genetic markers may be correlated, a Bonferroni correction is typically too stringent a correction for multiple testing. Permutation testing is a standard statistical technique for determining statistical significance when performing multiple correlated tests for genetic association. However, permutation testing for large-scale genetic association studies is computationally demanding and calls for optimized algorithms and software. PRESTO is a new software package for genetic association studies that performs fast computation of multiple-testing adjusted P-values via permutation of the trait.", "Because of rapid progress in genotyping techniques, many large-scale, genomewide disease-association studies are now under way. Typically, the disorders examined are multifactorial, and, therefore, researchers seeking association must consider interactions among loci and between loci and other factors. One of the challenges of large disease-association studies is obtaining accurate estimates of the significance of discovered associations. The linkage disequilibrium between SNPs makes the tests highly dependent, and dependency worsens when interactions are tested. The standard way of assigning significance (P value) is by a permutation test. Unfortunately, in large studies, it is prohibitively slow to compute low P values by this method. We present here a faster algorithm for accurately calculating low P values in case-control association studies. Unlike with several previous methods, we do not assume a specific distribution of the traits, given the genotypes. Our method is based on importance sampling and on accounting for the decay in linkage disequilibrium along the chromosome. The algorithm is dramatically faster than the standard permutation test. On data sets mimicking medium-to-large association studies, it speeds up computation by a factor of 5,000–100,000, sometimes reducing running times from years to minutes. Thus, our method significantly increases the problem-size range for which accurate, meaningful association results are attainable." ] }
1902.00127
2912839841
Mixed datasets consist of both numeric and categorical attributes. Various K-means-based clustering algorithms have been developed to cluster these datasets. Generally, these algorithms use a random partition as a starting point, which tends to produce different clustering results in different runs. This inconsistency of clustering results may lead to unreliable inferences from the data. A few initialization algorithms have been developed to compute an initial partition for mixed datasets; however, they are either computationally expensive or do not produce consistent clustering results across runs. In this paper, we propose initKmix, a novel approach to find an initial partition for K-means-based clustering algorithms for mixed datasets. initKmix is based on the experimental observations that (i) some data points in a dataset remain in the same clusters created by a k-means-based clustering algorithm irrespective of the choice of initial clusters, and (ii) individual attribute information can be used to create initial clusters. In the initKmix method, a k-means-based clustering algorithm is run many times; in each run, one of the attributes is used to produce an initial partition. The clustering results of the various runs are combined to produce an initial partition. This initial partition is then used as a seed for a k-means-based clustering algorithm to cluster the mixed data. The initial partitions produced by initKmix are always fixed: they do not change across runs or with the order of the data objects. Experiments with various categorical and mixed datasets showed that initKmix produced accurate and consistent results, and outperformed random initialization and other state-of-the-art initialization methods. Experiments also showed that K-means-based clustering for mixed datasets with initKmix outperformed many state-of-the-art clustering algorithms.
The K-means clustering algorithm is a popular clustering algorithm for datasets consisting of numeric attributes because of its low computational complexity @cite_10. The complexity is linear with respect to the number of data points, so it scales well to large datasets. It iteratively minimizes the optimization function presented in Equation 1.
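Equation 1 is not reproduced in this record; for reference, a standard form of the objective minimized by K-means (the within-cluster sum of squared distances, with notation introduced here for illustration) is

\[
J = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2 ,
\]

where C_k is the set of points assigned to cluster k and \mu_k is its mean; the algorithm alternates between assigning each point to its nearest centre and recomputing the centres until the assignments stop changing.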
{ "cite_N": [ "@cite_10" ], "mid": [ "2127218421" ], "abstract": [ "The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if p is the probability mass function for the population, S = S1, S2, * *, Sk is a partition of EN, and ui, i = 1, 2, * , k, is the conditional mean of p over the set Si, then W2(S) = ff=ISi f z u42 dp(z) tends to be low for the partitions S generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4. The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special" ] }
1902.00127
2912839841
Mixed datasets consist of both numeric and categorical attributes. Various K-means-based clustering algorithms have been developed to cluster these datasets. Generally, these algorithms use a random partition as a starting point, which tends to produce different clustering results in different runs. This inconsistency of clustering results may lead to unreliable inferences from the data. A few initialization algorithms have been developed to compute an initial partition for mixed datasets; however, they are either computationally expensive or do not produce consistent clustering results across runs. In this paper, we propose initKmix, a novel approach to find an initial partition for K-means-based clustering algorithms for mixed datasets. initKmix is based on the experimental observations that (i) some data points in a dataset remain in the same clusters created by a k-means-based clustering algorithm irrespective of the choice of initial clusters, and (ii) individual attribute information can be used to create initial clusters. In the initKmix method, a k-means-based clustering algorithm is run many times; in each run, one of the attributes is used to produce an initial partition. The clustering results of the various runs are combined to produce an initial partition. This initial partition is then used as a seed for a k-means-based clustering algorithm to cluster the mixed data. The initial partitions produced by initKmix are always fixed: they do not change across runs or with the order of the data objects. Experiments with various categorical and mixed datasets showed that initKmix produced accurate and consistent results, and outperformed random initialization and other state-of-the-art initialization methods. Experiments also showed that K-means-based clustering for mixed datasets with initKmix outperformed many state-of-the-art clustering algorithms.
The k-Harmonic means clustering algorithm addresses the random-initial-clusters problem for numeric datasets by using a different cost function @cite_17. It creates more stable clusters than the K-means clustering algorithm with random initial clusters. Ahmad and Hashmi @cite_15 combined the distance measure and the definition of cluster centres for mixed datasets suggested by Ahmad and Dey @cite_28 with the k-Harmonic means clustering algorithm @cite_17 to develop a k-Harmonic means clustering algorithm for mixed datasets. Their method is less sensitive to the choice of initial cluster centres, and the standard deviation of its clustering accuracy is small compared to the random initialization method. However, it does not give the same clustering results in different runs with different initial partitions.
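For reference, the K-Harmonic means cost function mentioned above replaces the minimum point-to-centre distance used by K-means with a harmonic average over all centres; one standard form (notation introduced here, with p a user-chosen parameter, typically p >= 2) is

\[
\mathrm{KHM}(X, C) = \sum_{i=1}^{n} \frac{K}{\sum_{k=1}^{K} \lVert x_i - c_k \rVert^{-p}} .
\]

The implicit dynamic weighting boosts the influence of points that are far from every centre, which is what makes the result largely insensitive to the initial centres.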
{ "cite_N": [ "@cite_28", "@cite_15", "@cite_17" ], "mid": [ "1990643970", "2470462496", "150776092" ], "abstract": [ "Use of traditional k-mean type algorithm is limited to numeric data. This paper presents a clustering algorithm based on k-mean paradigm that works well for data with mixed numeric and categorical features. We propose new cost function and distance measure based on co-occurrence of values. The measures also take into account the significance of an attribute towards the clustering process. We present a modified description of cluster center to overcome the numeric data only limitation of k-mean algorithm and provide a better characterization of clusters. The performance of this algorithm has been studied on real world data sets. Comparisons with other clustering algorithms illustrate the effectiveness of this approach.", "Display Omitted A K-Harmonic clustering algorithm for mixed data has been presented to reduce random initialization problem for partitional algorithms.The proposed clustering algorithm uses a distance measure developed for mixed datasets.The experiment results suggest that clustering results are quite insensitive to random initialization.The proposed algorithm performed better than other clustering algorithms for various datasets. K-means type clustering algorithms for mixed data that consists of numeric and categorical attributes suffer from cluster center initialization problem. The final clustering results depend upon the initial cluster centers. Random cluster center initialization is a popular initialization technique. However, clustering results are not consistent with different cluster center initializations. K-Harmonic means clustering algorithm tries to overcome this problem for pure numeric data. In this paper, we extend the K-Harmonic means clustering algorithm for mixed datasets. We propose a definition for a cluster center and a distance measure. These cluster centers and the distance measure are used with the cost function of K-Harmonic means clustering algorithm in the proposed algorithm. Experiments were carried out with pure categorical datasets and mixed datasets. Results suggest that the proposed clustering algorithm is quite insensitive to the cluster center initialization problem. Comparative studies with other clustering algorithms show that the proposed algorithm produce better clustering results.", "We propose a new class of center-based iterative clustering algorithms, K-Harmonic Means (KHMp), which is essentially insensitive to the initialization of the centers, demonstrated through many experiments. The insensitivity to initialization is attributed to a dynamic weighting function, which increases the importance of the data points that are far from any centers in the next iteration. The dependency of the K-Means’ and EM’s performance on the initialization of the centers has been a major problem. Many have tried to generate good initializations to solve the sensitivity problem. KHMp addresses the intrinsic problem by replacing the minimum distance from a data point to the centers, used in K-Means, by the Harmonic Averages of the distances from the data point to all centers. KHMp significantly improves the quality of clustering results comparing with both K-Means and EM. The KHMp algorithms have been implemented in both sequential and parallel languages and tested on hundreds of randomly generated datasets with different data distribution and clustering characteristics." ] }
1902.00245
2914121279
Typical recommender systems push K items at once in the result page in the form of a feed, in which the selection and the order of the items are important for user experience. In this paper, we formalize the K-item recommendation problem as taking an unordered set of candidate items as input and exporting an ordered list of selected items as output. The goal is to maximize the overall utility, e.g., the click-through rate, of the whole list. As one solution to the K-item recommendation problem under this formulation, we propose a new ranking framework called the Evaluator-Generator framework. In this framework, the Evaluator is trained on user logs to precisely predict the expected feedback of each item by fully considering its intra-list correlations with other co-exposed items. On the other hand, the Generator generates different sequences, from which the Evaluator chooses one sequence as the final recommendation. In our experiments, both the offline analysis and the online test show the effectiveness of our proposed framework. Furthermore, we show that the offline behavior of the Evaluator is consistent with the realistic online environment.
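A schematic sketch of the Evaluator-Generator interaction described above (all function bodies below are placeholders; in the actual framework the Evaluator is a list-wise model trained on user logs and the Generator produces candidate sequences, neither of which is specified here):

import random

def evaluator_score(ordered_list):
    # Placeholder for a learned list-wise model that predicts the expected
    # utility (e.g. total click-through) of the whole ordered list, taking
    # positions and intra-list correlations into account.
    return -sum(pos * (ord(item[0]) % 5) for pos, item in enumerate(ordered_list))

def generator(candidates, k, n_sequences=20, seed=0):
    # Placeholder generator: propose several candidate orderings of k items.
    rng = random.Random(seed)
    return [rng.sample(candidates, k) for _ in range(n_sequences)]

def recommend(candidates, k):
    # Evaluator-Generator loop: the Generator proposes sequences and the
    # Evaluator picks the sequence with the highest predicted list utility.
    return max(generator(candidates, k), key=evaluator_score)

print(recommend(["a", "b", "c", "d", "e", "f"], k=3))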
The position bias is also an important and practical issue in RS. Click models are widely studied in Information Retrieval systems, such as the Cascade Click Model (@cite_29) and the Dynamic Bayesian Network (@cite_8). It has been found that the position bias is related not only to the user's personal habits, but also to the layout design, among other factors (@cite_34). Thus click models often need to be considered case by case. Few of the previous works have studied the intra-list correlation and the position bias together. Set-based recommendation algorithms, including submodular ranking and DPP, do not consider the position bias at all. In contrast, our framework naturally addresses the position bias and intra-list correlations together.
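For context, the cascade model referenced above assumes the user scans the list top-down and clicks the first attractive item; with r_j denoting the attractiveness of the item at position j (notation introduced here), the click probability at position i is commonly written as

\[
P(\text{click at position } i) = r_i \prod_{j=1}^{i-1} (1 - r_j) ,
\]

so the chance of a click depends both on the item itself and on everything ranked above it, one concrete instance of position bias.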
{ "cite_N": [ "@cite_29", "@cite_34", "@cite_8" ], "mid": [ "1992549066", "2402441596", "2099213975" ], "abstract": [ "Search engine click logs provide an invaluable source of relevance information, but this information is biased. A key source of bias is presentation order: the probability of click is influenced by a document's position in the results page. This paper focuses on explaining that bias, modelling how probability of click depends on position. We propose four simple hypotheses about how position bias might arise. We carry out a large data-gathering effort, where we perturb the ranking of a major search engine, to see how clicks are affected. We then explore which of the four hypotheses best explains the real-world position effects, and compare these to a simple logistic regression model. The data are not well explained by simple position models, where some users click indiscriminately on rank 1 or there is a simple decay of attention over ranks. A 'cascade' model, where users view results from top to bottom and leave as soon as they see a worthwhile document, is our best explanation for position bias in early ranks", "Abstract With the rapid growth of web search in recent years the problem of modeling its users has started to attract more and more attention of the information retrieval community. This has several motivations. By building a model of user behavior we are essentially developing a better understanding of a user, which ultimately helps us to deliver a better search experience. A model of user behavior can also be used as a predictive device for non-observed items such as document relevance, which makes it useful for improving search result ranking. Finally, in many situations experimenting with real users is just infeasible and hence user simulations based on accurate models play an essential role in understanding the implications of algorithmic changes to search engine results or presentation changes to the search engine result page. In this survey we summarize advances in modeling user click behavior on a web search engine result page. We present simple click models as well as more complex models aimed at...", "As with any application of machine learning, web search ranking requires labeled data. The labels usually come in the form of relevance assessments made by editors. Click logs can also provide an important source of implicit feedback and can be used as a cheap proxy for editorial labels. The main difficulty however comes from the so called position bias - urls appearing in lower positions are less likely to be clicked even if they are relevant. In this paper, we propose a Dynamic Bayesian Network which aims at providing us with unbiased estimation of the relevance from the click logs. Experiments show that the proposed click model outperforms other existing click models in predicting both click-through rate and relevance." ] }
1902.00245
2914121279
Typical recommender systems push K items at once in the result page in the form of a feed, in which the selection and the order of the items are important for user experience. In this paper, we formalize the K-item recommendation problem as taking an unordered set of candidate items as input and exporting an ordered list of selected items as output. The goal is to maximize the overall utility, e.g., the click-through rate, of the whole list. As one solution to the K-item recommendation problem under this formulation, we propose a new ranking framework called the Evaluator-Generator framework. In this framework, the Evaluator is trained on user logs to precisely predict the expected feedback of each item by fully considering its intra-list correlations with other co-exposed items. On the other hand, the Generator generates different sequences, from which the Evaluator chooses one sequence as the final recommendation. In our experiments, both the offline analysis and the online test show the effectiveness of our proposed framework. Furthermore, we show that the offline behavior of the Evaluator is consistent with the realistic online environment.
We also borrow ideas from work applying Reinforcement Learning to Combinatorial Optimization (CO) problems. The pointer network @cite_31 was proposed as a neural tool for general CO. @cite_15 further extended the pointer network by using policy gradients for learning, and @cite_35 applied Q-Learning to CO problems on graphs. The aforementioned works focus on generic CO problems such as the Travelling Salesman Problem and Minimum Vertex Cover.
{ "cite_N": [ "@cite_35", "@cite_31", "@cite_15" ], "mid": [ "2607264901", "", "2560592986" ], "abstract": [ "The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution. We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.", "", "This paper presents a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent neural network that, given a set of city coordinates, predicts a distribution over different city permutations. Using negative tour length as the reward signal, we optimize the parameters of the recurrent neural network using a policy gradient method. We compare learning the network parameters on a set of training graphs against learning them on individual test graphs. Without much engineering and heuristic designing, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes. Applied to the KnapSack, another NP-hard problem, the same method obtains optimal solutions for instances with up to 200 items. These results, albeit still far from state-of-the-art, give insights into how neural networks can be used as a general tool for tackling combinatorial optimization problems." ] }
1902.00245
2914121279
Typical recommender systems push K items at once in the result page in the form of a feed, in which the selection and the order of the items are important for user experience. In this paper, we formalize the K-item recommendation problem as taking an unordered set of candidate items as input and exporting an ordered list of selected items as output. The goal is to maximize the overall utility, e.g., the click-through rate, of the whole list. As one solution to the K-item recommendation problem under this formulation, we propose a new ranking framework called the Evaluator-Generator framework. In this framework, the Evaluator is trained on user logs to precisely predict the expected feedback of each item by fully considering its intra-list correlations with other co-exposed items. On the other hand, the Generator generates different sequences, from which the Evaluator chooses one sequence as the final recommendation. In our experiments, both the offline analysis and the online test show the effectiveness of our proposed framework. Furthermore, we show that the offline behavior of the Evaluator is consistent with the realistic online environment.
Some recent works have addressed long-term rewards in recommender systems. @cite_33 also works on intra-list correlations, applying policy gradient and Monte Carlo Tree Search (MCTS) to optimize the @math -NDCG in diversified ranking toward the global optimum. Other works pursue long-term rewards in inter-list recommendations (@cite_10, @cite_2). Although @cite_10 also proposed treating a page of recommendations as a whole, the intra-list correlations are not well modelled or analyzed in their work, and their approach is not tested on a realistic online RS. In this paper, we investigate the general form of the intra-list correlations more thoroughly. We formalize the utility of the whole list as our final target, but we also use item-level feedback. This has not been sufficiently studied in the above works.
{ "cite_N": [ "@cite_10", "@cite_33", "@cite_2" ], "mid": [ "2799544270", "2798694866", "2787933113" ], "abstract": [ "Recommender systems can mitigate the information overload problem by suggesting users' personalized items. In real-world recommendations such as e-commerce, a typical interaction between the system and its users is - users are recommended a page of items and provide feedback; and then the system recommends a new page of items. To effectively capture such interaction for recommendations, we need to solve two key problems - (1) how to update recommending strategy according to user's real-time feedback, and 2) how to generate a page of items with proper display, which pose tremendous challenges to traditional recommender systems. In this paper, we study the problem of page-wise recommendations aiming to address aforementioned two challenges simultaneously. In particular, we propose a principled approach to jointly generate a set of complementary items and the corresponding strategy to display them in a 2-D page; and propose a novel page-wise recommendation framework based on deep reinforcement learning, DeepPage, which can optimize a page of items with proper display based on real-time feedback from users. The experimental results based on a real-world e-commerce dataset demonstrate the effectiveness of the proposed framework.", "The goal of search result diversification is to select a subset of documents from the candidate set to satisfy as many different subtopics as possible. In general, it is a problem of subset selection and selecting an optimal subset of documents is NP-hard. Existing methods usually formalize the problem as ranking the documents with greedy sequential document selection. At each of the ranking position the document that can provide the largest amount of additional information is selected. It is obvious that the greedy selections inevitably produce suboptimal rankings. In this paper we propose to partially alleviate the problem with a Monte Carlo tree search (MCTS) enhanced Markov decision process (MDP), referred to as M @math Div. In M @math Div, the construction of diverse ranking is formalized as an MDP process where each action corresponds to selecting a document for one ranking position. Given an MDP state which consists of the query, selected documents, and candidates, a recurrent neural network is utilized to produce the policy function for guiding the document selection and the value function for predicting the whole ranking quality. The produced raw policy and value are then strengthened with MCTS through exploring the possible rankings at the subsequent positions, achieving a better search policy for decision-making. Experimental results based on the TREC benchmarks showed that M @math Div can significantly outperform the state-of-the-art baselines based on greedy sequential document selection, indicating the effectiveness of the exploratory decision-making mechanism in M @math Div.", "In this paper, we propose a novel Deep Reinforcement Learning framework for news recommendation. Online personalized news recommendation is a highly challenging problem due to the dynamic nature of news features and user preferences. Although some online recommendation models have been proposed to address the dynamic nature of news recommendation, these methods have three major issues. First, they only try to model current reward (e.g., Click Through Rate). 
Second, very few work consider to use user feedback other than click no click labels (e.g., how frequent user returns) to help improve recommendation. Third, these methods tend to keep recommending similar news to users, which may cause users to get bored. Therefore, to address the aforementioned challenges, we propose a Deep Q-Learning based recommendation framework, which can model future reward explicitly. We further consider user return pattern as a supplement to click no click label in order to capture more user feedback information. In addition, an effective exploration strategy is incorporated to find new attractive news for users. Extensive experiments are conducted on the offline dataset and online production environment of Chinese Bing News and have shown the superior performance of our methods." ] }
1901.11260
2912134658
Many systems have to be maintained while the underlying constraints, costs and/or profits change over time. Although the state of a system may evolve over time, a non-negligible transition cost is incurred for transitioning from one state to another. In order to model such situations, (ICALP 2014) and (ICALP 2014) introduced a multistage model where the input is a sequence of instances (one for each time step), and the goal is to find a sequence of solutions (one for each time step) that are both (i) near optimal for each time step and (ii) as stable as possible. We focus on the multistage version of the Knapsack problem where we are given a time horizon t=1,2,...,T, and a sequence of knapsack instances I_1,I_2,...,I_T, one for each time step, defined on a set of n objects. In every time step t we have to choose a feasible knapsack S_t of I_t, which gives a knapsack profit. To measure the stability/similarity of two consecutive solutions S_t and S_{t+1}, we identify the objects for which the decision, to be picked or not, remains the same in S_t and S_{t+1}, giving a transition profit. We are asked to produce a sequence of solutions S_1,S_2,...,S_T so that the total knapsack profit plus the overall transition profit is maximized. We propose a PTAS for the Multistage Knapsack problem. Then, we prove that there is no FPTAS for the problem even in the case where T=2, unless P=NP. Furthermore, we give a pseudopolynomial time algorithm for the case where the number of steps is bounded by a fixed constant and we show that otherwise the problem remains NP-hard even in the case where all the weights, profits and capacities are 0 or 1.
It is well known that for the usual Knapsack problem, the continuous relaxation (variables in @math) has an optimal solution in which at most one variable is fractional. @cite_11 showed that this can be generalized to @math.
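This property can be seen from the classical greedy solution of the LP relaxation: sorting items by profit-to-weight ratio and filling the knapsack greedily yields an optimal fractional solution in which at most one item is split (a minimal sketch; the item data below are illustrative).

def fractional_knapsack(profits, weights, capacity):
    """Greedy optimum of the LP relaxation of 0-1 knapsack.
    Returns a list x with x[i] in [0, 1]; at most one entry is strictly fractional."""
    order = sorted(range(len(profits)), key=lambda i: profits[i] / weights[i], reverse=True)
    x, remaining = [0.0] * len(profits), capacity
    for i in order:
        if weights[i] <= remaining:          # take the whole item
            x[i], remaining = 1.0, remaining - weights[i]
        else:                                # only this single item is split
            x[i] = remaining / weights[i]
            break
    return x

print(fractional_knapsack([60, 100, 120], [10, 20, 30], capacity=50))
# -> [1.0, 1.0, 0.666...]: exactly one fractional variable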
{ "cite_N": [ "@cite_11" ], "mid": [ "2031666601" ], "abstract": [ "Abstract We address a variant of the classical knapsack problem in which an upper bound is imposed on the number of items that can be selected. This problem arises in the solution of real-life cutting stock problems by column generation, and may be used to separate cover inequalities with small support within cutting-plane approaches to integer linear programs. We focus our attention on approximation algorithms for the problem, describing a linear-storage Polynomial Time Approximation Scheme (PTAS) and a dynamic-programming based Fully Polynomial Time Approximation Scheme (FPTAS). The main ideas contained in our PTAS are used to derive PTAS's for the knapsack problem and its multi-dimensional generalization which improve on the previously proposed PTAS's. We finally illustrate better PTAS's and FPTAS's for the subset sum case of the problem in which profits and weights coincide." ] }
1901.11260
2912134658
Many systems have to be maintained while the underlying constraints, costs and/or profits change over time. Although the state of a system may evolve over time, a non-negligible transition cost is incurred for transitioning from one state to another. In order to model such situations, (ICALP 2014) and (ICALP 2014) introduced a multistage model where the input is a sequence of instances (one for each time step), and the goal is to find a sequence of solutions (one for each time step) that are both (i) near optimal for each time step and (ii) as stable as possible. We focus on the multistage version of the Knapsack problem where we are given a time horizon t=1,2,...,T, and a sequence of knapsack instances I_1,I_2,...,I_T, one for each time step, defined on a set of n objects. In every time step t we have to choose a feasible knapsack S_t of I_t, which gives a knapsack profit. To measure the stability/similarity of two consecutive solutions S_t and S_{t+1}, we identify the objects for which the decision, to be picked or not, remains the same in S_t and S_{t+1}, giving a transition profit. We are asked to produce a sequence of solutions S_1,S_2,...,S_T so that the total knapsack profit plus the overall transition profit is maximized. We propose a PTAS for the Multistage Knapsack problem. Then, we prove that there is no FPTAS for the problem even in the case where T=2, unless P=NP. Furthermore, we give a pseudopolynomial time algorithm for the case where the number of steps is bounded by a fixed constant and we show that otherwise the problem remains NP-hard even in the case where all the weights, profits and capacities are 0 or 1.
@cite_11 use the result of Theorem to show that for any fixed constant @math, @math admits a polynomial-time approximation scheme (PTAS). Other PTASes have been presented in @cite_18 @cite_15. Korte and Schrader @cite_6 showed that there is no FPTAS for @math unless @math.
{ "cite_N": [ "@cite_18", "@cite_15", "@cite_6", "@cite_11" ], "mid": [ "1524700731", "", "1542883217", "2031666601" ], "abstract": [ "A fully polynomial time approximation scheme (FPTAS) is presented for the classical 0-1 knapsack problem. The new approach considerably improves the necessary space requirements. The two best previously known approaches need O(n+ 1 e 3 ) and O(n.1 e) space, respectively. Our new approximation scheme requires only O(n + 1 e 2 ) space while also reducing the running time.", "", "We characterize those combinatorial optimization problems which can be solved approximately by polynomially bounded algorithms. Using slight modifications of the Sahni and Ibarra and Kim algorithms for the knapsack problem we prove that there is no fast approximation scheme unless their algorithmic ideas apply. Hence we show that these algorithms are not only the origin but also prototypes for all polynomial or fully polynomial approximation schemes.", "Abstract We address a variant of the classical knapsack problem in which an upper bound is imposed on the number of items that can be selected. This problem arises in the solution of real-life cutting stock problems by column generation, and may be used to separate cover inequalities with small support within cutting-plane approaches to integer linear programs. We focus our attention on approximation algorithms for the problem, describing a linear-storage Polynomial Time Approximation Scheme (PTAS) and a dynamic-programming based Fully Polynomial Time Approximation Scheme (FPTAS). The main ideas contained in our PTAS are used to derive PTAS's for the knapsack problem and its multi-dimensional generalization which improve on the previously proposed PTAS's. We finally illustrate better PTAS's and FPTAS's for the subset sum case of the problem in which profits and weights coincide." ] }
1901.11383
2952973608
Piping and Instrumentation diagrams (P&IDs) are one of the most common modes of representing engineering schematics; they describe the layout of an engineering process flow along with the interconnected process equipment. Over the years, P&ID diagrams have been manually generated, scanned and stored as image files. These files need to be digitized for purposes of inventory management and updating, and for easy reference to different components of the schematics. There are several challenging vision problems associated with digitizing real-world P&ID diagrams. Real-world P&IDs come in several different resolutions and often contain noisy textual information. Extraction of instrumentation information from these diagrams involves accurate detection of symbols that frequently have minute visual differences between them. Identification of pipelines that may converge and diverge at different points in the image is a further cause for concern. For these reasons, to the best of our knowledge, no system has been proposed for end-to-end data extraction from P&ID diagrams. However, with the advent of deep learning and the spectacular successes it has achieved in vision, we hypothesized that it is now possible to re-examine this problem armed with the latest deep learning models. To that end, we present a novel pipeline for information extraction from P&ID sheets via a combination of traditional vision techniques and state-of-the-art deep learning models to identify and isolate pipeline codes, pipelines, inlets and outlets, and to detect symbols. This is followed by association of the detected components with the appropriate pipeline. The extracted pipeline information is used to populate a tree-like data structure for capturing the structure of the piping schematics. We evaluated the proposed method on a real-world dataset of P&ID sheets obtained from an oil firm and obtained promising results.
There exists very limited work on digitizing the content of engineering diagrams to facilitate fast and efficient extraction of information. The authors of @cite_14 automated the assessment of AutoCAD Drawing Exchange Format (DXF) files by converting them into SVG format and developing a marking algorithm for the generated SVG files. A framework for engineering drawing recognition using a case-based approach is proposed in @cite_10, where the user interactively provides an example of one type of graphic object in an engineering drawing; the system then tries to learn the graphical knowledge of this type of graphic object from the example and later uses this learned knowledge to recognize or search for similar graphic objects in engineering drawings. The authors of @cite_6 attempted to automate the extraction of structural and connectivity information from vector-graphics-coded engineering documents. A spatial relation graph (SRG) and its partial matching method are proposed for online composite graphics representation and recognition in @cite_3. Overall, we observe that little work exists on information extraction from plant engineering diagrams.
{ "cite_N": [ "@cite_3", "@cite_14", "@cite_10", "@cite_6" ], "mid": [ "2107266793", "2210190747", "1583110160", "1935107352" ], "abstract": [ "A spatial relation graph (SRG) and its partial matching method are proposed for online composite graphics representation and recognition. The SRG-based approach emphasizes three characteristics of online graphics recognition: partial, structural, and independent of stroke order and stroke number. A constrained partial permutation strategy is also proposed to reduce the computational cost of matching two SRGs, which is originally an NP-complete problem as is graph isomorphism. Experimental results show that our proposed SRG-based approach is both efficient and effective for online composite graphics recognition in our sketch-based graphics input system - SmartSketchpad.", "Assessment of student's Engineering Drawing (ED) is always tedious, repetitive and time consuming. Image processing has been the common method to convert ED to be automatically assessed. This method is tedious as algorithms need to be developed for each shape to be assessed. Our research aims to create a software application that is able to perform automatic assessment for AutoCAD Drawing Exchange Format (DXF) files for undergraduate ED course. To achieve this goal, we have explored methods to convert DXF files into SVG format and develop a marking algorithm for the generated SVG files. The result shows that it is feasible to create software that automatically assesses ED without human intervention. Future implementation would include complex real-world ED.", "In this paper, we propose a framework for engineering drawings recognition using a case-based approach. The key idea of our scheme is that, interactively, the user provides an example of one type of graphic object in an engineering drawing, then the system learns the graphical knowledge of this type of graphic object from the example and uses this learned knowledge to recognize or search for similar graphic objects in engineering drawings. The scheme emphasizes the following three distinct characteristics: automatism, run-time-ness, and robustness. We summarized five types of geometric constraints to represent the generic graphical knowledge. We also developed two algorithms for case-based graphical knowledge acquisition and knowledge-based graphics recognition, respectively. Experiments have shown that our proposed framework is both efficient and effective for recognizing various types of graphic objects in engineering drawings.", "Integrating legacy plant and process information into engineering, control, and enterprise systems may significantly increase the efficiency of managerial and technical operations in industrial facilities. The first step towards the pursued data integration is the extraction of relevant information from existing engineering documents, many of which are stored in vector-graphics-compatible formats such as PDF. Accordingly, this paper is aimed at proposing a novel methodology for the automatic extraction of structural and connectivity information from vector-graphics-coded engineering documents. A case study of a piping and instrumentation diagram (P&ID) demonstrates the reliable performance of the approach for the recognition of symbols, annotations, and underlying connectivity." ] }
1901.11383
2952973608
Piping and Instrumentation diagrams (P&IDs) are one of the most common modes of representing engineering schematics; they describe the layout of an engineering process flow along with the interconnected process equipment. Over the years, P&ID diagrams have been manually generated, scanned and stored as image files. These files need to be digitized for purposes of inventory management and updating, and for easy reference to different components of the schematics. There are several challenging vision problems associated with digitizing real-world P&ID diagrams. Real-world P&IDs come in several different resolutions and often contain noisy textual information. Extraction of instrumentation information from these diagrams involves accurate detection of symbols that frequently have minute visual differences between them. Identification of pipelines that may converge and diverge at different points in the image is a further cause for concern. For these reasons, to the best of our knowledge, no system has been proposed for end-to-end data extraction from P&ID diagrams. However, with the advent of deep learning and the spectacular successes it has achieved in vision, we hypothesized that it is now possible to re-examine this problem armed with the latest deep learning models. To that end, we present a novel pipeline for information extraction from P&ID sheets via a combination of traditional vision techniques and state-of-the-art deep learning models to identify and isolate pipeline codes, pipelines, inlets and outlets, and to detect symbols. This is followed by association of the detected components with the appropriate pipeline. The extracted pipeline information is used to populate a tree-like data structure for capturing the structure of the piping schematics. We evaluated the proposed method on a real-world dataset of P&ID sheets obtained from an oil firm and obtained promising results.
However, we discovered a significant body of prior work on symbol recognition. @cite_19 proposed Fourier-Mellin Transform features to classify multi-oriented and multi-scaled patterns in engineering diagrams. Other models utilized for symbol recognition include auto-associative neural networks @cite_4, Deep Belief Networks @cite_20, and consistent attributed graphs (CAG) @cite_15. There are also models that use a set of visual features capturing online stroke properties such as orientation and endpoint location @cite_16, and shape-based matching between different symbols @cite_11. We see that most of the prior work focuses on extracting symbols from such engineering diagrams or flow charts. To the best of our knowledge, there exists no work that proposes an end-to-end pipeline for automating information extraction from plant engineering diagrams such as P&IDs.
{ "cite_N": [ "@cite_4", "@cite_19", "@cite_15", "@cite_16", "@cite_20", "@cite_11" ], "mid": [ "2125380992", "2085638554", "", "1568944112", "2322404333", "2057175746" ], "abstract": [ "Symbol recognition is a well-known problem in the field of graphics. A symbol can be defined as a structure within document that has a particular meaning in the context of the application. Due to their representational power, graph structures are usually used to represent line drawings images.An accurate vectorization constitutes a first approach to solve this goal. But vectorization only gives the segments constituting the document and their geometrical attributes.Interpreting a document such as P&ID (Process & Instrumentation)diagram requires an additional stage viz. recognition of symbols in terms of its shape. Usually a P&ID diagram contain several types of elements, symbols and structural connectivity. For those symbols that can be defined by a prototype pattern, we propose an iterative learning strategy based on Hopfield model to learn the symbols, for subsequent recognition in the P&ID diagram. In a typical shape recognition problem one has to account for transformation invariance. Here the transformation invariance is circumvented by using an iterative learning approach which can learn symbols with high degree of correlation.", "In this paper, we consider the general problem of technical document interpretation, as applied to the documents of the French Telephonic Operator, France Telecom. More precisely, we focus the content of this paper on the computation of a new set of features allowing the classification of multioriented and multiscaled patterns. This set of invariants is based on the Fourier–Mellin Transform. The interests of this computation rely on the excellent classification rate obtained with this method and also on using this Fourier–Mellin transform within a “filtering mode”, with which we can solve the well known difficult problem of connected character recognition.", "", "", "The recognition of pen-based visual patterns such as sketched symbols is amenable to supervised machine learning models such as neural networks. However, a sizable, labeled training corpus is often required to learn the high variations of freehand sketches. To circumvent the costs associated with creating a large training corpus, improve the recognition accuracy with only a limited amount of training samples and accelerate the development of sketch recognition system for novel sketch domains, we present a neural network training protocol that consists of three steps. First, a large pool of unlabeled, synthetic samples are generated from a small set of existing, labeled training samples. Then, a Deep Belief Network (DBN) is pre-trained with those synthetic, unlabeled samples. Finally, the pre-trained DBN is fine-tuned using the limited amount of labeled samples for classification. The training protocol is evaluated against supervised baseline approaches such as the nearest neighbor classifier and the neural network classifier. The benchmark data sets used are partitioned such that there are only a few labeled samples for training, yet a large number of labeled test cases featuring rich variations. Results suggest that our training protocol leads to a significant error reduction compared to the baseline approaches.", "We present a novel approach to measuring similarity between shapes and exploit it for object recognition. 
In our framework, the measurement of similarity is preceded by: (1) solving for correspondences between points on the two shapes; (2) using the correspondences to estimate an aligning transform. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts, enabling us to solve for correspondences as an optimal assignment problem. Given the point correspondences, we estimate the transformation that best aligns the two shapes; regularized thin-plate splines provide a flexible class of transformation maps for this purpose. The dissimilarity between the two shapes is computed as a sum of matching errors between corresponding points, together with a term measuring the magnitude of the aligning transform. We treat recognition in a nearest-neighbor classification framework as the problem of finding the stored prototype shape that is maximally similar to that in the image. Results are presented for silhouettes, trademarks, handwritten digits, and the COIL data set." ] }
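To make the Fourier-Mellin idea from the symbol-recognition discussion above (@cite_19) concrete, the following is a minimal NumPy sketch of a rotation- and scale-tolerant descriptor: the spectrum magnitude is resampled on a log-polar grid so that rotation and scaling become translations, which a second magnitude spectrum then removes. The function name, grid sizes, and normalisation are illustrative assumptions, not the cited method.

```python
import numpy as np

def fourier_mellin_descriptor(img, radial_bins=32, angular_bins=32):
    """Rotation/scale-tolerant descriptor sketch: |FFT| -> log-polar -> |FFT|."""
    # 1) Magnitude spectrum is invariant to translation of the symbol.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(float))))
    h, w = spec.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    # 2) Log-polar resampling: scaling/rotation of the symbol become shifts along the axes.
    log_r = np.exp(np.linspace(0.0, np.log(max_r), radial_bins))
    theta = np.linspace(0.0, 2.0 * np.pi, angular_bins, endpoint=False)
    rr, tt = np.meshgrid(log_r, theta, indexing="ij")
    ys = np.clip((cy + rr * np.sin(tt)).round().astype(int), 0, h - 1)
    xs = np.clip((cx + rr * np.cos(tt)).round().astype(int), 0, w - 1)
    logpolar = spec[ys, xs]
    # 3) A second magnitude spectrum removes those shifts -> (approximately) invariant features.
    feat = np.abs(np.fft.fft2(logpolar)).ravel()
    return feat / (np.linalg.norm(feat) + 1e-8)

# Usage idea: compare a symbol crop against rotated/scaled templates via cosine similarity.
```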
1901.11383
2952973608
One of the most common modes of representing engineering schematics is the Piping and Instrumentation diagram (P&ID), which describes the layout of an engineering process flow along with the interconnected process equipment. Over the years, P&ID diagrams have been manually generated, scanned and stored as image files. These files need to be digitized for inventory management and updating, and for easy reference to different components of the schematics. There are several challenging vision problems associated with digitizing real world P&ID diagrams. Real world P&IDs come in several different resolutions, and often contain noisy textual information. Extraction of instrumentation information from these diagrams involves accurate detection of symbols that frequently have minute visual differences between them. Identification of pipelines that may converge and diverge at different points in the image is a further cause for concern. Due to these reasons, to the best of our knowledge, no system has been proposed for end-to-end data extraction from P&ID diagrams. However, with the advent of deep learning and the spectacular successes it has achieved in vision, we hypothesized that it is now possible to re-examine this problem armed with the latest deep learning models. To that end, we present a novel pipeline for information extraction from P&ID sheets via a combination of traditional vision techniques and state-of-the-art deep learning models to identify and isolate pipeline codes, pipelines, inlets and outlets, and for detecting symbols. This is followed by association of the detected components with the appropriate pipeline. The extracted pipeline information is used to populate a tree-like data-structure for capturing the structure of the piping schematics. We evaluated the proposed method on a real-world dataset of P&ID sheets obtained from an oil firm and obtained promising results.
In the literature, Connected Component (CC) analysis @cite_18 has been used extensively for extracting characters @cite_13 from images. However, connected components are extremely sensitive to noise, and thresholding may not be suitable for P&ID text extraction. Hence, we utilize the recently proposed Connectionist Text Proposal Network (CTPN) @cite_0 to detect text in the image with impressive accuracy. For line detection, we utilize the Probabilistic Hough Transform (PHT) @cite_7 , a computationally efficient and fast version of the standard Hough transform that uses random sampling of edge points to find lines in the image. We use PHT to determine all the lines present in P&ID sheets, which are possible candidates for pipelines. In our paper, we propose the use of Fully Convolutional Network (FCN) based segmentation @cite_1 for detecting symbols, because traditional classification networks were unable to differentiate among different types of symbols due to the minute inter-class differences in visual appearance and the noisy textual information present inside symbols. The FCN incorporates the contextual as well as spatial relationships of symbols in the image, which is often necessary for accurate detection and classification of P&ID symbols.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_1", "@cite_0", "@cite_13" ], "mid": [ "2083954025", "2026131180", "2952632681", "2519818067", "2785901268" ], "abstract": [ "In this paper, we present a new scene text detection algorithm based on two machine learning classifiers: one allows us to generate candidate word regions and the other filters out nontext ones. To be precise, we extract connected components (CCs) in images by using the maximally stable extremal region algorithm. These extracted CCs are partitioned into clusters so that we can generate candidate regions. Unlike conventional methods relying on heuristic rules in clustering, we train an AdaBoost classifier that determines the adjacency relationship and cluster CCs by using their pairwise relations. Then we normalize candidate word regions and determine whether each region contains text or not. Since the scale, skew, and color of each candidate can be estimated from CCs, we develop a text nontext classifier for normalized images. This classifier is based on multilayer perceptrons and we can control recall and precision rates with a single free parameter. Finally, we extend our approach to exploit multichannel information. Experimental results on ICDAR 2005 and 2011 robust reading competition datasets show that our method yields the state-of-the-art performance both in speed and accuracy.", "Abstract The Hough Transform for straight line detection is considered. It is shown that if just a small subset of the edge points in the image, selected at random, is used as input for the Hough Transform, the performance is often only slightly impaired, thus the execution time can be considerably shortened. The performance of the resulting “Probabilistic Hough Transform” is analysed. The analysis is supported by experimental evidence.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "We propose a novel Connectionist Text Proposal Network (CTPN) that accurately localizes text lines in natural image. The CTPN detects a text line in a sequence of fine-scale text proposals directly in convolutional feature maps. We develop a vertical anchor mechanism that jointly predicts location and text non-text score of each fixed-width proposal, considerably improving localization accuracy. 
The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model. This allows the CTPN to explore rich context information of image, making it powerful to detect extremely ambiguous text. The CTPN works reliably on multi-scale and multi-language text without further post-processing, departing from previous bottom-up methods requiring multi-step post filtering. It achieves 0.88 and 0.61 F-measure on the ICDAR 2013 and 2015 benchmarks, surpassing recent results [8, 35] by a large margin. The CTPN is computationally efficient with 0.14 s image, by using the very deep VGG16 model [27]. Online demo is available: http: textdet.com .", "In order to effectively detect faults and maintain heavy machines, a standard practice in several organizations is to conduct regular manual inspections. The procedure for conducting such inspections requires marking of the damaged components on a standardized inspection sheet which is then camera scanned. These sheets are marked for different faults in corresponding machine zones using hand-drawn arrows and text. As a result, the reading environment is highly unstructured and requires a domain expert while extracting the manually marked information. In this paper, we propose a novel pipeline to build an information extraction system for such machine inspection sheets, utilizing state-of-the-art deep learning and computer vision techniques. The pipeline proceeds in the following stages: (1) localization of different zones of the machine, arrows and text using a combination of template matching, deep learning and connected components, and (2) mapping the machine zone to the corresponding arrow head and the text segment to the arrow tail, followed by pairing them to get the correct damage code for each zone. Experiments were performed on a dataset collected from an anonymous real world manufacturing unit. Results demonstrate the efficacy of the proposed approach and we also report the accuracy for each step in the pipeline." ] }
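A hedged sketch of the Probabilistic Hough Transform step described above (@cite_7), using OpenCV's HoughLinesP to propose candidate pipeline segments; the Canny thresholds and Hough parameters are illustrative assumptions rather than tuned values from the cited pipeline.

```python
import cv2
import numpy as np

def detect_candidate_pipelines(image_path, min_len=100, max_gap=10):
    """Sketch of the line-detection step: edge map + probabilistic Hough transform."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # Edge detection; P&ID sheets are mostly dark line-work on a light background.
    edges = cv2.Canny(gray, 50, 150, apertureSize=3)
    # The probabilistic Hough transform returns finite segments (x1, y1, x2, y2).
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                               minLineLength=min_len, maxLineGap=max_gap)
    return [] if segments is None else [tuple(s[0]) for s in segments]

# Each returned segment is a candidate pipeline to be associated with detected
# symbols and text regions in later stages of such a pipeline.
```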
1901.11417
2913832161
Fluid approximations have seen great success in approximating the macro-scale behaviour of Markov systems with a large number of discrete states. However, these methods rely on the continuous-time Markov chain (CTMC) having a particular population structure which suggests a natural continuous state-space endowed with a dynamics for the approximating process. We construct here a general method based on spectral analysis of the transition matrix of the CTMC, without the need for a population structure. Specifically, we use the popular manifold learning method of diffusion maps to analyse the transition matrix as the operator of a hidden continuous process. An embedding of states in a continuous space is recovered, and the space is endowed with a drift vector field inferred via Gaussian process regression. In this manner, we construct an ODE whose solution approximates the evolution of the CTMC mean, mapped onto the continuous space (known as the fluid limit).
In the case of pCTMCs, a more concise description in terms of the collective dynamics of population averages is however available. Starting with the seminal work of van Kampen @cite_0 , and motivated by the interpretation of pCTMCs as chemical reaction systems, several approximation schemes have been developed which relax the original pCTMC to a continuous stochastic process; see @cite_8 for a recent review.
{ "cite_N": [ "@cite_0", "@cite_8" ], "mid": [ "2051813288", "2517280042" ], "abstract": [ "In order to solve the master equation by a systematic approximation method, an expansion in powers of some parameter is needed. The appropriate parameter is the reciprocal size of the system, defined as the ratio of intensive and extensive variables. The lowest approximation yields the phenomenological law for the approach to equilibrium. The next approximation determines the mean square of the fluctuations about the phenomenological behavior. In equilibrium this approximation has the form of a linear Fokker–Planck equation. The higher approximations describe the effect of the non-linearity on the fluctuations, in particular on their spectral density. The method is applied to three examples: density fluctuations, Alkemade's diode, and Rayleigh's piston. The relation to the expansion recently given by Siegel is also discussed.", "Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the Chemical Master Equation. Despite its simple structure, no analytic solutions to the Chemical Master Equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state of the art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics." ] }
1901.11417
2913832161
Fluid approximations have seen great success in approximating the macro-scale behaviour of Markov systems with a large number of discrete states. However, these methods rely on the continuous-time Markov chain (CTMC) having a particular population structure which suggests a natural continuous state-space endowed with a dynamics for the approximating process. We construct here a general method based on spectral analysis of the transition matrix of the CTMC, without the need for a population structure. Specifically, we use the popular manifold learning method of diffusion maps to analyse the transition matrix as the operator of a hidden continuous process. An embedding of states in a continuous space is recovered, and the space is endowed with a drift vector field inferred via Gaussian process regression. In this manner, we construct an ODE whose solution approximates the evolution of the CTMC mean, mapped onto the continuous space (known as the fluid limit).
Following Darling and Norris @cite_17 , we examine and formalise the aspects of pCTMCs which render them especially amenable to the fluid approximation. As mentioned, the first is that pCTMC state-spaces are countable and there exists an obvious ordering. We can therefore write a trivial linear mapping from the discrete, countable state-space @math to a continuous Euclidean space @math , where @math is the number of agent types in the system.
{ "cite_N": [ "@cite_17" ], "mid": [ "2147568833" ], "abstract": [ "We formulate some simple conditions under which a Markov chain may be approximated by the solution to a differential equation, with quantifiable error probabilities. The role of a choice of coordinate functions for the Markov chain is emphasised. The general theory is illustrated in three examples: the classical stochastic epidemic, a population process model with fast and slow variables, and core-finding algorithms for large random hypergraphs." ] }
1901.11417
2913832161
Fluid approximations have seen great success in approximating the macro-scale behaviour of Markov systems with a large number of discrete states. However, these methods rely on the continuous-time Markov chain (CTMC) having a particular population structure which suggests a natural continuous state-space endowed with a dynamics for the approximating process. We construct here a general method based on spectral analysis of the transition matrix of the CTMC, without the need for a population structure. Specifically, we use the popular manifold learning method of diffusion maps to analyse the transition matrix as the operator of a hidden continuous process. An embedding of states in a continuous space is recovered, and the space is endowed with a drift vector field inferred via Gaussian process regression. In this manner, we construct an ODE whose solution approximates the evolution of the CTMC mean, mapped onto the continuous space (known as the fluid limit).
There are many ways to satisfy the above criteria, but a common one (used in pCTMCs) is "hydrodynamic scaling", where the increments of the @math -state Markov process mapped to the Euclidean space are @math and the jump rate is @math . The criteria above are derived formally in @cite_17 @cite_18 and ensure that:
{ "cite_N": [ "@cite_18", "@cite_17" ], "mid": [ "1535692381", "2147568833" ], "abstract": [ "A rescaled Markov chain converges uniformly in probability to the solution of an ordinary differential equation, under carefully specified assumptions. The presentation is much simpler than those in the outside literature. The result may be used to build parsimonious models of large random or pseudo-random systems.", "We formulate some simple conditions under which a Markov chain may be approximated by the solution to a differential equation, with quantifiable error probabilities. The role of a choice of coordinate functions for the Markov chain is emphasised. The general theory is illustrated in three examples: the classical stochastic epidemic, a population process model with fast and slow variables, and core-finding algorithms for large random hypergraphs." ] }
1901.11344
2914116739
Recent years have witnessed dramatic progress in neural machine translation (NMT); however, methods for manually guiding the translation procedure remain to be better explored. Previous works handle this problem through lexically-constrained beam search in the decoding phase. Unfortunately, these lexically-constrained beam search methods suffer from two fatal disadvantages: high computational complexity and hard beam search, which generates unexpected translations. In this paper, we propose to learn the ability of lexically-constrained translation with an external memory, which can overcome the above-mentioned disadvantages. For the training process, phrase pairs are automatically extracted from alignment and sentence parsing, and then encoded into an external memory. This memory is then used to provide lexically-constrained information for training through a memory-attention mechanism. Various experiments are conducted on WMT Chinese-to-English and English-to-German tasks. The results demonstrate the effectiveness of our method.
Building an efficient and effective machine translation system has been an attractive goal for decades. Although systems based on statistical machine translation have been used in practice, their limited performance has made them difficult to promote. Recent work on neural machine translation has made this goal attainable: an attention mechanism for encoder-decoder neural machine translation was proposed that can fully exploit the context representation of the source sentences. The Transformer @cite_1 is a more promising neural machine translation architecture based on self-attention, which achieves faster training and better performance.
{ "cite_N": [ "@cite_1" ], "mid": [ "2963403868" ], "abstract": [ "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature." ] }
1901.11344
2914116739
Recent years have witnessed dramatic progress in neural machine translation (NMT); however, methods for manually guiding the translation procedure remain to be better explored. Previous works handle this problem through lexically-constrained beam search in the decoding phase. Unfortunately, these lexically-constrained beam search methods suffer from two fatal disadvantages: high computational complexity and hard beam search, which generates unexpected translations. In this paper, we propose to learn the ability of lexically-constrained translation with an external memory, which can overcome the above-mentioned disadvantages. For the training process, phrase pairs are automatically extracted from alignment and sentence parsing, and then encoded into an external memory. This memory is then used to provide lexically-constrained information for training through a memory-attention mechanism. Various experiments are conducted on WMT Chinese-to-English and English-to-German tasks. The results demonstrate the effectiveness of our method.
External memory has been used in several works @cite_2 @cite_0 @cite_3 to enhance the quality of neural machine translation. For example, one approach extracts a phrase table as a recommendation memory for neural machine translation; however, such phrase tables tend to be noisy. Another approach stores hidden context information in the memory, which is then used to compute an additional probability for each target word. Both of these methods require high-quality translation alignments. @cite_4 proposes to annotate the source sentences with experts and to use a copy generator for rare-word translation; however, the strong copying ability may cause a loss of fluency. A further line of work aims to improve NMT performance by maintaining an updatable memory.
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "2744031566", "2953385323", "", "2951132175" ], "abstract": [ "Neural machine translation (NMT) has achieved notable success in recent times, however it is also widely recognized that this approach has limitations with handling infrequent words and word pairs. This paper presents a novel memory-augmented NMT (M-NMT) architecture, which stores knowledge about how words (usually infrequently encountered ones) should be translated in a memory and then utilizes them to assist the neural model. We use this memory mechanism to combine the knowledge learned from a conventional statistical machine translation system and the rules learned by an NMT system, and also propose a solution for out-of-vocabulary (OOV) words based on this framework. Our experiments on two Chinese-English translation tasks demonstrated that the M-NMT architecture outperformed the NMT baseline by @math and @math BLEU points on the two tasks, respectively. Additionally, we found this architecture resulted in a much more effective OOV treatment compared to competitive methods.", "Neural machine translation (NMT) has significantly improved the quality of automatic translation models. One of the main challenges in current systems is the translation of rare words. We present a generic approach to address this weakness by having external models annotate the training data as Experts, and control the model-expert interaction with a pointer network and reinforcement learning. Our experiments using phrase-based models to simulate Experts to complement neural machine translation models show that the model can be trained to copy the annotations into the output consistently. We demonstrate the benefit of our proposed framework in outof-domain translation scenarios with only lexical resources, improving more than 1.0 BLEU point in both translation directions English to Spanish and German to English", "", "Neural Machine Translation (NMT) has drawn much attention due to its promising translation performance recently. However, several studies indicate that NMT often generates fluent but unfaithful translations. In this paper, we propose a method to alleviate this problem by using a phrase table as recommendation memory. The main idea is to add bonus to words worthy of recommendation, so that NMT can make correct predictions. Specifically, we first derive a prefix tree to accommodate all the candidate target phrases by searching the phrase translation table according to the source sentence. Then, we construct a recommendation word set by matching between candidate target phrases and previously translated target words by NMT. After that, we determine the specific bonus value for each recommendable word by using the attention vector and phrase translation probability. Finally, we integrate this bonus value into NMT to improve the translation results. The extensive experiments demonstrate that the proposed methods obtain remarkable improvements over the strong attentionbased NMT." ] }
1901.11259
2913092861
Convolutional neural networks have been widely used in content-based image retrieval. To better deal with large-scale data, the deep hashing model is proposed as an effective method, which maps an image to a binary code that can be used for hashing search. However, most existing deep hashing models only utilize fine-level semantic labels or convert them to similar dissimilar labels for training. The natural semantic hierarchy structures are ignored in the training stage of the deep hashing model. In this paper, we present an effective algorithm to train a deep hashing model that can preserve a semantic hierarchy structure for large-scale image retrieval. Experiments on two datasets show that our method improves the fine-level retrieval performance. Meanwhile, our model achieves state-of-the-art results in terms of hierarchical retrieval.
DPSH @cite_13 first proposed utilizing pairwise labels to train an end-to-end deep hashing model. HashNet @cite_0 defines a weighted maximum-likelihood pairwise logistic loss to balance similar and dissimilar labels. DTSH @cite_22 extends pairwise supervision to triplet supervision to capture semantic information more effectively. To make full use of the semantic class labels, several works design loss functions directly based on the class labels. SSDH @cite_17 utilizes a softmax classifier to train the hashing model. DCWH @cite_12 constructs a Gaussian distribution-based objective function to take advantage of class-level information. DSRH @cite_15 and DSDH @cite_10 both combine pairwise and class-level supervision.
{ "cite_N": [ "@cite_22", "@cite_10", "@cite_0", "@cite_15", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2565993688", "2621176975", "2586811659", "2531549126", "2963398644", "2791396492", "2586937979" ], "abstract": [ "Hashing is one of the most popular and powerful approximate nearest neighbor search techniques for large-scale image retrieval. Most traditional hashing methods first represent images as off-the-shelf visual features and then produce hashing codes in a separate stage. However, off-the-shelf visual features may not be optimally compatible with the hash code learning procedure, which may result in sub-optimal hash codes. Recently, deep hashing methods have been proposed to simultaneously learn image features and hash codes using deep neural networks and have shown superior performance over traditional hashing methods. Most deep hashing methods are given supervised information in the form of pairwise labels or triplet labels. The current state-of-the-art deep hashing method DPSH [1], which is based on pairwise labels, performs image feature learning and hash code learning simultaneously by maximizing the likelihood of pairwise similarities. Inspired by DPSH [1], we propose a triplet label based deep hashing method which aims to maximize the likelihood of the given triplet labels. Experimental results show that our method outperforms all the baselines on CIFAR-10 and NUS-WIDE datasets, including the state-of-the-art method DPSH [1] and all the previous triplet label based deep hashing methods.", "With the rapid growth of image and video data on the web, hashing has been extensively studied for image or video search in recent years. Benefit from recent advances in deep learning, deep hashing methods have achieved promising results for image retrieval. However, there are some limitations of previous deep hashing methods (e.g., the semantic information is not fully exploited). In this paper, we develop a deep supervised discrete hashing algorithm based on the assumption that the learned binary codes should be ideal for classification. Both the pairwise label information and the classification information are used to learn the hash codes within one stream framework. We constrain the outputs of the last layer to be binary codes directly, which is rarely investigated in deep hashing algorithm. Because of the discrete nature of hash codes, an alternating minimization method is used to optimize the objective function. Experimental results have shown that our method outperforms current state-of-the-art methods on benchmark datasets.", "Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, due to its computation efficiency and retrieval quality. Deep learning to hash, which improves retrieval quality by end-to-end representation learning and hash encoding, has received increasing attention recently. Subject to the ill-posed gradient difficulty in the optimization with sign activations, existing deep learning to hash methods need to first learn continuous representations and then generate binary hash codes in a separated binarization step, which suffer from substantial loss of retrieval quality. This work presents HashNet, a novel deep architecture for deep learning to hash by continuation method with convergence guarantees, which learns exactly binary hash codes from imbalanced similarity data. 
The key idea is to attack the ill-posed gradient problem in optimizing deep networks with non-smooth binary activations by continuation method, in which we begin from learning an easier network with smoothed activation function and let it evolve during the training, until it eventually goes back to being the original, difficult to optimize, deep network with the sign activation function. Comprehensive empirical evidence shows that HashNet can generate exactly binary hash codes and yield state-of-the-art multimedia retrieval performance on standard benchmarks.", "Hashing techniques have been intensively investigated for large scale vision applications. Recent research has shown that leveraging supervised information can lead to high quality hashing. However, most existing supervised hashing methods only construct similarity-preserving hash codes. Observing that semantic structures carry complementary information, we propose the idea of cotraining for hashing, by jointly learning projections from image representations to hash codes and classification. Specifically, a novel deep semantic-preserving and ranking-based hashing (DSRH) architecture is presented, which consists of three components: a deep CNN for learning image representations, a hash stream of a binary mapping layer by evenly dividing the learnt representations into multiple bags and encoding each bag into one hash bit, and a classification stream. Mean-while, our model is learnt under two constraints at the top loss layer of hash stream: a triplet ranking loss and orthogonality constraint. The former aims to preserve the relative similarity ordering in the triplets, while the latter makes different hash bit as independent as possible. We have conducted experiments on CIFAR-10 and NUS-WIDE image benchmarks, demonstrating that our approach can provide superior image search accuracy than other state-of-the-art hashing techniques.", "Recent years have witnessed wide application of hashing for large-scale image retrieval. However, most existing hashing methods are based on handcrafted features which might not be optimally compatible with the hashing procedure. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash-code learning with deep neural networks, which have shown better performance than traditional hashing methods with hand-crafted features. Most of these deep hashing methods are supervised whose supervised information is given with triplet labels. For another common application scenario with pairwise labels, there have not existed methods for simultaneous feature learning and hash-code learning. In this paper, we propose a novel deep hashing method, called deep pairwise-supervised hashing (DPSH), to perform simultaneous feature learning and hash-code learning for applications with pairwise labels. Experiments on real datasets show that our DPSH method can outperform other methods to achieve the state-of-the-art performance in image retrieval applications.", "Deep supervised hashing has emerged as an influential solution to large-scale semantic image retrieval problems in computer vision. In the light of recent progress, convolutional neural network based hashing methods typically seek pair-wise or triplet labels to conduct the similarity preserving learning. However, complex semantic concepts of visual contents are hard to capture by similar dissimilar labels, which limits the retrieval performance. 
Generally, pair-wise or triplet losses not only suffer from expensive training costs but also lack in extracting sufficient semantic information. In this regard, we propose a novel deep supervised hashing model to learn more compact class-level similarity preserving binary codes. Our deep learning based model is motivated by deep metric learning that directly takes semantic labels as supervised information in training and generates corresponding discriminant hashing code. Specifically, a novel cubic constraint loss function based on Gaussian distribution is proposed, which preserves semantic variations while penalizes the overlap part of different classes in the embedding space. To address the discrete optimization problem introduced by binary codes, a two-step optimization strategy is proposed to provide efficient training and avoid the problem of gradient vanishing. Extensive experiments on four large-scale benchmark databases show that our model can achieve the state-of-the-art retrieval performance. Moreover, when training samples are limited, our method surpasses other supervised deep hashing methods with non-negligible margins.", "This paper presents a simple yet effective supervised deep hash approach that constructs binary hash codes from labeled data for large-scale image search. We assume that the semantic labels are governed by several latent attributes with each attribute on or off , and classification relies on these attributes. Based on this assumption, our approach, dubbed supervised semantics-preserving deep hashing (SSDH), constructs hash functions as a latent layer in a deep network and the binary codes are learned by minimizing an objective function defined over classification error and other desirable hash codes properties. With this design, SSDH has a nice characteristic that classification and retrieval are unified in a single learning model. Moreover, SSDH performs joint learning of image representations, hash codes, and classification in a point-wised manner, and thus is scalable to large-scale datasets. SSDH is simple and can be realized by a slight enhancement of an existing deep architecture for classification; yet it is effective and outperforms other hashing approaches on several benchmarks and large datasets. Compared with state-of-the-art approaches, SSDH achieves higher retrieval accuracy, while the classification performance is not sacrificed." ] }
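To make the pairwise supervision discussed above concrete, the following is a hedged NumPy sketch of a DPSH-style pairwise likelihood loss (@cite_13) with a quantisation penalty; the weight eta and the use of plain NumPy (rather than an autodiff framework) are simplifications for illustration.

```python
import numpy as np

def pairwise_hashing_loss(U, S, eta=0.1):
    """DPSH-style negative log-likelihood of pairwise similarities.

    U: (n, k) real-valued network outputs (relaxed hash codes)
    S: (n, n) similarity matrix, S[i, j] = 1 if images i and j share a label, else 0
    """
    theta = 0.5 * U @ U.T                          # pairwise inner products
    # -log p(S | codes) = log(1 + exp(theta)) - S * theta, computed stably.
    nll = np.logaddexp(0.0, theta) - S * theta
    quant = np.sum((np.sign(U) - U) ** 2)          # push outputs towards {-1, +1}
    return nll.sum() + eta * quant

# Binary codes for retrieval would be b_i = sign(u_i); a real deep model would
# backpropagate through this (relaxed) objective during training.
```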
1901.11259
2913092861
Convolutional neural networks have been widely used in content-based image retrieval. To better deal with large-scale data, the deep hashing model is proposed as an effective method, which maps an image to a binary code that can be used for hashing search. However, most existing deep hashing models only utilize fine-level semantic labels or convert them to similar dissimilar labels for training. The natural semantic hierarchy structures are ignored in the training stage of the deep hashing model. In this paper, we present an effective algorithm to train a deep hashing model that can preserve a semantic hierarchy structure for large-scale image retrieval. Experiments on two datasets show that our method improves the fine-level retrieval performance. Meanwhile, our model achieves state-of-the-art results in terms of hierarchical retrieval.
Recently, the semantic hierarchy learning problem has been addressed in several works. A hierarchical semantic image retrieval model is proposed in @cite_14 , which encodes the hierarchy in the semantic similarity. By combining coarse- and fine-level labels, the work in @cite_18 shows that image classification performance can be improved. A similar idea is shared in @cite_5 , which uses a hierarchical training strategy for the face recognition task and integrates semantic relationships between class levels into deep learning. The hierarchical similarity learning problem has also been addressed for deep hashing in SHDH @cite_21 . SHDH tackles semantic hierarchy learning by proposing a weighted Hamming distance. However, SHDH also relies on pairwise relations, which have been shown to be less efficient than class-level labels @cite_17 @cite_12 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_21", "@cite_5", "@cite_12", "@cite_17" ], "mid": [ "2620400662", "1978962787", "2606348288", "2891244467", "2791396492", "2586937979" ], "abstract": [ "The performance of classifiers is in general improved by designing models with a large number of parameters or by ensembles. We tackle the problem of classification of coarse and fine grained categories, which share a semantic relationship. On being given the predictions that a classifier has for a given test sample, we adjust the probabilities according to the semantics of the categories, on which the classifier was trained. We present an algorithm for doing such an adjustment and we demonstrate improvement for both coarse and fine grained classification. We evaluate our method using convolutional neural networks. However, the algorithm can be applied to any classifier which outputs category wise probabilities.", "This paper addresses the problem of similar image retrieval, especially in the setting of large-scale datasets with millions to billions of images. The core novel contribution is an approach that can exploit prior knowledge of a semantic hierarchy. When semantic labels and a hierarchy relating them are available during training, significant improvements over the state of the art in similar image retrieval are attained. While some of this advantage comes from the ability to use additional information, experiments exploring a special case where no additional data is provided, show the new approach can still outperform OASIS [6], the current state of the art for similarity learning. Exploiting hierarchical relationships is most important for larger scale problems, where scalability becomes crucial. The proposed learning approach is fundamentally parallelizable and as a result scales more easily than previous work. An additional contribution is a novel hashing scheme (for bilinear similarity on vectors of probabilities, optionally taking into account hierarchy) that is able to reduce the computational cost of retrieval. Experiments are performed on Caltech256 and the larger ImageNet dataset.", "Recently, hashing methods have been widely used in large-scale image retrieval. However, most existing hashing methods did not consider the hierarchical relation of labels, which means that they ignored the rich information stored in the hierarchy. Moreover, most of previous works treat each bit in a hash code equally, which does not meet the scenario of hierarchical labeled data. In this paper, we propose a novel deep hashing method, called supervised hierarchical deep hashing (SHDH), to perform hash code learning for hierarchical labeled data. Specifically, we define a novel similarity formula for hierarchical labeled data by weighting each layer, and design a deep convolutional neural network to obtain a hash code for each data point. Extensive experiments on several real-world public datasets show that the proposed method outperforms the state-of-the-art baselines in the image retrieval task.", "Recent progress of face recognition benefits a lot from large-scale face datasets with deep Convoluitonal Neural Networks(CNN). However, when dataset contains a large number of subjects but with few samples for each subject, conventional CNN with softmax loss is heavily prone to overfitting. To address this issue, we propose a hierarchical training schema to optimize CNN with coarse-to-fine class labels, referred to as Hit-CNN. 
Firstly trained with coarse class labels and then refined with fine class labels, Hit-CNN is enabled the to capture the distribution of data from major variations to fine variations progressively, which can effectively relieve the overfitting and lead to better generalization. In this work, the hierarchical coarse-to-fine class labels are obtained via hierarchical k-means clustering according to the face identities. Evaluated on two face datasets, the proposed Hit-CNN provides better results compared with the conventional CNN under the circumstances of large-scale data with few samples per subject.", "Deep supervised hashing has emerged as an influential solution to large-scale semantic image retrieval problems in computer vision. In the light of recent progress, convolutional neural network based hashing methods typically seek pair-wise or triplet labels to conduct the similarity preserving learning. However, complex semantic concepts of visual contents are hard to capture by similar dissimilar labels, which limits the retrieval performance. Generally, pair-wise or triplet losses not only suffer from expensive training costs but also lack in extracting sufficient semantic information. In this regard, we propose a novel deep supervised hashing model to learn more compact class-level similarity preserving binary codes. Our deep learning based model is motivated by deep metric learning that directly takes semantic labels as supervised information in training and generates corresponding discriminant hashing code. Specifically, a novel cubic constraint loss function based on Gaussian distribution is proposed, which preserves semantic variations while penalizes the overlap part of different classes in the embedding space. To address the discrete optimization problem introduced by binary codes, a two-step optimization strategy is proposed to provide efficient training and avoid the problem of gradient vanishing. Extensive experiments on four large-scale benchmark databases show that our model can achieve the state-of-the-art retrieval performance. Moreover, when training samples are limited, our method surpasses other supervised deep hashing methods with non-negligible margins.", "This paper presents a simple yet effective supervised deep hash approach that constructs binary hash codes from labeled data for large-scale image search. We assume that the semantic labels are governed by several latent attributes with each attribute on or off , and classification relies on these attributes. Based on this assumption, our approach, dubbed supervised semantics-preserving deep hashing (SSDH), constructs hash functions as a latent layer in a deep network and the binary codes are learned by minimizing an objective function defined over classification error and other desirable hash codes properties. With this design, SSDH has a nice characteristic that classification and retrieval are unified in a single learning model. Moreover, SSDH performs joint learning of image representations, hash codes, and classification in a point-wised manner, and thus is scalable to large-scale datasets. SSDH is simple and can be realized by a slight enhancement of an existing deep architecture for classification; yet it is effective and outperforms other hashing approaches on several benchmarks and large datasets. Compared with state-of-the-art approaches, SSDH achieves higher retrieval accuracy, while the classification performance is not sacrificed." ] }
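A small illustrative sketch of a hierarchy-weighted Hamming distance in the spirit of SHDH (@cite_21), where different code segments are associated with different levels of the label hierarchy; the segment lengths and level weights are assumptions for the example, not the cited formulation.

```python
import numpy as np

def weighted_hamming(code_a, code_b, level_lengths=(16, 16, 32),
                     level_weights=(4.0, 2.0, 1.0)):
    """Hamming distance where bits encoding coarser hierarchy levels weigh more.

    code_a, code_b: binary arrays in {0, 1} of length sum(level_lengths);
    the first segment encodes the coarsest level, the last the finest.
    """
    code_a, code_b = np.asarray(code_a), np.asarray(code_b)
    assert len(code_a) == len(code_b) == sum(level_lengths)
    dist, start = 0.0, 0
    for length, weight in zip(level_lengths, level_weights):
        seg_a = code_a[start:start + length]
        seg_b = code_b[start:start + length]
        dist += weight * np.count_nonzero(seg_a != seg_b)
        start += length
    return dist

# Ranking retrieved images by this distance favours candidates that agree with
# the query at the coarse semantic levels before refining at the fine level.
```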
1901.11448
2952848538
The well known domain shift issue causes model performance to degrade when deployed to a new target domain with different statistics to training. Domain adaptation techniques alleviate this, but need some instances from the target domain to drive adaptation. Domain generalisation is the recently topical problem of learning a model that generalises to unseen domains out of the box, and various approaches aim to train a domain-invariant feature extractor, typically by adding some manually designed losses. In this work, we propose a learning to learn approach, where the auxiliary loss that helps generalisation is itself learned. Beyond conventional domain generalisation, we consider a more challenging setting of heterogeneous domain generalisation, where the unseen domains do not share label space with the seen ones, and the goal is to train a feature representation that is useful off-the-shelf for novel data and novel categories. Experimental evaluation demonstrates that our method outperforms state-of-the-art solutions in both settings.
Multi-Domain Learning (MDL): MDL addresses training a single model capable of solving multiple datasets (domains). If the data is relatively small and the domains are similar, this sharing can lead to improved performance compared to training a separate model per domain @cite_25 . On the other hand, for diverse domains with large data, MDL may under-perform a separate model per domain, but it is nonetheless of interest due to the simplicity of a single model and its better memory scalability compared to a separate model per domain @cite_27 @cite_29 . We mention MDL here because DG methods typically train on multiple source domains, as in MDL, but furthermore aim to generalise to novel held-out domains.
{ "cite_N": [ "@cite_27", "@cite_29", "@cite_25" ], "mid": [ "2963211188", "", "1703030490" ], "abstract": [ "There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.", "", "In this paper, we provide a new neural-network based perspective on multi-task learning (MTL) and multi-domain learning (MDL). By introducing the concept of a semantic descriptor, this framework unifies MDL and MTL as well as encompassing various classic and recent MTL MDL algorithms by interpreting them as different ways of constructing semantic descriptors. Our interpretation provides an alternative pipeline for zero-shot learning (ZSL), where a model for a novel class can be constructed without training data. Moreover, it leads to a new and practically relevant problem setting of zero-shot domain adaptation (ZSDA), which is the analogous to ZSL but for novel domains: A model for an unseen domain can be generated by its semantic descriptor. Experiments across this range of problems demonstrate that our framework outperforms a variety of alternatives." ] }
1901.11448
2952848538
The well known domain shift issue causes model performance to degrade when deployed to a new target domain with different statistics to training. Domain adaptation techniques alleviate this, but need some instances from the target domain to drive adaptation. Domain generalisation is the recently topical problem of learning a model that generalises to unseen domains out of the box, and various approaches aim to train a domain-invariant feature extractor, typically by adding some manually designed losses. In this work, we propose a learning to learn approach, where the auxiliary loss that helps generalisation is itself learned. Beyond conventional domain generalisation, we consider a more challenging setting of heterogeneous domain generalisation, where the unseen domains do not share label space with the seen ones, and the goal is to train a feature representation that is useful off-the-shelf for novel data and novel categories. Experimental evaluation demonstrates that our method outperforms state-of-the-art solutions in both settings.
Most existing DG approaches can be split into three categories: feature-based methods, classifier-based methods, and data augmentation methods. Feature-based methods: These aim to generate a domain-invariant representation, for example by minimising the distance between the empirical distributions of the source and target examples @cite_2 @cite_22 @cite_3 . Classifier-based methods: Some aim to enhance generalisation by fusing multiple sub-classifiers learned from the source domains @cite_21 @cite_20 @cite_34 , while others learn an improved classifier regularizer using source samples -- notably the recently proposed MetaReg @cite_8 . Data augmentation methods: CrossGrad @cite_19 provides domain-guided perturbations of input instances, which are then used to train a more robust model, while another method defines an adaptive data augmentation scheme by appending adversarial examples at each iteration. Our approach falls into the feature-based category, but meta-learns a feature-critic network to train a robust shared feature extractor.
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_21", "@cite_3", "@cite_19", "@cite_2", "@cite_34", "@cite_20" ], "mid": [ "", "2889965839", "2120149881", "2963043696", "2963838962", "2798658180", "", "" ], "abstract": [ "", "Training models that generalize to unseen domains at test time is a problem of fundamental importance in machine learning. In this work, we propose using regularization to capture this notion of domain generalization. We pose the problem of finding such a regularization function in a Learning to Learn (or) Meta Learning framework. The notion of domain generalization is explicitly captured by learning a regularizer that makes the model trained on one domain to perform well on another domain. Experimental validations on computer vision and natural language datasets indicate that our method can learn regularizers that achieve good cross-domain generalization.", "Cross-domain learning methods have shown promising results by leveraging labeled patterns from the auxiliary domain to learn a robust classifier for the target domain which has only a limited number of labeled samples. To cope with the considerable change between feature distributions of different domains, we propose a new cross-domain kernel learning framework into which many existing kernel methods can be readily incorporated. Our framework, referred to as Domain Transfer Multiple Kernel Learning (DTMKL), simultaneously learns a kernel function and a robust classifier by minimizing both the structural risk functional and the distribution mismatch between the labeled and unlabeled samples from the auxiliary and target domains. Under the DTMKL framework, we also propose two novel methods by using SVM and prelearned classifiers, respectively. Comprehensive experiments on three domain adaptation data sets (i.e., TRECVID, 20 Newsgroups, and email spam data sets) demonstrate that DTMKL-based methods outperform existing cross-domain learning and multiple kernel learning methods.", "Domain shift refers to the well known problem that a model trained in one source domain performs poorly when applied to a target domain with different statistics. Domain Generalization (DG) techniques attempt to alleviate this issue by producing models which by design generalize well to novel testing domains. We propose a novel meta-learning method for domain generalization. Rather than designing a specific model that is robust to domain shift as in most previous DG work, we propose a model agnostic training procedure for DG. Our algorithm simulates train test domain shift during training by synthesizing virtual testing domains within each mini-batch. The meta-optimization objective requires that steps to improve training domain performance should also improve testing domain performance. This meta-learning procedure trains models with good generalization ability to novel domains. We evaluate our method and achieve state of the art results on a recent cross-domain image classification benchmark, as well demonstrating its potential on two classic reinforcement learning tasks.", "We present CROSSGRAD , a method to use multi-domain training data to learn a classifier that generalizes to new domains. CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain. Most existing domain adaptation methods attempt to erase domain signals using techniques like domain adversarial training. 
In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains. We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances. CROSSGRAD jointly trains a label and a domain classifier on examples perturbed by loss gradients of each other’s objectives. This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions. Empirical evaluation on three different applications where this setting is natural establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and (2) data augmentation is a more stable and accurate method than domain adversarial training.", "In this paper, we tackle the problem of domain generalization: how to learn a generalized feature representation for an \"unseen\" target domain by taking the advantage of multiple seen source-domain data. We present a novel framework based on adversarial autoencoders to learn a generalized latent feature representation across domains for domain generalization. To be specific, we extend adversarial autoencoders by imposing the Maximum Mean Discrepancy (MMD) measure to align the distributions among different domains, and matching the aligned distribution to an arbitrary prior distribution via adversarial feature learning. In this way, the learned feature representation is supposed to be universal to the seen source domains because of the MMD regularization, and is expected to generalize well on the target domain because of the introduction of the prior distribution. We proposed an algorithm to jointly train different components of our proposed framework. Extensive experiments on various vision tasks demonstrate that our proposed framework can learn better generalized features for the unseen target domain compared with state-of-the-art domain generalization methods.", "", "" ] }
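As an example of the feature-based strategy described above, here is a compact NumPy implementation of a (biased) RBF-kernel Maximum Mean Discrepancy, the distribution-distance term minimised by methods such as @cite_2 ; the bandwidth heuristic is an illustrative assumption.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=None):
    """Biased squared MMD between feature sets X (n, d) and Y (m, d) with an RBF kernel."""
    if gamma is None:
        gamma = 1.0 / X.shape[1]        # simple default bandwidth, assumed for illustration
    def kernel(A, B):
        # Pairwise squared Euclidean distances, clipped for numerical safety.
        sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * np.clip(sq, 0.0, None))
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2.0 * kernel(X, Y).mean()

# A feature-based DG method would add a term like rbf_mmd2(features_domain_i,
# features_domain_j), summed over pairs of source domains, to the task loss so
# that the shared extractor learns domain-invariant features.
```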
1901.11448
2952848538
The well known domain shift issue causes model performance to degrade when deployed to a new target domain with different statistics to training. Domain adaptation techniques alleviate this, but need some instances from the target domain to drive adaptation. Domain generalisation is the recently topical problem of learning a model that generalises to unseen domains out of the box, and various approaches aim to train a domain-invariant feature extractor, typically by adding some manually designed losses. In this work, we propose a learning to learn approach, where the auxiliary loss that helps generalisation is itself learned. Beyond conventional domain generalisation, we consider a more challenging setting of heterogeneous domain generalisation, where the unseen domains do not share label space with the seen ones, and the goal is to train a feature representation that is useful off-the-shelf for novel data and novel categories. Experimental evaluation demonstrates that our method outperforms state-of-the-art solutions in both settings.
Few studies have considered the heterogeneous DG setting, where the domains do not share the same label space. In this setting, we do not expect the classifier to generalise directly to the target domain (impossible due to the change in label space), but we do aim to improve the robustness of a source-domain-trained feature extractor in terms of its generalisation, so that it can successfully represent a novel problem. Most existing DG methods cannot be applied here, besides Domain Adaptive Neural Networks @cite_5 and CrossGrad @cite_19 . We show how to modify the MetaReg @cite_8 and Reptile @cite_37 algorithms to address this DG setting. The most relevant benchmark is the Visual Decathlon (VD) @cite_27 . The VD benchmark was proposed to evaluate multi-domain and lifelong @cite_31 learning; in its original form, competitors must learn a model covering all ten domains with low parameter growth. We re-purpose the VD benchmark for DG evaluation: a model trained on the six largest datasets in VD should produce a feature representation general and robust enough to allow the four smaller datasets to be classified with a simple shallow classifier.
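The re-purposed VD protocol above boils down to: train a feature extractor on the large source domains, freeze it, and fit only a shallow classifier on each small held-out dataset. The sketch below illustrates that protocol on synthetic data; the fixed random projection standing in for the trained feature extractor, the logistic-regression classifier and all sizes are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for a feature extractor trained on the six large source domains:
# here just a fixed random projection from raw inputs to a 64-d feature space.
W_frozen = rng.normal(size=(256, 64))

def extract_features(x):
    """Frozen feature extractor: no further training on the held-out domain."""
    return np.tanh(x @ W_frozen)

# A small held-out dataset with its own label space (10 classes unseen in training).
x_heldout = rng.normal(size=(500, 256))
y_heldout = rng.integers(0, 10, size=500)

x_tr, x_te, y_tr, y_te = train_test_split(x_heldout, y_heldout, test_size=0.3, random_state=0)

# Only a shallow classifier is fit on the frozen features of the new domain.
clf = LogisticRegression(max_iter=1000).fit(extract_features(x_tr), y_tr)
print("held-out accuracy:", clf.score(extract_features(x_te), y_te))
```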
{ "cite_N": [ "@cite_37", "@cite_8", "@cite_19", "@cite_27", "@cite_5", "@cite_31" ], "mid": [ "2795900505", "2889965839", "2963838962", "2963211188", "1731081199", "2962707369" ], "abstract": [ "This paper considers meta-learning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously unseen task sampled from this distribution. We analyze a family of algorithms for learning a parameter initialization that can be fine-tuned quickly on a new task, using only first-order derivatives for the meta-learning updates. This family includes and generalizes first-order MAML, an approximation to MAML obtained by ignoring second-order derivatives. It also includes Reptile, a new algorithm that we introduce here, which works by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task. We expand on the results from showing that first-order meta-learning algorithms perform well on some well-established benchmarks for few-shot classification, and we provide theoretical analysis aimed at understanding why these algorithms work.", "Training models that generalize to unseen domains at test time is a problem of fundamental importance in machine learning. In this work, we propose using regularization to capture this notion of domain generalization. We pose the problem of finding such a regularization function in a Learning to Learn (or) Meta Learning framework. The notion of domain generalization is explicitly captured by learning a regularizer that makes the model trained on one domain to perform well on another domain. Experimental validations on computer vision and natural language datasets indicate that our method can learn regularizers that achieve good cross-domain generalization.", "We present CROSSGRAD , a method to use multi-domain training data to learn a classifier that generalizes to new domains. CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain. Most existing domain adaptation methods attempt to erase domain signals using techniques like domain adversarial training. In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains. We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances. CROSSGRAD jointly trains a label and a domain classifier on examples perturbed by loss gradients of each other’s objectives. This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions. Empirical evaluation on three different applications where this setting is natural establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and (2) data augmentation is a more stable and accurate method than domain adversarial training.", "There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. 
Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.", "We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application.", "Given an existing trained neural network, it is often desirable to learn new capabilities without hindering performance of those already learned. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added task, typically as many as the original network. We propose a method called Deep Adaptation Networks (DAN) that constrains newly learned filters to be linear combinations of existing ones. DANs preserve performance on the original task, require a fraction (typically 13 ) of the number of parameters compared to standard fine-tuning procedures and converge in less cycles of training to a comparable or better level of performance. When coupled with standard network quantization techniques, we further reduce the parameter cost to around 3 of the original with negligible or no loss in accuracy. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method on a range of image classification tasks and explore different aspects of its behavior." ] }
1901.11448
2952848538
The well known domain shift issue causes model performance to degrade when deployed to a new target domain with different statistics to training. Domain adaptation techniques alleviate this, but need some instances from the target domain to drive adaptation. Domain generalisation is the recently topical problem of learning a model that generalises to unseen domains out of the box, and various approaches aim to train a domain-invariant feature extractor, typically by adding some manually designed losses. In this work, we propose a learning to learn approach, where the auxiliary loss that helps generalisation is itself learned. Beyond conventional domain generalisation, we consider a more challenging setting of heterogeneous domain generalisation, where the unseen domains do not share label space with the seen ones, and the goal is to train a feature representation that is useful off-the-shelf for novel data and novel categories. Experimental evaluation demonstrates that our method outperforms state-of-the-art solutions in both settings.
Meta-Learning. Meta-learning (a.k.a. learning to learn, @cite_15 @cite_13 ) has recently seen a resurgence of interest, with applications in few-shot learning @cite_9 @cite_40 and beyond @cite_16 . In few-shot meta-learning, a common strategy is to simulate the few-shot learning scenario by randomly drawing few-shot train/test episodes from the full training set. Training the network to solve such episodes tunes it to perform well at few-shot learning. We adapt this episodic training strategy by creating virtual training and testing splits of our source domains in each mini-batch.
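The episodic strategy described above amounts to a simple sampling loop: at every iteration, split the available source domains into virtual meta-train and meta-test groups and draw a mini-batch from each. A minimal sketch of that sampling (and nothing else) is given below; the number of domains, batch size and data layout are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic source domains: domain id -> (inputs, labels).
domains = {d: (rng.normal(size=(1000, 32)), rng.integers(0, 5, size=1000)) for d in range(4)}

def sample_episode(domains, n_meta_test=1, batch_size=64):
    """Split source domains into virtual train/test groups and draw a batch from each."""
    ids = list(domains)
    rng.shuffle(ids)
    meta_test_ids, meta_train_ids = ids[:n_meta_test], ids[n_meta_test:]

    def draw(domain_ids):
        d = rng.choice(domain_ids)
        x, y = domains[d]
        idx = rng.choice(len(x), size=batch_size, replace=False)
        return x[idx], y[idx]

    return draw(meta_train_ids), draw(meta_test_ids)

for step in range(3):
    (x_tr, y_tr), (x_te, y_te) = sample_episode(domains)
    # The model would take a gradient step on (x_tr, y_tr) and be evaluated /
    # meta-updated on (x_te, y_te); only the sampling is shown here.
    print(step, x_tr.shape, x_te.shape)
```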
{ "cite_N": [ "@cite_9", "@cite_40", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2742093937", "2601450892", "1486056878", "", "99485931" ], "abstract": [ "Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.", "A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics test. This approach relies on a complicated fine-tuning procedure and an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size. We propose a more streamlined approach, prototypical networks, that learns a metric space in which few-shot classification can be performed by computing Euclidean distances to prototype representations of each class, rather than individual points. Our method is competitive with state-of-the-art one-shot classification approaches while being much simpler and more scalable with the size of the support set. We empirically demonstrate the performance of our approach on the Omniglot and mini-ImageNet datasets. We further demonstrate that a similar idea can be used for zero-shot learning, where each class is described by a set of attributes, and achieve state-of-the-art results on the Caltech UCSD bird dataset.", "We study task sequences that allow for speeding up the learner‘s average reward intake through appropriate shifts of inductive bias (changes of the learner‘s policy). To evaluate long-term effects of bias shifts setting the stage for later bias shifts we use the “success-story algorithm” (SSA). SSA is occasionally called at times that may depend on the policy itself. It uses backtracking to undo those bias shifts that have not been empirically observed to trigger long-term reward accelerations (measured up until the current SSA call). Bias shifts that survive SSA represent a lifelong success history. Until the next SSA call, they are considered useful and build the basis for additional bias shifts. SSA allows for plugging in a wide variety of learning algorithms. We plug in (1) a novel, adaptive extension of Levin search and (2) a method for embedding the learner‘s policy modification strategy within the policy itself (incremental self-improvement). Our inductive transfer case studies involve complex, partially observable environments where traditional reinforcement learning fails.", "", "Preface. Part I: Overview Articles. 1. Learning to Learn: Introduction and Overview S. Thrun, L. Pratt. 2. A Survey of Connectionist Network Reuse Through Transfer L. Pratt, B. Jennings. 3. Transfer in Cognition A. 
Robins. Part II: Prediction. 4. Theoretical Models of Learning to Learn J. Baxter. 5. Multitask Learning R. Caruana. 6. Making a Low-Dimensional Representation Suitable for Diverse Tasks N. Intrator, S. Edelman. 7. The Canonical Distortion Measure for Vector Quantization and Function Approximation J. Baxter. 8. Lifelong Learning Algorithms S. Thrun. Part III: Relatedness. 9. The Parallel Transfer of Task Knowledge Using Dynamic Learning Rates Based on a Measure of Relatedness D.L. Silver, R.E. Mercer. 10. Clustering Learning Tasks and the Selective Cross-Task Transfer of Knowledge S. Thrun, J. O'Sullivan. Part IV: Control. 11. CHILD: A First Step Towards Continual Learning M.B. Ring. 12. Reinforcement Learning with Self-Modifying Policies J. Schmidhuber, et al 13. Creating Advice-Taking Reinforcement Learners R. Maclin, J.W. Shavlik. Contributing Authors. Index." ] }
1901.11448
2952848538
The well known domain shift issue causes model performance to degrade when deployed to a new target domain with different statistics to training. Domain adaptation techniques alleviate this, but need some instances from the target domain to drive adaptation. Domain generalisation is the recently topical problem of learning a model that generalises to unseen domains out of the box, and various approaches aim to train a domain-invariant feature extractor, typically by adding some manually designed losses. In this work, we propose a learning to learn approach, where the auxiliary loss that helps generalisation is itself learned. Beyond conventional domain generalisation, we consider a more challenging setting of heterogeneous domain generalisation, where the unseen domains do not share label space with the seen ones, and the goal is to train a feature representation that is useful off-the-shelf for novel data and novel categories. Experimental evaluation demonstrates that our method outperforms state-of-the-art solutions in both settings.
A few methods have applied related episodic meta-learning strategies to DG @cite_2 @cite_8 . MLDG @cite_2 defined a heuristic gradient-descent update rule based on the gradients of the simulated training and testing domains. MetaReg @cite_8 trains the weights of the classifier's regulariser so as to produce a more general classifier for a fixed feature extractor. In contrast, our approach produces a more general feature extractor that can be used with any classifier. This is achieved by simultaneously learning an auxiliary loss function @cite_24 (i.e., the critic network) that helps to train the feature extractor for improved domain invariance.
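MLDG's heuristic update can be illustrated with a small first-order sketch: take a virtual gradient step on the meta-train domains, evaluate the loss gradient on the meta-test domain at the adapted parameters, and combine both gradients in the actual update. The example below does this for plain logistic regression with analytic gradients on synthetic data; it is a simplified, first-order illustration of that style of update, not the learned-critic method proposed in the paper, and the model, step sizes and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, x, y):
    """Binary logistic loss (labels in {-1, +1}) and its gradient."""
    margins = y * (x @ w)
    p = 1.0 / (1.0 + np.exp(margins))            # sigmoid(-margin)
    loss = np.mean(np.log1p(np.exp(-margins)))
    grad = -(x * (y * p)[:, None]).mean(axis=0)
    return loss, grad

# Two synthetic source domains sharing a ground-truth rule, with a covariate shift.
w_true = rng.normal(size=20)
def make_domain(shift):
    x = rng.normal(size=(500, 20)) + shift
    y = np.sign(x @ w_true + 0.1 * rng.normal(size=500))
    return x, y

meta_train, meta_test = make_domain(0.0), make_domain(0.5)

w = np.zeros(20)
alpha, beta, lr = 0.1, 1.0, 0.1
for step in range(200):
    f_loss, f_grad = loss_and_grad(w, *meta_train)         # loss on virtual-train domain
    w_adapted = w - alpha * f_grad                          # virtual inner step
    g_loss, g_grad = loss_and_grad(w_adapted, *meta_test)   # loss on virtual-test domain
    w -= lr * (f_grad + beta * g_grad)                      # combined (first-order) update
print("meta-train loss", round(f_loss, 3), "meta-test loss", round(g_loss, 3))
```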
{ "cite_N": [ "@cite_24", "@cite_8", "@cite_2" ], "mid": [ "2604538595", "2889965839", "2798658180" ], "abstract": [ "We approach structured output prediction by optimizing a deep value network (DVN) to precisely estimate the task loss on different output configurations for a given input. Once the model is trained, we perform inference by gradient descent on the continuous relaxations of the output variables to find outputs with promising scores from the value network. When applied to image segmentation, the value network takes an image and a segmentation mask as inputs and predicts a scalar estimating the intersection over union between the input and ground truth masks. For multi-label classification, the DVN's objective is to correctly predict the F1 score for any potential label configuration. The DVN framework achieves the state-of-the-art results on multi-label prediction and image segmentation benchmarks.", "Training models that generalize to unseen domains at test time is a problem of fundamental importance in machine learning. In this work, we propose using regularization to capture this notion of domain generalization. We pose the problem of finding such a regularization function in a Learning to Learn (or) Meta Learning framework. The notion of domain generalization is explicitly captured by learning a regularizer that makes the model trained on one domain to perform well on another domain. Experimental validations on computer vision and natural language datasets indicate that our method can learn regularizers that achieve good cross-domain generalization.", "In this paper, we tackle the problem of domain generalization: how to learn a generalized feature representation for an \"unseen\" target domain by taking the advantage of multiple seen source-domain data. We present a novel framework based on adversarial autoencoders to learn a generalized latent feature representation across domains for domain generalization. To be specific, we extend adversarial autoencoders by imposing the Maximum Mean Discrepancy (MMD) measure to align the distributions among different domains, and matching the aligned distribution to an arbitrary prior distribution via adversarial feature learning. In this way, the learned feature representation is supposed to be universal to the seen source domains because of the MMD regularization, and is expected to generalize well on the target domain because of the introduction of the prior distribution. We proposed an algorithm to jointly train different components of our proposed framework. Extensive experiments on various vision tasks demonstrate that our proposed framework can learn better generalized features for the unseen target domain compared with state-of-the-art domain generalization methods." ] }
1901.11462
2952666417
Conversational agents have begun to rise both in the academic (in terms of research) and commercial (in terms of applications) world. This paper investigates the task of building a non-goal driven conversational agent, using neural network generative models and analyzes how the conversation context is handled. It compares a simpler Encoder-Decoder with a Hierarchical Recurrent Encoder-Decoder architecture, which includes an additional module to model the context of the conversation using previous utterances information. We found that the hierarchical model was able to extract relevant context information and include them in the generation of the output. However, it performed worse (35-40 ) than the simple Encoder-Decoder model regarding both grammatically correct output and meaningful response. Despite these results, experiments demonstrate how conversations about similar topics appear close to each other in the context space due to the increased frequency of specific topic-related words, thus leaving promising directions for future research and how the context of a conversation can be exploited.
The traditional approach for Conversational Agents follows a modular design, dividing the process into three modules: a Natural Language Understanding (NLU) unit, a Dialogue Manager and a Natural Language Generation (NLG) module. The NLU module processes the input and extracts useful information. This information is then used by the Dialogue Manager to update internal states, send a query to a knowledge-based system, or simply follow precoded instructions. Finally, the NLG uses the information from the Dialogue Manager to generate the output sentence. The simplest technique used for NLU is to spot certain keywords in the input, often working together with a script-based Dialogue Manager. However, throughout the years there have been many attempts to improve the NLU unit to better extract text information, using techniques including statistical modeling of language @cite_2 , skip-gram models @cite_4 and, more recently, deep neural networks @cite_3 . With the rise of Deep Learning in recent years, Dialogue Systems research has mainly focused on end-to-end models, capable of subsuming all three modules in a single deep neural network trained on a large dataset. One end-to-end RNN architecture that has proved particularly successful in recent years is the Encoder-Decoder.
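The modular pipeline sketched in the paragraph above can be made concrete with a toy rule-based agent: a keyword-spotting NLU, a dialogue manager that tracks a small state, and a template-based NLG. The miniature below is purely illustrative; the intents, slots, state fields and templates are invented for the example and do not correspond to any particular system.

```python
# Toy modular conversational agent: keyword-spotting NLU -> dialogue manager -> template NLG.

def nlu(utterance):
    """Keyword spotting: map the input to an intent and crude slots."""
    text = utterance.lower()
    if "weather" in text:
        return {"intent": "ask_weather", "city": "london" if "london" in text else None}
    if any(w in text for w in ("hi", "hello", "hey")):
        return {"intent": "greet"}
    return {"intent": "unknown"}

def dialogue_manager(frame, state):
    """Update the dialogue state and decide which action the agent should take."""
    state.update({k: v for k, v in frame.items() if v is not None})
    if frame["intent"] == "ask_weather" and state.get("city") is None:
        return "request_city"
    return frame["intent"]

def nlg(action, state):
    """Template-based generation for each dialogue act."""
    templates = {
        "greet": "Hello! How can I help?",
        "request_city": "Which city would you like the weather for?",
        "ask_weather": f"Here is the weather for {state.get('city')}.",
        "unknown": "Sorry, I did not understand that.",
    }
    return templates[action]

state = {}
for turn in ["Hello!", "What's the weather like?", "In London please, the weather?"]:
    frame = nlu(turn)
    action = dialogue_manager(frame, state)
    print("user:", turn, "| agent:", nlg(action, state))
```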
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "2153579005", "2117130368", "1574901103" ], "abstract": [ "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.", "Statistical approaches to processing natural language text have become dominant in recent years. This foundational text is the first comprehensive introduction to statistical natural language processing (NLP) to appear. The book contains all the theory and algorithms needed for building NLP tools. It provides broad but rigorous coverage of mathematical and linguistic foundations, as well as detailed discussion of statistical methods, allowing students and researchers to construct their own implementations. The book covers collocation finding, word sense disambiguation, probabilistic parsing, information retrieval, and other applications." ] }
1901.11462
2952666417
Conversational agents have begun to rise both in the academic (in terms of research) and commercial (in terms of applications) world. This paper investigates the task of building a non-goal driven conversational agent, using neural network generative models and analyzes how the conversation context is handled. It compares a simpler Encoder-Decoder with a Hierarchical Recurrent Encoder-Decoder architecture, which includes an additional module to model the context of the conversation using previous utterances information. We found that the hierarchical model was able to extract relevant context information and include them in the generation of the output. However, it performed worse (35-40 ) than the simple Encoder-Decoder model regarding both grammatically correct output and meaningful response. Despite these results, experiments demonstrate how conversations about similar topics appear close to each other in the context space due to the increased frequency of specific topic-related words, thus leaving promising directions for future research and how the context of a conversation can be exploited.
The use of Encoder-Decoder architectures for natural language processing was first proposed as a solution for text translation in 2014 by @cite_7 . From then on, the architecture has been applied to many other tasks, including conversational agents @cite_6 . However, generating responses was found to be considerably more difficult than translating between languages, probably due to the broader range of possible correct answers to any given input. A limitation of Encoder-Decoder models in producing meaningful conversations is that any output is influenced only by the latest question. Thus, important factors are ignored, such as the context of the conversation, the speaker, and information provided in previous inputs. In 2015, the authors of @cite_8 proposed an updated version of the Encoder-Decoder architecture, called the Hierarchical Recurrent Encoder-Decoder (HRED), originally used for query suggestions. In their paper, they demonstrate that the architecture is capable of using context information extracted from previous queries to generate more appropriate query suggestions. This paper attempts to apply such an architecture to a dialogue system.
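The difference between a plain Encoder-Decoder and HRED is easiest to see at the level of tensor flow: HRED encodes each utterance, runs a second (context) RNN over the resulting utterance vectors, and conditions the decoder on that context state. The PyTorch sketch below shows only this forward pass on random token ids; the single-layer GRUs, the dimensions, and the absence of attention, training and decoding logic are simplifying assumptions.

```python
import torch
import torch.nn as nn

vocab, emb_dim, hid = 1000, 64, 128
embed = nn.Embedding(vocab, emb_dim)
utt_encoder = nn.GRU(emb_dim, hid, batch_first=True)   # encodes one utterance
ctx_encoder = nn.GRU(hid, hid, batch_first=True)       # runs over utterance vectors
decoder = nn.GRU(emb_dim, hid, batch_first=True)       # generates the response
out_proj = nn.Linear(hid, vocab)

batch, n_utts, utt_len, resp_len = 2, 3, 10, 7
dialogue = torch.randint(0, vocab, (batch, n_utts, utt_len))   # past utterances
response_in = torch.randint(0, vocab, (batch, resp_len))       # shifted target tokens

# 1) Encode each utterance independently; keep its final hidden state.
flat = dialogue.view(batch * n_utts, utt_len)
_, h_utt = utt_encoder(embed(flat))                    # h_utt: (1, batch*n_utts, hid)
utt_vecs = h_utt.squeeze(0).view(batch, n_utts, hid)

# 2) Context RNN summarises the sequence of utterance vectors.
_, h_ctx = ctx_encoder(utt_vecs)                       # h_ctx: (1, batch, hid)

# 3) Decoder is conditioned on the context state (a plain Encoder-Decoder would
#    instead use only the last utterance's encoding here).
dec_out, _ = decoder(embed(response_in), h_ctx)        # (batch, resp_len, hid)
logits = out_proj(dec_out)                             # (batch, resp_len, vocab)
print(logits.shape)
```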
{ "cite_N": [ "@cite_6", "@cite_7", "@cite_8" ], "mid": [ "2399880602", "2950635152", "1993378086" ], "abstract": [ "Sequential data often possesses a hierarchical structure with complex dependencies between subsequences, such as found between the utterances in a dialogue. In an effort to model this kind of generative process, we propose a neural network-based generative architecture, with latent stochastic variables that span a variable number of time steps. We apply the proposed model to the task of dialogue response generation and compare it with recent neural network architectures. We evaluate the model performance through automatic evaluation metrics and by carrying out a human evaluation. The experiments demonstrate that our model improves upon recently proposed models and that the latent variables facilitate the generation of long outputs and maintain the context.", "In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.", "Users may strive to formulate an adequate textual query for their information need. Search engines assist the users by presenting query suggestions. To preserve the original search intent, suggestions should be context-aware and account for the previous queries issued by the user. Achieving context awareness is challenging due to data sparsity. We present a novel hierarchical recurrent encoder-decoder architecture that makes possible to account for sequences of previous queries of arbitrary lengths. As a result, our suggestions are sensitive to the order of queries in the context while avoiding data sparsity. Additionally, our model can suggest for rare, or long-tail, queries. The produced suggestions are synthetic and are sampled one word at a time, using computationally cheap decoding techniques. This is in contrast to current synthetic suggestion models relying upon machine learning pipelines and hand-engineered feature sets. Results show that our model outperforms existing context-aware approaches in a next query prediction setting. In addition to query suggestion, our architecture is general enough to be used in a variety of other applications." ] }
1901.11286
2899274556
Abstract Feature selection (FS) is a key preprocessing step in data mining. CFS (Correlation-Based Feature Selection) is an FS algorithm that has been successfully applied to classification problems in many domains. We describe Distributed CFS (DiCFS) as a completely redesigned, scalable, parallel and distributed version of the CFS algorithm, capable of dealing with the large volumes of data typical of big data applications. Two versions of the algorithm were implemented and compared using the Apache Spark cluster computing model, currently gaining popularity due to its much faster processing times than Hadoop’s MapReduce model. We tested our algorithms on four publicly available datasets, each consisting of a large number of instances and two also consisting of a large number of features. The results show that our algorithms were superior in terms of both time-efficiency and scalability. In leveraging a computer cluster, they were able to handle larger datasets than the non-distributed WEKA version while maintaining the quality of the results, i.e., exactly the same features were returned by our algorithms when compared to the original algorithm available in WEKA.
As might be expected, filter-based FS algorithms have asymptotic complexities that depend on the number of features and/or instances in a dataset. Many algorithms, such as the CFS, have quadratic complexities, while the most frequently used algorithms have at least linear complexities @cite_26 . This is why, in recent years, many attempts have been made to achieve more scalable FS methods. In what follows, we analyse recent work on the design of new scalable FS methods according to parallelization approaches: (i) search-oriented, (ii) dataset-split-oriented, or (iii) filter-oriented.
{ "cite_N": [ "@cite_26" ], "mid": [ "323404752" ], "abstract": [ "The explosion of big data has posed important challenges to researchers.Feature selection is paramount when dealing with high-dimensional datasets.We review the state-of-the-art and recent contributions in feature selection.The emerging challenges in feature selection are identified and discussed. In an era of growing data complexity and volume and the advent of big data, feature selection has a key role to play in helping reduce high-dimensionality in machine learning problems. We discuss the origins and importance of feature selection and outline recent contributions in a range of applications, from DNA microarray analysis to face recognition. Recent years have witnessed the creation of vast datasets and it seems clear that these will only continue to grow in size and number. This new big data scenario offers both opportunities and challenges to feature selection researchers, as there is a growing need for scalable yet efficient feature selection methods, given that existing methods are likely to prove inadequate." ] }
1901.11286
2899274556
Abstract Feature selection (FS) is a key preprocessing step in data mining. CFS (Correlation-Based Feature Selection) is an FS algorithm that has been successfully applied to classification problems in many domains. We describe Distributed CFS (DiCFS) as a completely redesigned, scalable, parallel and distributed version of the CFS algorithm, capable of dealing with the large volumes of data typical of big data applications. Two versions of the algorithm were implemented and compared using the Apache Spark cluster computing model, currently gaining popularity due to its much faster processing times than Hadoop’s MapReduce model. We tested our algorithms on four publicly available datasets, each consisting of a large number of instances and two also consisting of a large number of features. The results show that our algorithms were superior in terms of both time-efficiency and scalability. In leveraging a computer cluster, they were able to handle larger datasets than the non-distributed WEKA version while maintaining the quality of the results, i.e., exactly the same features were returned by our algorithms when compared to the original algorithm available in WEKA.
@cite_8 developed parallel versions of three forward-search-based FS algorithms, where a wrapper with a logistic regression classifier is used to guide a search parallelized using the MapReduce model.
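A sequential-forward-search wrapper of the kind described above is easy to state in code: at each round, every remaining feature is evaluated by training the wrapped classifier on the current subset plus that candidate, and the best candidate is added. The per-candidate evaluations in the inner loop are independent, which is the part a MapReduce implementation distributes. The sketch below is a plain single-machine illustration with a logistic-regression evaluator on synthetic data; the stopping rule and all sizes are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

x, y = make_classification(n_samples=400, n_features=20, n_informative=5, random_state=0)

def score(feature_idx):
    """Wrapper evaluation: CV accuracy of logistic regression on a candidate subset."""
    return cross_val_score(LogisticRegression(max_iter=1000), x[:, feature_idx], y, cv=3).mean()

selected, remaining = [], list(range(x.shape[1]))
best_score = 0.0
while remaining:
    # Each candidate evaluation below is independent of the others; this inner
    # "map" over candidates is what a MapReduce/Spark implementation distributes.
    candidate_scores = {f: score(selected + [f]) for f in remaining}
    best_f, new_score = max(candidate_scores.items(), key=lambda kv: kv[1])
    if new_score <= best_score:      # stop when no candidate improves the wrapper score
        break
    selected.append(best_f)
    remaining.remove(best_f)
    best_score = new_score

print("selected features:", selected, "cv accuracy:", round(best_score, 3))
```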
{ "cite_N": [ "@cite_8" ], "mid": [ "2102529789" ], "abstract": [ "The set of features used by a learning algorithm can have a dramatic impact on the performance of that algorithm. Including extraneous features can make the learning problem harder by adding useless, noisy dimensions that lead to over-fitting and increased computational complexity. Conversely, leaving out useful features can deprive the model of important signals. The problem of feature selection is to find a subset of features that allows the learning algorithm to learn the “best” model in terms of measures such as accuracy or model simplicity. The problem of feature selection continues to grow in both importance and difficulty as extremely high-dimensional data sets become the standard in real-world machine learning tasks. Scalability can become a problem for even simple approaches. For example, common feature selection approaches that evaluate each new feature by training a new model containing that feature require a learning a linear number of models each time they add a new feature. This computational cost can add up quickly when we are iteratively adding many new features. Even techniques that use relatively computationally inexpensive tests of a feature’s value, such as mutual information, require at least linear time in the number of features being evaluated. As a simple illustrative example consider the task of classifying websites. In this case the data set could easily contain many millions of examples. Just including very basic features such as text unigrams on the page or HTML tags could easily provide many thousands of potential features for the model. Considering more complex attributes such as bigrams of words" ] }
1901.11286
2899274556
Abstract Feature selection (FS) is a key preprocessing step in data mining. CFS (Correlation-Based Feature Selection) is an FS algorithm that has been successfully applied to classification problems in many domains. We describe Distributed CFS (DiCFS) as a completely redesigned, scalable, parallel and distributed version of the CFS algorithm, capable of dealing with the large volumes of data typical of big data applications. Two versions of the algorithm were implemented and compared using the Apache Spark cluster computing model, currently gaining popularity due to its much faster processing times than Hadoop’s MapReduce model. We tested our algorithms on four publicly available datasets, each consisting of a large number of instances and two also consisting of a large number of features. The results show that our algorithms were superior in terms of both time-efficiency and scalability. In leveraging a computer cluster, they were able to handle larger datasets than the non-distributed WEKA version while maintaining the quality of the results, i.e., exactly the same features were returned by our algorithms when compared to the original algorithm available in WEKA.
@cite_12 addressed the FS scaling problem using an asynchronous search approach, given that synchronous search, as commonly performed, can lead to efficiency losses due to the inactivity of some processors waiting for other processors to end their tasks. In their tests, they first obtained an initial reduction using a mutual information (MI) @cite_9 filter and then evaluated subsets using a random forest (RF) @cite_14 classifier. However, as stated by those authors, any other approach could be used for subset evaluation.
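The two ingredients mentioned above, an MI filter for the initial reduction and a random-forest wrapper for evaluating candidate subsets, are sketched below on synthetic data. The asynchronous, distributed search itself is not shown, and the cut-off of 15 features and the forest size are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

x, y = make_classification(n_samples=500, n_features=50, n_informative=8, random_state=1)

# Initial reduction: keep the top-k features by mutual information with the label.
mi = mutual_info_classif(x, y, random_state=1)
top_k = np.argsort(mi)[::-1][:15]

def evaluate(subset):
    """Subset evaluation with a random-forest wrapper (any evaluator could be used)."""
    rf = RandomForestClassifier(n_estimators=100, random_state=1)
    return cross_val_score(rf, x[:, subset], y, cv=3).mean()

# A search procedure (asynchronous and parallel in the cited work) would now explore
# subsets of `top_k`; here we just evaluate the reduced set once for illustration.
print("CV accuracy on MI-reduced subset:", round(evaluate(top_k), 3))
```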
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_12" ], "mid": [ "2154053567", "1930624869", "2587322366" ], "abstract": [ "Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first derive an equivalent form, called minimal-redundancy-maximal-relevance criterion (mRMR), for first-order incremental feature selection. Then, we present a two-stage feature selection algorithm by combining mRMR and other more sophisticated feature selectors (e.g., wrappers). This allows us to select a compact set of superior features at very low cost. We perform extensive experimental comparison of our algorithm and other methods using three different classifiers (naive Bayes, support vector machine, and linear discriminate analysis) and four different data sets (handwritten digits, arrhythmia, NCI cancer cell lines, and lymphoma tissues). The results confirm that mRMR leads to promising improvement on feature selection and classification accuracy.", "Decision trees are attractive classifiers due to their high execution speed. But trees derived with traditional methods often cannot be grown to arbitrary complexity for possible loss of generalization accuracy on unseen data. The limitation on complexity usually means suboptimal accuracy on training data. Following the principles of stochastic modeling, we propose a method to construct tree-based classifiers whose capacity can be arbitrarily expanded for increases in accuracy for both training and unseen data. The essence of the method is to build multiple trees in randomly selected subspaces of the feature space. Trees in, different subspaces generalize their classification in complementary ways, and their combined classification can be monotonically improved. The validity of the method is demonstrated through experiments on the recognition of handwritten digits.", "Reducing the dimensionality of datasets is a fundamental step in the task of building a classification model. Feature selection is the process of selecting a smaller subset of features from the original one in order to enhance the performance of the classification model. The problem is known to be NP-hard, and despite the existence of several algorithms there is not one that outperforms the others in all scenarios. Due to the complexity of the problem usually feature selection algorithms have to compromise the quality of their solutions in order to execute in a practicable amount of time. Parallel computing techniques emerge as a potential solution to tackle this problem. There are several approaches that already execute feature selection in parallel resorting to synchronous models. These are preferred due to their simplicity and capability to use with any feature selection algorithm. However, synchronous models implement pausing points during the execution flow, which decrease the parallel performance. In this paper, we discuss the challenges of executing feature selection algorithms in parallel using asynchronous models, and present a feature selection algorithm that favours these models. Furthermore, we present two strategies for an asynchronous parallel execution not only of our algorithm but of any other feature selection approach. The first strategy solves the problem using the distributed memory paradigm, while the second exploits the use of shared memory. 
We evaluate the parallel performance of our strategies using up to 32 cores. The results show near linear speedups for both strategies, with the shared memory strategy outperforming the distributed one. Additionally, we provide an example of adapting our strategies to execute the Sequential forward Search asynchronously. We further test this version versus a synchronous one. Our results revealed that, by using an asynchronous strategy, we are able to save an average of 7.5 of the execution time." ] }
1901.11286
2899274556
Abstract Feature selection (FS) is a key preprocessing step in data mining. CFS (Correlation-Based Feature Selection) is an FS algorithm that has been successfully applied to classification problems in many domains. We describe Distributed CFS (DiCFS) as a completely redesigned, scalable, parallel and distributed version of the CFS algorithm, capable of dealing with the large volumes of data typical of big data applications. Two versions of the algorithm were implemented and compared using the Apache Spark cluster computing model, currently gaining popularity due to its much faster processing times than Hadoop’s MapReduce model. We tested our algorithms on four publicly available datasets, each consisting of a large number of instances and two also consisting of a large number of features. The results show that our algorithms were superior in terms of both time-efficiency and scalability. In leveraging a computer cluster, they were able to handle larger datasets than the non-distributed WEKA version while maintaining the quality of the results, i.e., exactly the same features were returned by our algorithms when compared to the original algorithm available in WEKA.
@cite_23 used the MapReduce model to implement a wrapper-based evolutionary search FS method. The dataset was split by instances and the FS method was applied to each resulting subset. Simple majority voting was used as a reduction step for the selected features, and the final subset of features was selected according to a user-defined threshold. All tests were carried out using the EPSILON dataset, which we also use here (see ).
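The split-by-instances-and-vote scheme can be sketched compactly: partition the rows, run feature selection independently on each partition, count a vote for every selected feature, and keep the features whose votes reach a user-defined threshold. In the sketch below a simple mutual-information filter stands in for the evolutionary wrapper search used in the cited work, and the partition count and threshold are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

x, y = make_classification(n_samples=2000, n_features=30, n_informative=6, random_state=2)

n_partitions, per_partition_k, vote_threshold = 5, 8, 3
votes = np.zeros(x.shape[1], dtype=int)

# Split by instances (rows); each partition runs feature selection independently
# (a map step), here a simple MI filter standing in for the evolutionary search.
for rows in np.array_split(np.random.default_rng(2).permutation(len(x)), n_partitions):
    mi = mutual_info_classif(x[rows], y[rows], random_state=2)
    selected = np.argsort(mi)[::-1][:per_partition_k]
    votes[selected] += 1                          # reduce step: majority voting

final = np.flatnonzero(votes >= vote_threshold)   # user-defined threshold on the votes
print("votes:", votes)
print("final feature subset:", final)
```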
{ "cite_N": [ "@cite_23" ], "mid": [ "1952835952" ], "abstract": [ "Nowadays, many disciplines have to deal with big datasets that additionally involve a high number of features. Feature selection methods aim at eliminating noisy, redundant, or irrelevant features that may deteriorate the classification performance. However, traditional methods lack enough scalability to cope with datasets of millions of instances and extract successful results in a delimited time. This paper presents a feature selection algorithm based on evolutionary computation that uses the MapReduce paradigm to obtain subsets of features from big datasets. The algorithm decomposes the original dataset in blocks of instances to learn from them in the map phase; then, the reduce phase merges the obtained partial results into a final vector of feature weights, which allows a flexible application of the feature selection procedure using a threshold to determine the selected subset of features. The feature selection method is evaluated by using three well-known classifiers (SVM, Logistic Regression, and Naive Bayes) implemented within the Spark framework to address big data problems. In the experiments, datasets up to 67 millions of instances and up to 2000 attributes have been managed, showing that this is a suitable framework to perform evolutionary feature selection, improving both the classification accuracy and its runtime when dealing with big data problems." ] }
1901.11286
2899274556
Abstract Feature selection (FS) is a key preprocessing step in data mining. CFS (Correlation-Based Feature Selection) is an FS algorithm that has been successfully applied to classification problems in many domains. We describe Distributed CFS (DiCFS) as a completely redesigned, scalable, parallel and distributed version of the CFS algorithm, capable of dealing with the large volumes of data typical of big data applications. Two versions of the algorithm were implemented and compared using the Apache Spark cluster computing model, currently gaining popularity due to its much faster processing times than Hadoop’s MapReduce model. We tested our algorithms on four publicly available datasets, each consisting of a large number of instances and two also consisting of a large number of features. The results show that our algorithms were superior in terms of both time-efficiency and scalability. In leveraging a computer cluster, they were able to handle larger datasets than the non-distributed WEKA version while maintaining the quality of the results, i.e., exactly the same features were returned by our algorithms when compared to the original algorithm available in WEKA.
Bolón- @cite_39 proposed a framework to deal with high-dimensionality data by first optionally ranking features using an FS filter, and then partitioning vertically, i.e., dividing the data according to features (columns) rather than, as commonly done, according to instances (rows). After partitioning, another FS filter is applied to each partition, and finally a merging procedure guided by a classifier obtains a single set of features. The authors experiment with five commonly used FS filters for the partitions, namely CFS @cite_33 , Consistency @cite_0 , INTERACT @cite_30 , Information Gain @cite_6 and ReliefF @cite_13 , and with four classifiers for the final merging, namely C4.5 @cite_3 , Naive Bayes @cite_40 , @math -Nearest Neighbors @cite_22 and SVM @cite_35 , and show that their approach significantly reduces execution times while maintaining and, in some cases, even improving accuracy.
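A compact sketch of the vertical-partitioning idea follows: split the columns into groups, run a filter inside each group, take the union of the per-group selections, and let a classifier guide the final merge. The mutual-information ranking, the greedy merge rule and the use of Naive Bayes below are stand-ins chosen for brevity, not the exact procedure of the cited framework.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

x, y = make_classification(n_samples=600, n_features=40, n_informative=6, random_state=3)

# Vertical partitioning: split the *columns* into groups of features.
groups = np.array_split(np.arange(x.shape[1]), 4)

# Apply a filter inside each group and keep its best-ranked features (map step).
candidates = []
for cols in groups:
    mi = mutual_info_classif(x[:, cols], y, random_state=3)
    candidates.extend(cols[np.argsort(mi)[::-1][:5]])
candidates = sorted(set(candidates))

# Merge guided by a classifier: greedily keep candidates while CV accuracy improves.
kept, best = [], 0.0
for f in candidates:
    acc = cross_val_score(GaussianNB(), x[:, kept + [f]], y, cv=3).mean()
    if acc > best:
        kept, best = kept + [f], acc

print("final features:", kept, "cv accuracy:", round(best, 3))
```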
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_33", "@cite_22", "@cite_6", "@cite_39", "@cite_0", "@cite_3", "@cite_40", "@cite_13" ], "mid": [ "167115076", "2156909104", "", "2147169507", "2149706766", "", "2169038408", "2499480955", "179179905", "1808644423" ], "abstract": [ "Feature interaction presents a challenge to feature selection for classification. A feature by itself may have little correlation with the target concept, but when it is combined with some other features, they can be strongly correlated with the target concept. Unintentional removal of these features can result in poor classification performance. Handling feature interaction can be computationally intractable. Recognizing the presence of feature interaction, we propose to efficiently handle feature interaction to achieve efficient feature selection and present extensive experimental results of evaluation.", "Setting of the learning problem consistency of learning processes bounds on the rate of convergence of learning processes controlling the generalization ability of learning processes constructing learning algorithms what is important in learning theory?.", "", "Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.", "The technology for building knowledge-based systems by inductive inference from examples has been demonstrated successfully in several practical applications. This paper summarizes an approach to synthesizing decision trees that has been used in a variety of systems, and it describes one such system, ID3, in detail. Results from recent studies show ways in which the methodology can be modified to deal with information that is noisy and or incomplete. A reported shortcoming of the basic algorithm is discussed and two means of overcoming it are compared. The paper concludes with illustrations of current research directions.", "", "Feature selection is an effective technique in dealing with dimensionality reduction. For classification, it is used to find an \"optimal\" subset of relevant features such that the overall accuracy of classification is increased while the data size is reduced and the comprehensibility is improved. Feature selection methods contain two important aspects: evaluation of a candidate feature subset and search through the feature space. 
Existing algorithms adopt various measures to evaluate the goodness of feature subsets. This work focuses on inconsistency measure according to which a feature subset is inconsistent if there exist at least two instances with same feature values but with different class labels. We compare inconsistency measure with other measures and study different search strategies such as exhaustive, complete, heuristic and random search, that can be applied to this measure. We conduct an empirical study to examine the pros and cons of these search methods, give some guidelines on choosing a search method, and compare the classifier error rates before and after feature selection.", "", "The naive Bayes classifier greatly simplify learning by assuming that features are independent given class. Although independence is generally a poor assumption, in practice naive Bayes often competes well with more sophisticated classifiers. Our broad goal is to understand the data characteristics which affect the performance of naive Bayes. Our approach uses Monte Carlo simulations that allow a systematic study of classification accuracy for several classes of randomly generated problems. We analyze the impact of the distribution entropy on the classification error, showing that low-entropy feature distributions yield good performance of naive Bayes. We also demonstrate that naive Bayes works well for certain nearlyfunctional feature dependencies, thus reaching its best performance in two opposite cases: completely independent features (as expected) and functionally dependent features (which is surprising). Another surprising result is that the accuracy of naive Bayes is not directly correlated with the degree of feature dependencies measured as the classconditional mutual information between the features. Instead, a better predictor of naive Bayes accuracy is the amount of information about the class that is lost because of the independence assumption.", "In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very efficient in estimating attributes. Original RELIEF can deal with discrete and continuous attributes and is limited to only two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are verified on various artificial and one well known real-world problem." ] }
1901.11286
2899274556
Abstract Feature selection (FS) is a key preprocessing step in data mining. CFS (Correlation-Based Feature Selection) is an FS algorithm that has been successfully applied to classification problems in many domains. We describe Distributed CFS (DiCFS) as a completely redesigned, scalable, parallel and distributed version of the CFS algorithm, capable of dealing with the large volumes of data typical of big data applications. Two versions of the algorithm were implemented and compared using the Apache Spark cluster computing model, currently gaining popularity due to its much faster processing times than Hadoop’s MapReduce model. We tested our algorithms on four publicly available datasets, each consisting of a large number of instances and two also consisting of a large number of features. The results show that our algorithms were superior in terms of both time-efficiency and scalability. In leveraging a computer cluster, they were able to handle larger datasets than the non-distributed WEKA version while maintaining the quality of the results, i.e., exactly the same features were returned by our algorithms when compared to the original algorithm available in WEKA.
@cite_24 described a distributed parallel FS method based on a variance preservation criterion using the proprietary software SAS High-Performance Analytics ( http://www.sas.com/en_us/software/high-performance-analytics.html ). One remarkable characteristic of the method is its support not only for supervised FS, but also for unsupervised FS where no label information is available. Their experiments were carried out with datasets with both high dimensionality and a high number of instances.
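The cited method itself is proprietary and considerably more elaborate, but the underlying intuition of the variance preservation criterion can be illustrated with a toy ranking: score each feature by the share of the data's total variance it carries and keep the smallest set that preserves a chosen fraction of it, which requires no labels (the unsupervised case). The 90% threshold and the synthetic data below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
# Features with very different variances.
x = rng.normal(size=(1000, 10)) * rng.uniform(0.1, 3.0, size=10)

# Unsupervised variance-based ranking: score each feature by the share of total
# variance it carries, and keep the smallest set that preserves 90% of it.
var = x.var(axis=0)
order = np.argsort(var)[::-1]
cum_share = np.cumsum(var[order]) / var.sum()
selected = order[: np.searchsorted(cum_share, 0.90) + 1]

print("variance share per feature:", np.round(var / var.sum(), 3))
print("features preserving 90% of the variance:", selected)
```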
{ "cite_N": [ "@cite_24" ], "mid": [ "1994252348" ], "abstract": [ "Advances in computer technologies have enabled corporations to accumulate data at an unprecedented speed. Large-scale business data might contain billions of observations and thousands of features, which easily brings their scale to the level of terabytes. Most traditional feature selection algorithms are designed and implemented for a centralized computing architecture. Their usability significantly deteriorates when data size exceeds tens of gigabytes. High-performance distributed computing frameworks and protocols, such as the Message Passing Interface (MPI) and MapReduce, have been proposed to facilitate software development on grid infrastructures, enabling analysts to process large-scale problems efficiently. This paper presents a novel large-scale feature selection algorithm that is based on variance analysis. The algorithm selects features by evaluating their abilities to explain data variance. It supports both supervised and unsupervised feature selection and can be readily implemented in most distributed computing environments. The algorithm was implemented as a SAS High-Performance Analytics procedure, which can read data in distributed form and perform parallel feature selection in both symmetric multiprocessing mode (SMP) and massively parallel processing mode (MPP). Experimental results demonstrated the superior performance of the proposed method for large scale feature selection." ] }
1901.11286
2899274556
Abstract Feature selection (FS) is a key preprocessing step in data mining. CFS (Correlation-Based Feature Selection) is an FS algorithm that has been successfully applied to classification problems in many domains. We describe Distributed CFS (DiCFS) as a completely redesigned, scalable, parallel and distributed version of the CFS algorithm, capable of dealing with the large volumes of data typical of big data applications. Two versions of the algorithm were implemented and compared using the Apache Spark cluster computing model, currently gaining popularity due to its much faster processing times than Hadoop’s MapReduce model. We tested our algorithms on four publicly available datasets, each consisting of a large number of instances and two also consisting of a large number of features. The results show that our algorithms were superior in terms of both time-efficiency and scalability. In leveraging a computer cluster, they were able to handle larger datasets than the non-distributed WEKA version while maintaining the quality of the results, i.e., exactly the same features were returned by our algorithms when compared to the original algorithm available in WEKA.
Ramírez- @cite_28 described scalable versions of the popular mRMR @cite_9 FS filter, including a distributed version built on Spark. The authors showed that their version, which leveraged the power of a cluster of computers, ran much faster than the original and could process much larger datasets.
{ "cite_N": [ "@cite_28", "@cite_9" ], "mid": [ "2475596014", "2154053567" ], "abstract": [ "With the advent of large-scale problems, feature selection has become a fundamental preprocessing step to reduce input dimensionality. The minimum-redundancy-maximum-relevance (mRMR) selector is considered one of the most relevant methods for dimensionality reduction due to its high accuracy. However, it is a computationally expensive technique, sharply affected by the number of features. This paper presents fast-mRMR, an extension of mRMR, which tries to overcome this computational burden. Associated with fast-mRMR, we include a package with three implementations of this algorithm in several platforms, namely, CPU for sequential execution, GPU (graphics processing units) for parallel computing, and Apache Spark for distributed computing using big data technologies.", "Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first derive an equivalent form, called minimal-redundancy-maximal-relevance criterion (mRMR), for first-order incremental feature selection. Then, we present a two-stage feature selection algorithm by combining mRMR and other more sophisticated feature selectors (e.g., wrappers). This allows us to select a compact set of superior features at very low cost. We perform extensive experimental comparison of our algorithm and other methods using three different classifiers (naive Bayes, support vector machine, and linear discriminate analysis) and four different data sets (handwritten digits, arrhythmia, NCI cancer cell lines, and lymphoma tissues). The results confirm that mRMR leads to promising improvement on feature selection and classification accuracy." ] }
1901.11286
2899274556
Abstract Feature selection (FS) is a key preprocessing step in data mining. CFS (Correlation-Based Feature Selection) is an FS algorithm that has been successfully applied to classification problems in many domains. We describe Distributed CFS (DiCFS) as a completely redesigned, scalable, parallel and distributed version of the CFS algorithm, capable of dealing with the large volumes of data typical of big data applications. Two versions of the algorithm were implemented and compared using the Apache Spark cluster computing model, currently gaining popularity due to its much faster processing times than Hadoop’s MapReduce model. We tested our algorithms on four publicly available datasets, each consisting of a large number of instances and two also consisting of a large number of features. The results show that our algorithms were superior in terms of both time-efficiency and scalability. In leveraging a computer cluster, they were able to handle larger datasets than the non-distributed WEKA version while maintaining the quality of the results, i.e., exactly the same features were returned by our algorithms when compared to the original algorithm available in WEKA.
In a previous work @cite_36 , using the Spark computing model, we designed a distributed version of the ReliefF @cite_13 filter, called DiReliefF. In tests on datasets with large numbers of features and instances, it was much more efficient and scalable than the original filter.
{ "cite_N": [ "@cite_36", "@cite_13" ], "mid": [ "2785172785", "1808644423" ], "abstract": [ "Feature selection (FS) is a key research area in the machine learning and data mining fields; removing irrelevant and redundant features usually helps to reduce the effort required to process a dataset while maintaining or even improving the processing algorithm’s accuracy. However, traditional algorithms designed for executing on a single machine lack scalability to deal with the increasing amount of data that have become available in the current Big Data era. ReliefF is one of the most important algorithms successfully implemented in many FS applications. In this paper, we present a completely redesigned distributed version of the popular ReliefF algorithm based on the novel Spark cluster computing model that we have called DiReliefF. The effectiveness of our proposal is tested on four publicly available datasets, all of them with a large number of instances and two of them with also a large number of features. Subsets of these datasets were also used to compare the results to a non-distributed implementation of the algorithm. The results show that the non-distributed implementation is unable to handle such large volumes of data without specialized hardware, while our design can process them in a scalable way with much better processing times and memory usage.", "In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very efficient in estimating attributes. Original RELIEF can deal with discrete and continuous attributes and is limited to only two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are verified on various artificial and one well known real-world problem." ] }
1901.11286
2899274556
Abstract Feature selection (FS) is a key preprocessing step in data mining. CFS (Correlation-Based Feature Selection) is an FS algorithm that has been successfully applied to classification problems in many domains. We describe Distributed CFS (DiCFS) as a completely redesigned, scalable, parallel and distributed version of the CFS algorithm, capable of dealing with the large volumes of data typical of big data applications. Two versions of the algorithm were implemented and compared using the Apache Spark cluster computing model, currently gaining popularity due to its much faster processing times than Hadoop’s MapReduce model. We tested our algorithms on four publicly available datasets, each consisting of a large number of instances and two also consisting of a large number of features. The results show that our algorithms were superior in terms of both time-efficiency and scalability. In leveraging a computer cluster, they were able to handle larger datasets than the non-distributed WEKA version while maintaining the quality of the results, i.e., exactly the same features were returned by our algorithms when compared to the original algorithm available in WEKA.
Finally, Eiras- @cite_32 used four distributed FS algorithms, three of them filters, namely InfoGain @cite_6 , ReliefF @cite_13 and CFS @cite_33 , to reduce execution times with respect to the original versions. However, in the CFS case, their version focuses on regression problems where all the features, including the class label, are numerical, with correlations calculated using the Pearson coefficient. A completely different approach is required to design a parallel version for classification problems, where correlations are based on information theory.
{ "cite_N": [ "@cite_33", "@cite_13", "@cite_32", "@cite_6" ], "mid": [ "", "1808644423", "2484311569", "2149706766" ], "abstract": [ "", "In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very efficient in estimating attributes. Original RELIEF can deal with discrete and continuous attributes and is limited to only two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are verified on various artificial and one well known real-world problem.", "Abstract Vast amounts of data are generated every day, constituting a volume that is challenging to analyze. Techniques such as feature selection are advisable when tackling large datasets. Among the tools that provide this functionality, Weka is one of the most popular ones, although the implementations it provides struggle when processing large datasets, requiring excessive times to be practical. Parallel processing can help alleviate this problem, effectively allowing users to work with Big Data. The computational power of multicore machines can be harnessed by using multithreading and distributed programming, effectively helping to tackle larger problems. Both these techniques can dramatically speed up the feature selection process allowing users to work with larger datasets. The reimplementation of four popular feature selection algorithms included in Weka is the focus of this work. Multithreaded implementations previously not included in Weka as well as parallel Spark implementations were developed for each algorithm. Experimental results obtained from tests on real-world datasets show that the new versions offer significant reductions in processing times.", "The technology for building knowledge-based systems by inductive inference from examples has been demonstrated successfully in several practical applications. This paper summarizes an approach to synthesizing decision trees that has been used in a variety of systems, and it describes one such system, ID3, in detail. Results from recent studies show ways in which the methodology can be modified to deal with information that is noisy and or incomplete. A reported shortcoming of the basic algorithm is discussed and two means of overcoming it are compared. The paper concludes with illustrations of current research directions." ] }
1901.11286
2899274556
Abstract Feature selection (FS) is a key preprocessing step in data mining. CFS (Correlation-Based Feature Selection) is an FS algorithm that has been successfully applied to classification problems in many domains. We describe Distributed CFS (DiCFS) as a completely redesigned, scalable, parallel and distributed version of the CFS algorithm, capable of dealing with the large volumes of data typical of big data applications. Two versions of the algorithm were implemented and compared using the Apache Spark cluster computing model, currently gaining popularity due to its much faster processing times than Hadoop’s MapReduce model. We tested our algorithms on four publicly available datasets, each consisting of a large number of instances and two also consisting of a large number of features. The results show that our algorithms were superior in terms of both time-efficiency and scalability. In leveraging a computer cluster, they were able to handle larger datasets than the non-distributed WEKA version while maintaining the quality of the results, i.e., exactly the same features were returned by our algorithms when compared to the original algorithm available in WEKA.
The approach described here can be categorized as a approach that builds on works described elsewhere @cite_28 , @cite_36 , @cite_32 . The fact that their focus was not only on designing an efficient and scalable FS algorithm, but also on preserving the original behaviour (and obtaining the same final results) of traditional filters, means that research focused on those filters is also valid for adapted versions. Another important issue in relation to filters is that, since they are generally more efficient than wrappers, they are often the only feasible option due to the abundance of data. It is worth mentioning that scalable filters could feasibly be included in any of the methods mentioned in the and categories, where an initial filtering step is implemented to improve performance.
{ "cite_N": [ "@cite_28", "@cite_32", "@cite_36" ], "mid": [ "2475596014", "2484311569", "2785172785" ], "abstract": [ "With the advent of large-scale problems, feature selection has become a fundamental preprocessing step to reduce input dimensionality. The minimum-redundancy-maximum-relevance (mRMR) selector is considered one of the most relevant methods for dimensionality reduction due to its high accuracy. However, it is a computationally expensive technique, sharply affected by the number of features. This paper presents fast-mRMR, an extension of mRMR, which tries to overcome this computational burden. Associated with fast-mRMR, we include a package with three implementations of this algorithm in several platforms, namely, CPU for sequential execution, GPU (graphics processing units) for parallel computing, and Apache Spark for distributed computing using big data technologies.", "Abstract Vast amounts of data are generated every day, constituting a volume that is challenging to analyze. Techniques such as feature selection are advisable when tackling large datasets. Among the tools that provide this functionality, Weka is one of the most popular ones, although the implementations it provides struggle when processing large datasets, requiring excessive times to be practical. Parallel processing can help alleviate this problem, effectively allowing users to work with Big Data. The computational power of multicore machines can be harnessed by using multithreading and distributed programming, effectively helping to tackle larger problems. Both these techniques can dramatically speed up the feature selection process allowing users to work with larger datasets. The reimplementation of four popular feature selection algorithms included in Weka is the focus of this work. Multithreaded implementations previously not included in Weka as well as parallel Spark implementations were developed for each algorithm. Experimental results obtained from tests on real-world datasets show that the new versions offer significant reductions in processing times.", "Feature selection (FS) is a key research area in the machine learning and data mining fields; removing irrelevant and redundant features usually helps to reduce the effort required to process a dataset while maintaining or even improving the processing algorithm’s accuracy. However, traditional algorithms designed for executing on a single machine lack scalability to deal with the increasing amount of data that have become available in the current Big Data era. ReliefF is one of the most important algorithms successfully implemented in many FS applications. In this paper, we present a completely redesigned distributed version of the popular ReliefF algorithm based on the novel Spark cluster computing model that we have called DiReliefF. The effectiveness of our proposal is tested on four publicly available datasets, all of them with a large number of instances and two of them with also a large number of features. Subsets of these datasets were also used to compare the results to a non-distributed implementation of the algorithm. The results show that the non-distributed implementation is unable to handle such large volumes of data without specialized hardware, while our design can process them in a scalable way with much better processing times and memory usage." ] }
1901.11524
2950533301
We establish geometric and topological properties of the space of value functions in finite state-action Markov decision processes. Our main contribution is the characterization of the nature of its shape: a general polytope (, 2010). To demonstrate this result, we exhibit several properties of the structural relationship between policies and value functions including the line theorem, which shows that the value functions of policies constrained on all but one state describe a line segment. Finally, we use this novel perspective to introduce visualizations to enhance the understanding of the dynamics of reinforcement learning algorithms.
The dual formulation consists of maximizing the expected return for a given initial state distribution, as a function of the discounted state action visit frequency distribution. Contrary to the primal form, any feasible discounted state action visit frequency distribution maps to an policy @cite_9 .
{ "cite_N": [ "@cite_9" ], "mid": [ "2149418961" ], "abstract": [ "We investigate the dual approach to dynamic programming and reinforcement learning, based on maintaining an explicit representation of stationary distributions as opposed to value functions. A significant advantage of the dual approach is that it allows one to exploit well developed techniques for representing, approximating and estimating probability distributions, without running the risks associated with divergent value function estimation. A second advantage is that some distinct algorithms for the average reward and discounted reward case in the primal become unified under the dual. In this paper, we present a modified dual of the standard linear program that guarantees a globally normalized state visit distribution is obtained. With this reformulation, we then derive novel dual forms of dynamic programming, including policy evaluation, policy iteration and value iteration. Moreover, we derive dual formulations of temporal difference learning to obtain new forms of Sarsa and Q-learning. Finally, we scale these techniques up to large domains by introducing approximation, and develop new approximate off-policy learning algorithms that avoid the divergence problems associated with the primal approach. We show that the dual view yields a viable alternative to standard value function based techniques and opens new avenues for solving dynamic programming and reinforcement learning problems" ] }
1901.11308
2914465018
Link Prediction is an important and well-studied problem for social networks. Given a snapshot of a graph, the link prediction problem predicts which new interactions between members are most likely to occur in the near future. As networks grow in size, data owners are forced to store the data in remote cloud servers which reveals sensitive information about the network. The graphs are therefore stored in encrypted form. We study the link prediction problem on encrypted graphs. To the best of our knowledge, this secure link prediction problem has not been studied before. We use the number of common neighbors for prediction. We present three algorithms for the secure link prediction problem. We design prototypes of the schemes and formally prove their security. We execute our algorithms in real-life datasets.
Chase and Kamara @cite_19 introduced the notion of graph encryption while presenting structured encryption as a generalization of the searchable symmetric encryption (SSE) proposed by Song @cite_23 . They presented schemes for adjacency queries, neighbor queries and focused subgraph queries on labeled graph-structured data. In all of their proposed schemes, the graph was considered as an adjacency matrix and each entry was encrypted separately using symmetric-key encryption. The main idea of their scheme is that, given a vertex and the corresponding key, the scheme returns the adjacent vertices. However, complex queries require complex operations (such as addition, subtraction and division) on the adjacency matrix, which makes the scheme unsuitable for them.
{ "cite_N": [ "@cite_19", "@cite_23" ], "mid": [ "1539859404", "2147929033" ], "abstract": [ "We consider the problem of encrypting structured data (e.g., a web graph or a social network) in such a way that it can be efficiently and privately queried. For this purpose, we introduce the notion of structured encryption which generalizes previous work on symmetric searchable encryption (SSE) to the setting of arbitrarily-structured data.", "It is desirable to store data on data storage servers such as mail servers and file servers in encrypted form to reduce security and privacy risks. But this usually implies that one has to sacrifice functionality for security. For example, if a client wishes to retrieve only documents containing certain words, it was not previously known how to let the data storage server perform the search and answer the query, without loss of data confidentiality. We describe our cryptographic schemes for the problem of searching on encrypted data and provide proofs of security for the resulting crypto systems. Our techniques have a number of crucial advantages. They are provably secure: they provide provable secrecy for encryption, in the sense that the untrusted server cannot learn anything about the plaintext when only given the ciphertext; they provide query isolation for searches, meaning that the untrusted server cannot learn anything more about the plaintext than the search result; they provide controlled searching, so that the untrusted server cannot search for an arbitrary word without the user's authorization; they also support hidden queries, so that the user may ask the untrusted server to search for a secret word without revealing the word to the server. The algorithms presented are simple, fast (for a document of length n, the encryption and search algorithms only need O(n) stream cipher and block cipher operations), and introduce almost no space and communication overhead, and hence are practical to use today." ] }
1901.11308
2914465018
Link Prediction is an important and well-studied problem for social networks. Given a snapshot of a graph, the link prediction problem predicts which new interactions between members are most likely to occur in the near future. As networks grow in size, data owners are forced to store the data in remote cloud servers which reveals sensitive information about the network. The graphs are therefore stored in encrypted form. We study the link prediction problem on encrypted graphs. To the best of our knowledge, this secure link prediction problem has not been studied before. We use the number of common neighbors for prediction. We present three algorithms for the secure link prediction problem. We design prototypes of the schemes and formally prove their security. We execute our algorithms in real-life datasets.
A parallel secure computation framework has been designed and implemented by Nayak @cite_12 . This framework computes functions such as histograms, PageRank and matrix factorization. To run these algorithms, the authors introduced parallel programming paradigms to secure computation. The parallel and secure execution enables the algorithms to scale even to large datasets. However, they adopt Path-ORAM @cite_21 based techniques, which are inefficient if the client has little computational power or does not have a very large RAM.
{ "cite_N": [ "@cite_21", "@cite_12" ], "mid": [ "2796933658", "1608459536" ], "abstract": [ "We present Path ORAM, an extremely simple Oblivious RAM protocol with a small amount of client storage. Partly due to its simplicity, Path ORAM is the most practical ORAM scheme known to date with small client storage. We formally prove that Path ORAM has a O(log N) bandwidth cost for blocks of size B e Ω (log2 N) bits. For such block sizes, Path ORAM is asymptotically better than the best-known ORAM schemes with small client storage. Due to its practicality, Path ORAM has been adopted in the design of secure processors since its proposal.", "We propose introducing modern parallel programming paradigms to secure computation, enabling their secure execution on large datasets. To address this challenge, we present Graph SC, a framework that (i) provides a programming paradigm that allows non-cryptography experts to write secure code, (ii) brings parallelism to such secure implementations, and (iii) meets the need for obliviousness, thereby not leaking any private information. Using Graph SC, developers can efficiently implement an oblivious version of graph-based algorithms (including sophisticated data mining and machine learning algorithms) that execute in parallel with minimal communication overhead. Importantly, our secure version of graph-based algorithms incurs a small logarithmic overhead in comparison with the non-secure parallel version. We build Graph SC and demonstrate, using several algorithms as examples, that secure computation can be brought into the realm of practicality for big data analysis. Our secure matrix factorization implementation can process 1 million ratings in 13 hours, which is a multiple order-of-magnitude improvement over the only other existing attempt, which requires 3 hours to process 16K ratings." ] }
1901.11308
2914465018
Link Prediction is an important and well-studied problem for social networks. Given a snapshot of a graph, the link prediction problem predicts which new interactions between members are most likely to occur in the near future. As networks grow in size, data owners are forced to store the data in remote cloud servers which reveals sensitive information about the network. The graphs are therefore stored in encrypted form. We study the link prediction problem on encrypted graphs. To the best of our knowledge, this secure link prediction problem has not been studied before. We use the number of common neighbors for prediction. We present three algorithms for the secure link prediction problem. We design prototypes of the schemes and formally prove their security. We execute our algorithms in real-life datasets.
Sketch-based approximate shortest distance queries over encrypted graphs have been studied by Meng @cite_16 . In the pre-processing stage, the client computes, for every vertex, a sketch that enables efficient shortest distance queries. Instead of encrypting the graph directly, they encrypted the pre-processed data. Thus, in their scheme, there is no chance of obtaining information about the original graph.
{ "cite_N": [ "@cite_16" ], "mid": [ "2063575624" ], "abstract": [ "We propose graph encryption schemes that efficiently support approximate shortest distance queries on large-scale encrypted graphs. Shortest distance queries are one of the most fundamental graph operations and have a wide range of applications. Using such graph encryption schemes, a client can outsource large-scale privacy-sensitive graphs to an untrusted server without losing the ability to query it. Other applications include encrypted graph databases and controlled disclosure systems. We propose GRECS (stands for GRaph EnCryption for approximate Shortest distance queries) which includes three oracle encryption schemes that are provably secure against any semi-honest server. Our first construction makes use of only symmetric-key operations, resulting in a computationally-efficient construction. Our second scheme makes use of somewhat-homomorphic encryption and is less computationally-efficient but achieves optimal communication complexity (i.e. uses a minimal amount of bandwidth). Finally, our third scheme is both computationally-efficient and achieves optimal communication complexity at the cost of a small amount of additional leakage. We implemented and evaluated the efficiency of our constructions experimentally. The experiments demonstrate that our schemes are efficient and can be applied to graphs that scale up to 1.6 million nodes and 11 million edges." ] }
1901.11308
2914465018
Link Prediction is an important and well-studied problem for social networks. Given a snapshot of a graph, the link prediction problem predicts which new interactions between members are most likely to occur in the near future. As networks grow in size, data owners are forced to store the data in remote cloud servers which reveals sensitive information about the network. The graphs are therefore stored in encrypted form. We study the link prediction problem on encrypted graphs. To the best of our knowledge, this secure link prediction problem has not been studied before. We use the number of common neighbors for prediction. We present three algorithms for the secure link prediction problem. We design prototypes of the schemes and formally prove their security. We execute our algorithms in real-life datasets.
Shen @cite_1 introduced and studied cloud-based approximate constrained shortest distance (CSD) querying in encrypted graphs, which finds the shortest distance under the constraint that the total cost does not exceed a given threshold.
{ "cite_N": [ "@cite_1" ], "mid": [ "2770638201" ], "abstract": [ "Constrained shortest distance (CSD) querying is one of the fundamental graph query primitives, which finds the shortest distance from an origin to a destination in a graph with a constraint that the total cost does not exceed a given threshold. CSD querying has a wide range of applications, such as routing in telecommunications and transportation. With an increasing prevalence of cloud computing paradigm, graph owners desire to outsource their graphs to cloud servers. In order to protect sensitive information, these graphs are usually encrypted before being outsourced to the cloud. This, however, imposes a great challenge to CSD querying over encrypted graphs. Since performing constraint filtering is an intractable task, existing work mainly focuses on unconstrained shortest distance queries. CSD querying over encrypted graphs remains an open research problem. In this paper, we propose Connor , a novel graph encryption scheme that enables approximate CSD querying. Connor is built based on an efficient, tree-based ciphertext comparison protocol, and makes use of symmetric-key primitives and the somewhat homomorphic encryption, making it computationally efficient. Using Connor , a graph owner can first encrypt privacy-sensitive graphs and then outsource them to the cloud server, achieving the necessary privacy without losing the ability of querying. Extensive experiments with real-world data sets demonstrate the effectiveness and efficiency of the proposed graph encryption scheme." ] }
1901.11308
2914465018
Link Prediction is an important and well-studied problem for social networks. Given a snapshot of a graph, the link prediction problem predicts which new interactions between members are most likely to occur in the near future. As networks grow in size, data owners are forced to store the data in remote cloud servers which reveals sensitive information about the network. The graphs are therefore stored in encrypted form. We study the link prediction problem on encrypted graphs. To the best of our knowledge, this secure link prediction problem has not been studied before. We use the number of common neighbors for prediction. We present three algorithms for the secure link prediction problem. We design prototypes of the schemes and formally prove their security. We execute our algorithms in real-life datasets.
Exact distances have been computed on dynamic encrypted graphs in @cite_9 . Similar to our paper, this work uses a proxy to reduce client-side computation and information leakage to the cloud. In the scheme, adjacency lists are stored in an inverted index. However, in a single query, the scheme leaks all the nodes reachable from the queried vertex, which is a lot of information about the graph. For example, if the graph is complete, a single query reveals the whole graph.
{ "cite_N": [ "@cite_9" ], "mid": [ "2781469847" ], "abstract": [ "In the era of big data, graph databases have become increasingly important for NoSQL technologies, and many systems can be modeled as graphs for semantic queries. Meanwhile, with the advent of cloud computing, data owners are highly motivated to outsource and store their massive potentially-sensitive graph data on remote untrusted servers in an encrypted form, expecting to retain the ability to query over the encrypted graphs." ] }
1901.11308
2914465018
Link Prediction is an important and well-studied problem for social networks. Given a snapshot of a graph, the link prediction problem predicts which new interactions between members are most likely to occur in the near future. As networks grow in size, data owners are forced to store the data in remote cloud servers which reveals sensitive information about the network. The graphs are therefore stored in encrypted form. We study the link prediction problem on encrypted graphs. To the best of our knowledge, this secure link prediction problem has not been studied before. We use the number of common neighbors for prediction. We present three algorithms for the secure link prediction problem. We design prototypes of the schemes and formally prove their security. We execute our algorithms in real-life datasets.
A graph encryption scheme that supports top- @math nearest keyword search queries has been proposed by Liu @cite_13 . They built an encrypted index using order-preserving encryption for searching. Together with lightweight symmetric-key encryption schemes, homomorphic encryption is used to compute on the encrypted data.
{ "cite_N": [ "@cite_13" ], "mid": [ "2614426900" ], "abstract": [ "Driven by the growing security demands of data outsourcing applications in sustainable smart cities, encrypting clients’ data has been widely accepted by academia and industry. Data encryptions should be done at the client side before outsourcing, because clouds and edges are not trusted. Therefore, how to properly encrypt data in a way that the encrypted and remotely stored data can still be queried has become a challenging issue. Though keyword searches over encrypted textual data have been extensively studied, approaches for encrypting graph-structured data with support for answering graph queries are still lacking in the literature. In this paper, we specially investigate graph encryption method for an important graph query type, called top-k Nearest Keyword (kNK) searches. We design several indexes to store necessary information for answering queries and guarantee that private information about the graph such as vertex identifiers, keywords and edges are encrypted or excluded. Security and efficiency of our graph encryption scheme are demonstrated by theoretical proofs and experiments on real-world datasets, respectively." ] }
1901.11308
2914465018
Link Prediction is an important and well-studied problem for social networks. Given a snapshot of a graph, the link prediction problem predicts which new interactions between members are most likely to occur in the near future. As networks grow in size, data owners are forced to store the data in remote cloud servers which reveals sensitive information about the network. The graphs are therefore stored in encrypted form. We study the link prediction problem on encrypted graphs. To the best of our knowledge, this secure link prediction problem has not been studied before. We use the number of common neighbors for prediction. We present three algorithms for the secure link prediction problem. We design prototypes of the schemes and formally prove their security. We execute our algorithms in real-life datasets.
Besides, Zheng @cite_11 proposed privacy-preserving link prediction in decentralized social networks. Their construction splits the link score into private and public parts and applies sparse logistic regression to find links based on the content of the users. However, the graph data is not encrypted in these privacy-preserving link prediction schemes.
{ "cite_N": [ "@cite_11" ], "mid": [ "2293003800" ], "abstract": [ "We consider the privacy-preserving link prediction problem in decentralized online social network (OSNs). We formulate the problem as a sparse logistic regression problem and solve it with a novel decentralized two-tier method using alternating direction method of multipliers (ADMM). This method enables end users to collaborate with their online service providers without jeopardizing their data privacy. The method also grants end users fine-grained privacy control to their personal data by supporting arbitrary public private data split. Using real-world data, we show that our method enjoys various advantages including high prediction accuracy, balanced workload, and limited communication overhead. Additionally, we demonstrate that our method copes well with link reconstruction attack." ] }
1901.11308
2914465018
Link Prediction is an important and well-studied problem for social networks. Given a snapshot of a graph, the link prediction problem predicts which new interactions between members are most likely to occur in the near future. As networks grow in size, data owners are forced to store the data in remote cloud servers which reveals sensitive information about the network. The graphs are therefore stored in encrypted form. We study the link prediction problem on encrypted graphs. To the best of our knowledge, this secure link prediction problem has not been studied before. We use the number of common neighbors for prediction. We present three algorithms for the secure link prediction problem. We design prototypes of the schemes and formally prove their security. We execute our algorithms in real-life datasets.
In this paper, we outsource the graph in encrypted form. Most of the previous schemes are designed to perform a single specific query, such as neighbor queries ( @cite_19 ), shortest distance queries ( @cite_16 @cite_1 @cite_9 ) or focused subgraph queries ( @cite_19 ). So, either it is hard to get information about the source graph ( @cite_16 , @cite_1 ), as the schemes do not support basic queries, or a single query leaks a lot of information ( @cite_9 ). One trivial approach is to take different schemes and use all of them to support all types of required queries. In this paper, our target is to obtain as much information about the graph as possible whenever required, while supporting the link prediction query and leaking as little information as possible. To the best of our knowledge, the secure link prediction problem has not been studied before. We study the issues of the link prediction problem on encrypted outsourced data and give three possible solutions that overcome them.
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_16", "@cite_1" ], "mid": [ "1539859404", "2781469847", "2063575624", "2770638201" ], "abstract": [ "We consider the problem of encrypting structured data (e.g., a web graph or a social network) in such a way that it can be efficiently and privately queried. For this purpose, we introduce the notion of structured encryption which generalizes previous work on symmetric searchable encryption (SSE) to the setting of arbitrarily-structured data.", "In the era of big data, graph databases have become increasingly important for NoSQL technologies, and many systems can be modeled as graphs for semantic queries. Meanwhile, with the advent of cloud computing, data owners are highly motivated to outsource and store their massive potentially-sensitive graph data on remote untrusted servers in an encrypted form, expecting to retain the ability to query over the encrypted graphs.", "We propose graph encryption schemes that efficiently support approximate shortest distance queries on large-scale encrypted graphs. Shortest distance queries are one of the most fundamental graph operations and have a wide range of applications. Using such graph encryption schemes, a client can outsource large-scale privacy-sensitive graphs to an untrusted server without losing the ability to query it. Other applications include encrypted graph databases and controlled disclosure systems. We propose GRECS (stands for GRaph EnCryption for approximate Shortest distance queries) which includes three oracle encryption schemes that are provably secure against any semi-honest server. Our first construction makes use of only symmetric-key operations, resulting in a computationally-efficient construction. Our second scheme makes use of somewhat-homomorphic encryption and is less computationally-efficient but achieves optimal communication complexity (i.e. uses a minimal amount of bandwidth). Finally, our third scheme is both computationally-efficient and achieves optimal communication complexity at the cost of a small amount of additional leakage. We implemented and evaluated the efficiency of our constructions experimentally. The experiments demonstrate that our schemes are efficient and can be applied to graphs that scale up to 1.6 million nodes and 11 million edges.", "Constrained shortest distance (CSD) querying is one of the fundamental graph query primitives, which finds the shortest distance from an origin to a destination in a graph with a constraint that the total cost does not exceed a given threshold. CSD querying has a wide range of applications, such as routing in telecommunications and transportation. With an increasing prevalence of cloud computing paradigm, graph owners desire to outsource their graphs to cloud servers. In order to protect sensitive information, these graphs are usually encrypted before being outsourced to the cloud. This, however, imposes a great challenge to CSD querying over encrypted graphs. Since performing constraint filtering is an intractable task, existing work mainly focuses on unconstrained shortest distance queries. CSD querying over encrypted graphs remains an open research problem. In this paper, we propose Connor , a novel graph encryption scheme that enables approximate CSD querying. Connor is built based on an efficient, tree-based ciphertext comparison protocol, and makes use of symmetric-key primitives and the somewhat homomorphic encryption, making it computationally efficient. 
Using Connor , a graph owner can first encrypt privacy-sensitive graphs and then outsource them to the cloud server, achieving the necessary privacy without losing the ability of querying. Extensive experiments with real-world data sets demonstrate the effectiveness and efficiency of the proposed graph encryption scheme." ] }
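Since the prediction score used throughout this work is the number of common neighbors, the plaintext computation that the secure protocols have to emulate is a simple set intersection per candidate edge. The short Python sketch below (our own illustrative code, with no encryption involved) makes that target computation explicit.

```python
from itertools import combinations

def common_neighbor_scores(adj):
    """Score every non-edge (u, v) by |N(u) ∩ N(v)|.

    adj: dict mapping each vertex to the set of its neighbors.
    Returns ((u, v), score) pairs sorted by decreasing score.
    """
    scores = []
    for u, v in combinations(sorted(adj), 2):
        if v in adj[u]:
            continue  # already linked; link prediction targets non-edges
        scores.append(((u, v), len(adj[u] & adj[v])))
    return sorted(scores, key=lambda kv: -kv[1])

g = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
print(common_neighbor_scores(g))   # (1, 4) shares neighbors {2, 3} -> score 2
```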
1901.11267
2949631225
Lifecycle models for research data are often abstract and simple. This comes at the danger of oversimplifying the complex concepts of research data management. The analysis of 90 different lifecycle models lead to two approaches to assess the quality of these models. While terminological issues make direct comparisons of models hard, an empirical evaluation seems possible.
@cite_8 shows an approach very similar to ours: based on a survey of lifecycle models, an abstract data lifecycle model is derived and a classification scheme is developed. In contrast to @cite_8 , we do not define a lifecycle model but a common scheme shared by all the lifecycle models we found. One of the features by which @cite_8 classifies is the distinction between prescriptive and descriptive models, which comes very close to our proposal to classify according to the purpose a model was designed for. Our method is more focused on evaluation and the resulting classification is therefore more fine-grained in that regard. @cite_8 provides more classifications of features, some of which are irrelevant for evaluation (e.g. the distinction between homogeneous and heterogeneous lifecycles).
{ "cite_N": [ "@cite_8" ], "mid": [ "1639206162" ], "abstract": [ "The Semantic Web, especially in the light of the current focus on its nature as a Web of Data, is a data-centric system, and arguably the largest such system in existence. Data is being created, published, exported, imported, used, transformed and re-used, by different parties and for different purposes. Together, these actions form a lifecycle of data on the Semantic Web. Understanding this lifecycle will help to better understand the nature of data on the SW, to explain paradigm shifts, to compare the functionality of different platforms, to aid the integration of previously disparate implementation efforts or to position various actors on the SW and relate them to each other. However, while conceptualisations of many aspects of the SW exist, no exhaustive data lifecycle has been proposed. This paper proposes a data lifecycle model for the Semantic Web by first looking outward, and performing an extensive survey of lifecycle models in other data-centric domains, such as digital libraries, multimedia, eLearning, knowledge and Web content management or ontology development. For each domain, an extensive list of models is taken from the literature, and then described and analysed in terms of its different phases, actor roles and other characteristics. By contrasting and comparing the existing models, a meta vocabulary of lifecycle models for data-centric systems — the Abstract Data Lifecycle Model, or ADLM — is developed. In particular, a common set of lifecycle phases, lifecycle features and lifecycle roles is established, as well as additional actor features and generic features of data and metadata. This vocabulary now provides a tool to describe each individual model, relate them to each other, determine similarities and overlaps and eventually establish a new such model for the Semantic Web." ] }
1901.11267
2949631225
Lifecycle models for research data are often abstract and simple. This comes at the danger of oversimplifying the complex concepts of research data management. The analysis of 90 different lifecycle models lead to two approaches to assess the quality of these models. While terminological issues make direct comparisons of models hard, an empirical evaluation seems possible.
@cite_10 , @cite_1 and @cite_7 are similar to @cite_8 in their approach of reviewing existing models and deriving their own lifecycle model based on a gap analysis. None of the three publications offers generic and empirical evaluation criteria or a metamodel for the existing models. Their lifecycle models are designed to supersede the existing approaches in a specific context.
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_7", "@cite_8" ], "mid": [ "2558036245", "", "2407478184", "1639206162" ], "abstract": [ "A huge amount of data is constantly being produced in the world. Data coming from the IoT, from scientific simulations, or from any other field of the eScience, are accumulated over historical data sets and set up the seed for future Big Data processing, with the final goal to generate added value and discover knowledge. In such computing processes, data are the main resource, however, organizing and managing data during their entire life cycle becomes a complex research topic. As part of this, Data LifeCycle (DLC) models have been proposed to efficiently organize large and complex data sets, from creation to consumption, in any field, and any scale, for an effective data usage and big data exploitation. 2. Several DLC frameworks can be found in the literature, each one defined for specific environments and scenarios. However, we realized that there is no global and comprehensive DLC model to be easily adapted to different scientific areas. For this reason, in this paper we describe the Comprehensive Scenario Agnostic Data LifeCycle (COSA-DLC) model, a DLC model which: i) is proved to be comprehensive as it addresses the 6Vs challenges (namely Value, Volume, Variety, Velocity, Variability and Veracity, and ii), it can be easily adapted to any particular scenario and, therefore, fit the requirements of a specific scientific field. In this paper we also include two use cases to illustrate the ease of the adaptation in different scenarios. We conclude that the comprehensive scenario agnostic DLC model provides several advantages, such as facilitating global data management, organization and integration, easing the adaptation to any kind of scenario, guaranteeing good data quality levels and, therefore, saving design time and efforts for the scientific and industrial communities.", "", "As science becomes more data-intensive and collaborative, researchers increasingly use larger and more complex data to answer research questions. The capacity of storage infrastructure, the increased sophistication and deployment of sensors, the ubiquitous availability of computer clusters, the development of new analysis techniques, and larger collaborations allow researchers to address grand societ al challenges in a way that is unprecedented. In parallel, research data repositories have been built to host research data in response to the requirements of sponsors that research data be publicly available. Libraries are re-inventing themselves to respond to a growing demand to manage, store, curate and preserve the data produced in the course of publicly funded research. As librarians and data managers are developing the tools and knowledge they need to meet these new expectations, they inevitably encounter conversations around Big Data. This paper explores definitions of Big Data that have coalesced in the last decade around four commonly mentioned characteristics: volume, variety, velocity, and veracity. We highlight the issues associated with each characteristic, particularly their impact on data management and curation. We use the methodological framework of the data life cycle model, assessing two models developed in the context of Big Data projects and find them lacking. We propose a Big Data life cycle model that includes activities focused on Big Data and more closely integrates curation with the research life cycle. 
These activities include planning, acquiring, preparing, analyzing, preserving, and discovering, with describing the data and assuring quality being an integral part of each activity. We discuss the relationship between institutional data curation repositories and new long-term data resources associated with high performance computing centers, and reproducibility in computational science. We apply this model by mapping the four characteristics of Big Data outlined above to each of the activities in the model. This mapping produces a set of questions that practitioners should be asking in a Big Data project", "The Semantic Web, especially in the light of the current focus on its nature as a Web of Data, is a data-centric system, and arguably the largest such system in existence. Data is being created, published, exported, imported, used, transformed and re-used, by different parties and for different purposes. Together, these actions form a lifecycle of data on the Semantic Web. Understanding this lifecycle will help to better understand the nature of data on the SW, to explain paradigm shifts, to compare the functionality of different platforms, to aid the integration of previously disparate implementation efforts or to position various actors on the SW and relate them to each other. However, while conceptualisations of many aspects of the SW exist, no exhaustive data lifecycle has been proposed. This paper proposes a data lifecycle model for the Semantic Web by first looking outward, and performing an extensive survey of lifecycle models in other data-centric domains, such as digital libraries, multimedia, eLearning, knowledge and Web content management or ontology development. For each domain, an extensive list of models is taken from the literature, and then described and analysed in terms of its different phases, actor roles and other characteristics. By contrasting and comparing the existing models, a meta vocabulary of lifecycle models for data-centric systems — the Abstract Data Lifecycle Model, or ADLM — is developed. In particular, a common set of lifecycle phases, lifecycle features and lifecycle roles is established, as well as additional actor features and generic features of data and metadata. This vocabulary now provides a tool to describe each individual model, relate them to each other, determine similarities and overlaps and eventually establish a new such model for the Semantic Web." ] }
1901.11267
2949631225
Lifecycle models for research data are often abstract and simple. This comes at the danger of oversimplifying the complex concepts of research data management. The analysis of 90 different lifecycle models lead to two approaches to assess the quality of these models. While terminological issues make direct comparisons of models hard, an empirical evaluation seems possible.
@cite_1 and @cite_7 both propose a lifecycle model for Big Data. Although they model the same phenomena, the models are not similar. While @cite_7 does not describe evaluation criteria for its model, @cite_1 proposes the 6Vs of Big Data (Value, Volume, Variety, Velocity, Variability, Veracity) as a basis for evaluating data lifecycle models in the context of Big Data. This evaluation, carried out in @cite_17 , is also applied to other data lifecycle models to assess their aptness for describing Big Data challenges. It is the most rigorous evaluation we found in the literature, but it is limited to the context of Big Data and is itself based on a theoretical concept instead of empirical evaluation.
{ "cite_N": [ "@cite_1", "@cite_7", "@cite_17" ], "mid": [ "2558036245", "2407478184", "2591999683" ], "abstract": [ "A huge amount of data is constantly being produced in the world. Data coming from the IoT, from scientific simulations, or from any other field of the eScience, are accumulated over historical data sets and set up the seed for future Big Data processing, with the final goal to generate added value and discover knowledge. In such computing processes, data are the main resource, however, organizing and managing data during their entire life cycle becomes a complex research topic. As part of this, Data LifeCycle (DLC) models have been proposed to efficiently organize large and complex data sets, from creation to consumption, in any field, and any scale, for an effective data usage and big data exploitation. 2. Several DLC frameworks can be found in the literature, each one defined for specific environments and scenarios. However, we realized that there is no global and comprehensive DLC model to be easily adapted to different scientific areas. For this reason, in this paper we describe the Comprehensive Scenario Agnostic Data LifeCycle (COSA-DLC) model, a DLC model which: i) is proved to be comprehensive as it addresses the 6Vs challenges (namely Value, Volume, Variety, Velocity, Variability and Veracity, and ii), it can be easily adapted to any particular scenario and, therefore, fit the requirements of a specific scientific field. In this paper we also include two use cases to illustrate the ease of the adaptation in different scenarios. We conclude that the comprehensive scenario agnostic DLC model provides several advantages, such as facilitating global data management, organization and integration, easing the adaptation to any kind of scenario, guaranteeing good data quality levels and, therefore, saving design time and efforts for the scientific and industrial communities.", "As science becomes more data-intensive and collaborative, researchers increasingly use larger and more complex data to answer research questions. The capacity of storage infrastructure, the increased sophistication and deployment of sensors, the ubiquitous availability of computer clusters, the development of new analysis techniques, and larger collaborations allow researchers to address grand societ al challenges in a way that is unprecedented. In parallel, research data repositories have been built to host research data in response to the requirements of sponsors that research data be publicly available. Libraries are re-inventing themselves to respond to a growing demand to manage, store, curate and preserve the data produced in the course of publicly funded research. As librarians and data managers are developing the tools and knowledge they need to meet these new expectations, they inevitably encounter conversations around Big Data. This paper explores definitions of Big Data that have coalesced in the last decade around four commonly mentioned characteristics: volume, variety, velocity, and veracity. We highlight the issues associated with each characteristic, particularly their impact on data management and curation. We use the methodological framework of the data life cycle model, assessing two models developed in the context of Big Data projects and find them lacking. We propose a Big Data life cycle model that includes activities focused on Big Data and more closely integrates curation with the research life cycle. 
These activities include planning, acquiring, preparing, analyzing, preserving, and discovering, with describing the data and assuring quality being an integral part of each activity. We discuss the relationship between institutional data curation repositories and new long-term data resources associated with high performance computing centers, and reproducibility in computational science. We apply this model by mapping the four characteristics of Big Data outlined above to each of the activities in the model. This mapping produces a set of questions that practitioners should be asking in a Big Data project", "There is a vast amount of data being generated every day in the world, coming from a variety of sources, with different formats, quality levels, etc. This new data, together with the archived historical data, constitute the seed for future knowledge discovery and value generation in several fields of eScience. Discovering value from data is a complex computing process where data is the key resource, not only during its processing, but also during its entire life cycle. However, there is still a huge concern about how to organize and manage this data in all fields, and at all scales, for efficient usage and exploitation during all data life cycles. Although several specific Data LifeCycle (DLC) models have been recently defined for particular scenarios, we argue that there is no global and comprehensive DLC framework to be widely used in different fields. For this reason, in this paper we present and describe a comprehensive scenario agnostic Data LifeCycle (COSA-DLC) model successfully addressing all challenges included in the 6Vs, namely Value, Volume, Variety, Velocity, Variability and Veracity, not tailored to any specific environment, but easy to be adapted to fit the requirements of any particular field. We conclude that a comprehensive scenario agnostic DLC model provides several advantages, such as facilitating global data organization and integration, easing the adaptation to any kind of scenario, guaranteeing good quality data levels, and helping save design time and efforts for the research and industrial communities." ] }
1901.11267
2949631225
Lifecycle models for research data are often abstract and simple. This comes with the danger of oversimplifying the complex concepts of research data management. The analysis of 90 different lifecycle models led to two approaches for assessing the quality of these models. While terminological issues make direct comparisons of models hard, an empirical evaluation seems possible.
@cite_20 provides a scoping review of 301 articles and 10 companion documents discussing research data management practices in academic institutions between 1995 and 2016. The review is not limited to, but includes, publications discussing data lifecycle models. The discussion includes the observation that, of the papers reviewed, only a few provided empirical evidence for their results, which is in accordance with our findings. The study classifies the papers based on the UK data lifecycle ( https://www.ukdataservice.ac.uk/manage-data/lifecycle ), which fortunately is preserved as an attachment to this paper (its "official" version has changed since the original publication).
{ "cite_N": [ "@cite_20" ], "mid": [ "2617597628" ], "abstract": [ "Objective The purpose of this study is to describe the volume, topics, and methodological nature of the existing research literature on research data management in academic institutions. Materials and methods We conducted a scoping review by searching forty literature databases encompassing a broad range of disciplines from inception to April 2016. We included all study types and data extracted on study design, discipline, data collection tools, and phase of the research data lifecycle. Results We included 301 articles plus 10 companion reports after screening 13,002 titles and abstracts and 654 full-text articles. Most articles (85 ) were published from 2010 onwards and conducted within the sciences (86 ). More than three-quarters of the articles (78 ) reported methods that included interviews, cross-sectional, or case studies. Most articles (68 ) included the Giving Access to Data phase of the UK Data Archive Research Data Lifecycle that examines activities such as sharing data. When studies were grouped into five dominant groupings (Stakeholder, Data, Library, Tool Device, and Publication), data quality emerged as an integral element. Conclusion Most studies relied on self-reports (interviews, surveys) or accounts from an observer (case studies) and we found few studies that collected empirical evidence on activities amongst data producers, particularly those examining the impact of research data management interventions. As well, fewer studies examined research data management at the early phases of research projects. The quality of all research outputs needs attention, from the application of best practices in research data management studies, to data producers depositing data in repositories for long-term use." ] }
1901.11459
2963967134
Cross-lingual Text Classification (CLC) consists of automatically classifying, according to a common set C of classes, documents each written in one of a set of languages L, and doing so more accurately than when “naively” classifying each document via its corresponding language-specific classifier. To obtain an increase in the classification accuracy for a given language, the system thus needs to also leverage the training examples written in the other languages. We tackle “multilabel” CLC via funnelling, a new ensemble learning method that we propose here. Funnelling consists of generating a two-tier classification system where all documents, irrespective of language, are classified by the same (second-tier) classifier. For this classifier, all documents are represented in a common, language-independent feature space consisting of the posterior probabilities generated by first-tier, language-dependent classifiers. This allows the classification of all test documents, of any language, to benefit from the information present in all training documents, of any language. We present substantial experiments, run on publicly available multilingual text collections, in which funnelling is shown to significantly outperform a number of state-of-the-art baselines. All code and datasets (in vector form) are made publicly available.
In the absence of external parallel data, one polylingual DSM which has recently proved worthy (and that we use as a baseline in our experiments) is Lightweight Random Indexing (LRI -- @cite_2 ), the polylingual extension of the Random Indexing (RI) method @cite_22 . RI is a context-counting model belonging to the family of random projection methods, and is considered a cheaper approximation of LSA @cite_47 . LRI is designed so that the orthogonality of the projection base is maximized, which makes it possible to preserve sparsity and to maximize the contribution of the information conveyed by the features shared across languages.
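As a rough illustration of the random-indexing idea described above, the following Python sketch builds document vectors by summing sparse ternary index vectors; the dimensionality, the number of non-zero entries, and the hash-based seeding are arbitrary choices made for this sketch, and it is not the LRI implementation of @cite_2 .

    import hashlib
    import numpy as np

    def index_vector(term, dim=500, nnz=10):
        """Sparse ternary index vector for a term: mostly zeros, a few +1/-1 entries."""
        seed = int(hashlib.md5(term.encode("utf-8")).hexdigest(), 16) % (2 ** 32)
        rng = np.random.default_rng(seed)  # same term -> same index vector
        v = np.zeros(dim)
        positions = rng.choice(dim, size=nnz, replace=False)
        v[positions] = rng.choice([-1.0, 1.0], size=nnz)
        return v

    def random_indexing(documents, dim=500):
        """Represent each document as the sum of the index vectors of its terms."""
        return np.vstack([sum(index_vector(t, dim) for t in doc.split()) for doc in documents])

    docs = ["the cat sat on the mat", "il gatto era sul tappeto"]
    X = random_indexing(docs)
    print(X.shape)  # (2, 500)

LRI differs from this generic sketch mainly in how the index vectors are constructed, so that the features shared across languages contribute as much as possible while near-orthogonality and sparsity are preserved.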
{ "cite_N": [ "@cite_47", "@cite_22", "@cite_2" ], "mid": [ "188912188", "2070589943", "2549102774" ], "abstract": [ "Word space models enjoy considerable attention in current research on semantic indexing. Most notably, Latent Semantic Analysis Indexing (LSA LSI; , 1990, Landauer & Dumais, 1997) has become a household name in information access research, and deservedly so; LSA has proven its mettle in numerous applications, and has more or less spawned an entire research field since its introduction around 1990. Today, there is a rich flora of word space models available, and there are numerous publications that report exceptional results in many different applications, including information retrieval (, 1988), word sense disambiguation (Schutze, 1993), various semantic knowledge tests (, 1995, Karlgren & Sahlgren, 2001), and text categorization (Sahlgren & Karlgren, 2004). This paper introduces the Random Indexing word space approach, which presents an efficient, scalable and incremental alternative to standard word space methods. The paper is organized as follows: in the next section, we review the basic word space methodology. We then look at some of the problems that are inherent in the basic methodology, and also review some of the solutions that have been proposed in the literature. In the final section, we introduce the Random Indexing word space approach, and briefly review some of the experimental results that have been achieved with Random Indexing.", "This paper investigates the use of concept-based representations for text categorization. We introduce a new approach to create concept-based text representations, and apply it to a standard text categorization collection. The representations are used as input to a Support Vector Machine classifier, and the results show that there are certain categories for which concept-based representations constitute a viable supplement to word-based ones. We also demonstrate how the performance of the Support Vector Machine can be improved by combining representations.", "" ] }
1901.11459
2963967134
Cross-lingual Text Classification (CLC) consists of automatically classifying, according to a common set C of classes, documents each written in one of a set of languages L, and doing so more accurately than when “naively” classifying each document via its corresponding language-specific classifier. To obtain an increase in the classification accuracy for a given language, the system thus needs to also leverage the training examples written in the other languages. We tackle “multilabel” CLC via funnelling, a new ensemble learning method that we propose here. Funnelling consists of generating a two-tier classification system where all documents, irrespective of language, are classified by the same (second-tier) classifier. For this classifier, all documents are represented in a common, language-independent feature space consisting of the posterior probabilities generated by first-tier, language-dependent classifiers. This allows the classification of all test documents, of any language, to benefit from the information present in all training documents, of any language. We present substantial experiments, run on publicly available multilingual text collections, in which funnelling is shown to significantly outperform a number of state-of-the-art baselines. All code and datasets (in vector form) are made publicly available.
Another method that requires external multilingual resources (specifically: a word translation oracle) is Cross-Lingual Structural Correspondence Learning (CL-SCL -- @cite_39 ). CL-SCL relies on solving auxiliary prediction problems, which consist in discovering hidden correlations between terms in a language. This is achieved by binary classifiers trained to predict the presence of highly discriminative terms ("pivots") given the other terms in the document. The cross-lingual aspect is addressed by imposing that pivot terms are aligned (i.e., translations of each other) across languages, which requires a word translation oracle. A stronger, more recent variant of CL-SCL (which we also compare against in our experiments) is Distributional Correspondence Indexing (DCI -- @cite_44 ). DCI derives term representations in a vector space common to all languages, where each dimension reflects the term's distributional correspondence (as quantified by a "distributional correspondence function") to a pivot. Machine Translation (MT) represents an appealing tool for solving PLC, and several PLC methods are indeed based on the use of MT services @cite_40 @cite_42 . However, the drawback of these methods is reduced generality, since it is not always the case that quality MT tools are both (i) available for the required language combinations, and (ii) free to use.
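To make the pivot idea concrete, here is a schematic Python sketch in which each term is re-represented by its correspondence to a small set of pivot terms; plain cosine similarity of term profiles is used as a stand-in correspondence function, whereas the actual DCFs of @cite_44 are more refined, and the toy matrix and pivot choice are invented for the example.

    import numpy as np

    def pivot_projection(X, pivot_cols):
        """Re-represent terms in a pivot space: each term (a column of the
        document-term matrix X, shape docs x terms) becomes a vector of its
        cosine similarity with each pivot term's column."""
        T = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)  # normalise term columns
        P = T[:, pivot_cols]                                        # pivot columns
        return T.T @ P                                              # terms x pivots

    # toy document-term matrix: 4 documents x 5 terms; terms 0 and 2 act as pivots
    X = np.array([[1, 0, 1, 0, 2],
                  [0, 1, 0, 1, 0],
                  [2, 0, 1, 0, 1],
                  [0, 2, 0, 1, 0]], dtype=float)
    term_vectors = pivot_projection(X, pivot_cols=[0, 2])
    print(term_vectors.shape)  # (5, 2): each term described by its correspondence to 2 pivots

In the cross-lingual setting the same pivot list (via translation alignment) is used in every language, so terms from different languages end up in the same pivot space.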
{ "cite_N": [ "@cite_44", "@cite_40", "@cite_42", "@cite_39" ], "mid": [ "2287612586", "2099031744", "2167660864", "2171068337" ], "abstract": [ "Domain Adaptation (DA) techniques aim at enabling machine learning methods learn effective classifiers for a \"target\" domain when the only available training data belongs to a different \"source\" domain. In this paper we present the Distributional Correspondence Indexing (DCI) method for domain adaptation in sentiment classification. DCI derives term representations in a vector space common to both domains where each dimension re ects its distributional correspondence to a pivot, i.e., to a highly predictive term that behaves similarly across domains. Term correspondence is quantified by means of a distributional correspondence function (DCF). We propose a number of efficient DCFs that are motivated by the distributional hypothesis, i.e., the hypothesis according to which terms with similar meaning tend to have similar distributions in text. Experiments show that DCI obtains better performance than current state-of-the-art techniques for cross-lingual and cross-domain sentiment classification. DCI also brings about a significantly reduced computational cost, and requires a smaller amount of human intervention. As a final contribution, we discuss a more challenging formulation of the domain adaptation problem, in which both the cross-domain and cross-lingual dimensions are tackled simultaneously.", "Due to the globalization on the Web, many companies and institutions need to efficiently organize and search repositories containing multilingual documents. The management of these heterogeneous text collections increases the costs significantly because experts of different languages are required to organize these collections. Cross-language text categorization can provide techniques to extend existing automatic classification systems in one language to new languages without requiring additional intervention of human experts. In this paper, we propose a learning algorithm based on the EM scheme which can be used to train text classifiers in a multilingual environment. In particular, in the proposed approach, we assume that a predefined category set and a collection of labeled training data is available for a given language L sub 1 . A classifier for a different language L sub 2 is trained by translating the available labeled training set for L sub 1 to L sub 2 and by using an additional set of unlabeled documents from L sub 2 . This technique allows us to extract correct statistical properties of the language L sub 2 which are not completely available in automatically translated examples, because of the different characteristics of language L sub 1 and of the approximation of the translation process. Our experimental results show that the performance of the proposed method is very promising when applied on a test document set extracted from newsgroups in English and Italian.", "The lack of Chinese sentiment corpora limits the research progress on Chinese sentiment classification. However, there are many freely available English sentiment corpora on the Web. This paper focuses on the problem of cross-lingual sentiment classification, which leverages an available English corpus for Chinese sentiment classification by using the English corpus as training data. 
Machine translation services are used for eliminating the language gap between the training set and test set, and English features and Chinese features are considered as two independent views of the classification problem. We propose a cotraining approach to making use of unlabeled Chinese data. Experimental results show the effectiveness of the proposed approach, which can outperform the standard inductive classifiers and the transductive classifiers.", "We present a new approach to cross-language text classification that builds on structural correspondence learning, a recently proposed theory for domain adaptation. The approach uses unlabeled documents, along with a simple word translation oracle, in order to induce task-specific, cross-lingual word correspondences. We report on analyses that reveal quantitative insights about the use of unlabeled data and the complexity of inter-language correspondence modeling. We conduct experiments in the field of cross-language sentiment classification, employing English as source language, and German, French, and Japanese as target languages. The results are convincing; they demonstrate both the robustness and the competitiveness of the presented ideas." ] }
1901.11459
2963967134
Cross-lingual Text Classification (CLC) consists of automatically classifying, according to a common set C of classes, documents each written in one of a set of languages L, and doing so more accurately than when “naively” classifying each document via its corresponding language-specific classifier. To obtain an increase in the classification accuracy for a given language, the system thus needs to also leverage the training examples written in the other languages. We tackle “multilabel” CLC via funnelling, a new ensemble learning method that we propose here. Funnelling consists of generating a two-tier classification system where all documents, irrespective of language, are classified by the same (second-tier) classifier. For this classifier, all documents are represented in a common, language-independent feature space consisting of the posterior probabilities generated by first-tier, language-dependent classifiers. This allows the classification of all test documents, of any language, to benefit from the information present in all training documents, of any language. We present substantial experiments, run on publicly available multilingual text collections, in which funnelling is shown to significantly outperform a number of state-of-the-art baselines. All code and datasets (in vector form) are made publicly available.
Approaches to PLC based on deep learning focus on defining representations based on word embeddings which capture the semantic regularities in language while at the same time being aligned across languages. In order to produce aligned representations, though, deep learning approaches typically require the availability of external parallel corpora @cite_37 @cite_46 , bilingual lexicons @cite_34 , or machine translation tools @cite_3 . Recently, the authors of @cite_7 proposed a method to align monolingual word embedding spaces (such as those produced by, e.g., Word2Vec @cite_48 ) from different languages without requiring parallel data. To this aim, @cite_7 proposed an adversarial training process in which a generator (in charge of mapping the source embeddings onto the target space) is trained to fool a discriminator, preventing it from distinguishing the provenance of the embeddings, i.e., from understanding whether the embeddings it receives as input come from the (transformed) source or from the target space. After that, the mapping is refined by means of unsupervised techniques. Despite operating without parallel resources, @cite_7 obtained state-of-the-art multilingual mappings, which they later made publicly available at https://github.com/facebookresearch/MUSE and which we use as a further baseline in our experiments of Section .
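The following PyTorch sketch shows this adversarial-alignment game in miniature: a linear mapping plays the role of the generator and a small feed-forward network plays the discriminator. Random embeddings, dimensions, optimisers, and learning rates are placeholders; this is not the MUSE code, and the subsequent unsupervised refinement step is omitted.

    import torch
    import torch.nn as nn

    d = 300
    src = torch.randn(1000, d)   # source-language embeddings (random stand-ins)
    tgt = torch.randn(1000, d)   # target-language embeddings (random stand-ins)

    W = nn.Linear(d, d, bias=False)                                      # mapping ("generator")
    D = nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, 1))   # discriminator
    opt_w = torch.optim.Adam(W.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(100):
        i = torch.randint(0, 1000, (64,))
        mapped, real = W(src[i]), tgt[i]
        # discriminator: tell mapped-source (label 0) apart from target (label 1)
        d_loss = bce(D(mapped.detach()), torch.zeros(64, 1)) + bce(D(real), torch.ones(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # mapping: fool the discriminator into labelling mapped-source as target
        w_loss = bce(D(W(src[i])), torch.ones(64, 1))
        opt_w.zero_grad(); w_loss.backward(); opt_w.step()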
{ "cite_N": [ "@cite_37", "@cite_7", "@cite_48", "@cite_3", "@cite_46", "@cite_34" ], "mid": [ "2251033195", "2762484717", "2950133940", "2340588715", "2952037945", "342285082" ], "abstract": [ "Distributed representations of words have proven extremely useful in numerous natural language processing tasks. Their appeal is that they can help alleviate data sparsity problems common to supervised learning. Methods for inducing these representations require only unlabeled language data, which are plentiful for many natural languages. In this work, we induce distributed representations for a pair of languages jointly. We treat it as a multitask learning problem where each task corresponds to a single word, and task relatedness is derived from co-occurrence statistics in bilingual parallel data. These representations can be used for a number of crosslingual learning tasks, where a learner can be trained on annotations present in one language and applied to test data in another. We show that our representations are informative by using them for crosslingual document classification, where classifiers trained on these representations substantially outperform strong baselines (e.g. machine translation) when applied to a new language.", "State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora. Recent studies showed that the need for parallel data supervision can be alleviated with character-level information. While these methods showed encouraging results, they are not on par with their supervised counterparts and are limited to pairs of languages sharing a common alphabet. In this work, we show that we can build a bilingual dictionary between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way. Without using any character information, our model even outperforms existing supervised methods on cross-lingual tasks for some language pairs. Our experiments demonstrate that our method works very well also for distant language pairs, like English-Russian or English-Chinese. We finally describe experiments on the English-Esperanto low-resource language pair, on which there only exists a limited amount of parallel data, to show the potential impact of our method in fully unsupervised machine translation. Our code, embeddings and dictionaries are publicly available.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "We propose a Polylingual text Embedding (PE) strategy, that learns a language independent representation of texts using Neu-ral Networks. 
We study the effects of bilingual representation learning for text classification and we empirically show that the learned representations achieve better classification performance compared to traditional bag-of-words and other monolingual distributed representations. The performance gains are more significant in the interesting case where only few labeled examples are available for training the classifiers.", "We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple and computationally-efficient model for learning bilingual distributed representations of words which can scale to large monolingual datasets and does not require word-aligned parallel training data. Instead it trains directly on monolingual data and extracts a bilingual signal from a smaller set of raw-text sentence-aligned data. This is achieved using a novel sampled bag-of-words cross-lingual objective, which is used to regularize two noise-contrastive language models for efficient cross-lingual feature learning. We show that bilingual embeddings learned using the proposed model outperform state-of-the-art methods on a cross-lingual document classification task as well as a lexical translation task on WMT11 data.", "The distributional hypothesis of Harris (1954), according to which the meaning of words is evidenced by the contexts they occur in, has motivated several effective techniques for obtaining vector space semantic representations of words using unannotated text corpora. This paper argues that lexico-semantic content should additionally be invariant across languages and proposes a simple technique based on canonical correlation analysis (CCA) for incorporating multilingual evidence into vectors generated monolingually. We evaluate the resulting word representations on standard lexical semantic evaluation tasks and show that our method produces substantially better semantic representations than monolingual techniques." ] }
1901.11459
2963967134
Cross-lingual Text Classification (CLC) consists of automatically classifying, according to a common set C of classes, documents each written in one of a set of languages L, and doing so more accurately than when “naively” classifying each document via its corresponding language-specific classifier. To obtain an increase in the classification accuracy for a given language, the system thus needs to also leverage the training examples written in the other languages. We tackle “multilabel” CLC via funnelling, a new ensemble learning method that we propose here. Funnelling consists of generating a two-tier classification system where all documents, irrespective of language, are classified by the same (second-tier) classifier. For this classifier, all documents are represented in a common, language-independent feature space consisting of the posterior probabilities generated by first-tier, language-dependent classifiers. This allows the classification of all test documents, of any language, to benefit from the information present in all training documents, of any language. We present substantial experiments, run on publicly available multilingual text collections, in which funnelling is shown to significantly outperform a number of state-of-the-art baselines. All code and datasets (in vector form) are made publicly available.
Funnelling is reminiscent of the stacked generalization (a.k.a. "stacking") method for ensemble learning @cite_36 . Let us discuss their commonalities and differences.
{ "cite_N": [ "@cite_36" ], "mid": [ "28412257" ], "abstract": [ "This paper introduces stacked generalization, a scheme for minimizing the generalization error rate of one or more generalizers. Stacked generalization works by deducing the biases of the generalizer(s) with respect to a provided learning set. This deduction proceeds by generalizing in a second space whose inputs are (for example) the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is (for example) the correct guess. When used with multiple generalizers, stacked generalization can be seen as a more sophisticated version of cross-validation, exploiting a strategy more sophisticated than cross-validation's crude winner-takes-all for combining the individual generalizers. When used with a single generalizer, stacked generalization is a scheme for estimating (and then correcting for) the error of a generalizer which has been trained on a particular learning set and then asked a particular question. After introducing stacked generalization and justifying its use, this paper presents two numerical experiments. The first demonstrates how stacked generalization improves upon a set of separate generalizers for the NETtalk task of translating text to phonemes. The second demonstrates how stacked generalization improves the performance of a single surface-fitter. With the other experimental evidence in the literature, the usual arguments supporting cross-validation, and the abstract justifications presented in this paper, the conclusion is that for almost any real-world generalization problem one should use some version of stacked generalization to minimize the generalization error rate. This paper ends by discussing some of the variations of stacked generalization, and how it touches on other fields like chaos theory." ] }
1901.11459
2963967134
Cross-lingual Text Classification (CLC) consists of automatically classifying, according to a common set C of classes, documents each written in one of a set of languages L, and doing so more accurately than when “naively” classifying each document via its corresponding language-specific classifier. To obtain an increase in the classification accuracy for a given language, the system thus needs to also leverage the training examples written in the other languages. We tackle “multilabel” CLC via funnelling, a new ensemble learning method that we propose here. Funnelling consists of generating a two-tier classification system where all documents, irrespective of language, are classified by the same (second-tier) classifier. For this classifier, all documents are represented in a common, language-independent feature space consisting of the posterior probabilities generated by first-tier, language-dependent classifiers. This allows the classification of all test documents, of any language, to benefit from the information present in all training documents, of any language. We present substantial experiments, run on publicly available multilingual text collections, in which funnelling is shown to significantly outperform a number of state-of-the-art baselines. All code and datasets (in vector form) are made publicly available.
Common to stacking and funnelling is the presence of an ensemble of @math base classifiers, typically trained on "traditional" vectorial representations, and the presence of a single meta-classifier that operates on vectors of base-classifier outputs. Common to stacking and funnelling is also the use of @math -fold cross-validation in order to generate the vectors of base-classifier outputs that are used to train the meta-classifier. (Variants of stacking in which @math -fold cross-validation is not used also exist @cite_26 .)
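A schematic scikit-learn sketch of this two-tier scheme is given below: cross_val_predict produces the base-classifier posteriors on which the meta-classifier is trained, so that no training document is scored by a base classifier that has seen it. The use of a single base classifier, logistic regression, and k=10 are simplifying choices of the sketch, not the authors' setup, in which there is one base classifier per language.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict

    X, y = make_classification(n_samples=500, n_features=50, n_informative=10, random_state=0)

    # first tier: k-fold CV yields posteriors for every training document
    # without letting the base classifier "see" that document during training
    base = LogisticRegression(max_iter=1000)
    P = cross_val_predict(base, X, y, cv=10, method="predict_proba")

    # second tier: the meta-classifier is trained on the vectors of posteriors
    meta = LogisticRegression(max_iter=1000)
    meta.fit(P, y)

    # at prediction time the base classifier (refit on all data) feeds the meta-classifier
    base.fit(X, y)
    print(meta.predict(base.predict_proba(X[:5])))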
{ "cite_N": [ "@cite_26" ], "mid": [ "2171622762" ], "abstract": [ "We evaluate empirically a scheme for combining classifiers, known as stacked generalization, in the context of anti-spam filtering, a novel cost-sensitive application of text categorization. Unsolicited commercial e- mail, or \"spam\", floods mailboxes, causing frustration, wasting bandwidth, and exposing minors to unsuitable content. Using a public corpus, we show that stacking can improve the efficiency of automatically induced anti-spam filters, and that such filters can be used in real- life applications." ] }
1901.11459
2963967134
Cross-lingual Text Classification (CLC) consists of automatically classifying, according to a common set C of classes, documents each written in one of a set of languages L, and doing so more accurately than when “naively” classifying each document via its corresponding language-specific classifier. To obtain an increase in the classification accuracy for a given language, the system thus needs to also leverage the training examples written in the other languages. We tackle “multilabel” CLC via funnelling, a new ensemble learning method that we propose here. Funnelling consists of generating a two-tier classification system where all documents, irrespective of language, are classified by the same (second-tier) classifier. For this classifier, all documents are represented in a common, language-independent feature space consisting of the posterior probabilities generated by first-tier, language-dependent classifiers. This allows the classification of all test documents, of any language, to benefit from the information present in all training documents, of any language. We present substantial experiments, run on publicly available multilingual text collections, in which funnelling is shown to significantly outperform a number of state-of-the-art baselines. All code and datasets (in vector form) are made publicly available.
However, a key difference between the two methods is that stacking (like other ensemble methods such as bagging @cite_10 and boosting @cite_35 ) deals with ("homogeneous") scenarios in which all training documents can in principle be represented in the same feature space and can thus concur to training the same classifier; in turn, this classifier can be used for classifying all the unlabelled documents. In stacking, the base classifiers sometimes differ in terms of the learning algorithm used to train them @cite_26 @cite_41 , or in terms of the subsets of the training set which are used for training them @cite_14 . In other words, in these scenarios setting up an ensemble is a choice, and not a necessity. It is instead a necessity in the ("heterogeneous") scenarios which funnelling deals with, where labelled documents of different types (in our case: languages) could not otherwise concur to training the same classifier (since they lie in different feature spaces), and where unlabelled documents could not (for analogous reasons) be classified by the same classifier.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_26", "@cite_41", "@cite_10" ], "mid": [ "2112076978", "1567125377", "2171622762", "1645816215", "" ], "abstract": [ "In an earlier paper, we introduced a new \"boosting\" algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that con- sistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a \"pseudo-loss\" which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's \"bagging\" method when used to aggregate various classifiers (including decision trees and single attribute- value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.", "In this paper, we describe a general approach to scaling data mining applications that we have come to call meta-learning. Meta-Learning refers to a general strategy that seeks to learn how to combine a number of separate learning processes in an intelligent fashion. We desire a meta-learning architecture that exhibits two key behaviors. First, the meta-learning strategy must produce an accurate final classification system. This means that a meta-learning architecture must produce a final outcome that is at least as accurate as a conventional learning algorithm applied to all available data. Second, it must be fast, relative to an individual sequential learning algorithm when applied to massive databases of examples, and operate in a reasonable amount of time. This paper focussed primarily on issues related to the accuracy and efficacy of meta-learning as a general strategy. A number of empirical results are presented demonstrating that meta-learning is technically feasible in wide-area, network computing environments.", "We evaluate empirically a scheme for combining classifiers, known as stacked generalization, in the context of anti-spam filtering, a novel cost-sensitive application of text categorization. Unsolicited commercial e- mail, or \"spam\", floods mailboxes, causing frustration, wasting bandwidth, and exposing minors to unsuitable content. Using a public corpus, we show that stacking can improve the efficiency of automatically induced anti-spam filters, and that such filters can be used in real- life applications.", "Stacked generalization is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy. In this paper we address two crucial issues which have been considered to be a 'black art' in classification tasks ever since the introduction of stacked generalization in 1992 by Wolpert: the type of generalizer that is suitable to derive the higher-level model, and the kind of attributes that should be used as its input. We find that best results are obtained when the higher-level model combines the confidence (and not just the predictions) of the lower-level ones. We demonstrate the effectiveness of stacked generalization for combining three different types of learning algorithms for classification tasks. 
We also compare the performance of stacked generalization with majority vote and published results of arcing and bagging.", "" ] }
1901.11459
2963967134
Cross-lingual Text Classification (CLC) consists of automatically classifying, according to a common set C of classes, documents each written in one of a set of languages L, and doing so more accurately than when “naively” classifying each document via its corresponding language-specific classifier. To obtain an increase in the classification accuracy for a given language, the system thus needs to also leverage the training examples written in the other languages. We tackle “multilabel” CLC via funnelling, a new ensemble learning method that we propose here. Funnelling consists of generating a two-tier classification system where all documents, irrespective of language, are classified by the same (second-tier) classifier. For this classifier, all documents are represented in a common, language-independent feature space consisting of the posterior probabilities generated by first-tier, language-dependent classifiers. This allows the classification of all test documents, of any language, to benefit from the information present in all training documents, of any language. We present substantial experiments, run on publicly available multilingual text collections, in which funnelling is shown to significantly outperform a number of state-of-the-art baselines. All code and datasets (in vector form) are made publicly available.
Funnelling certainly allows exploiting the correlations among classes that exist in multilabel settings @cite_28 @cite_5 @cite_18 , which is not possible when (as customarily done) a multilabel classification task is solved as @math independent binary classification problems. In fact, for an unlabelled document @math the meta-classifier receives @math inputs from the base classifier which has classified @math , and returns @math outputs, which means that the input for class @math has a potential impact on the output for class @math , for every choice of @math and @math . For instance, the fact that for @math the posterior probability for class Skiing is high might bring additional evidence that @math belongs to class Snowboarding; this could be the result of several training documents labelled with Snowboarding having, in their @math vectors, a high value for class Skiing.
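A tiny numerical illustration of this cross-class influence (the class names, weights, and posteriors are invented for the example): a linear meta-classifier whose Snowboarding weight vector has a positive component on the Skiing input lets Skiing evidence raise the Snowboarding score.

    import numpy as np

    # posteriors from the first tier for one document, over C = {Skiing, Snowboarding}
    p = np.array([0.9, 0.3])          # high Skiing posterior, modest Snowboarding posterior

    # hypothetical weights learned by a linear meta-classifier for class Snowboarding;
    # the positive weight on the Skiing input is what independent binary classifiers lack
    w_snowboarding = np.array([1.2, 2.0])
    b_snowboarding = -1.5
    score = w_snowboarding @ p + b_snowboarding          # 1.08 + 0.6 - 1.5 = 0.18
    print(1 / (1 + np.exp(-score)))                      # ~0.54: above 0.5 thanks to Skiing
    # with the Skiing input zeroed out the score would be -0.9 (probability ~0.29)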
{ "cite_N": [ "@cite_28", "@cite_5", "@cite_18" ], "mid": [ "66588809", "2108672713", "2146241755" ], "abstract": [ "In this paper we present methods of enhancing existing discriminative classifiers for multi-labeled predictions. Discriminative methods like support vector machines perform very well for uni-labeled text classification tasks. Multi-labeled classification is a harder task subject to relatively less attention. In the multi-labeled setting, classes are often related to each other or part of a is-a hierarchy. We present a new technique for combining text features and features indicating relationships between classes, which can be used with any discriminative algorithm. We also present two enhancements to the margin of SVMs for building better models in the presence of overlapping classes. We present results of experiments on real world text benchmark datasets. Our new methods beat accuracy of existing methods with statistically significant improvements.", "Music listeners frequently use words to describe music. Personalized music recommendation systems such as Last.fm and Pandora rely on manual annotations (tags) as a mechanism for querying and navigating large music collections. A well-known issue in such recommendation systems is known as the cold-start problem: it is not possible to recommend new songs tracks until those songs tracks have been manually annotated. Automatic tag annotation based on content analysis is a potential solution to this problem and has recently been gaining attention. We describe how stacked generalization can be used to improve the performance of a state-of-the-art automatic tag annotation system for music based on audio content analysis and report results on two publicly available datasets.", "Multi-label classification methods are increasingly required by modern applications, such as protein function classification, music categorization, and semantic scene classification. This article introduces the task of multi-label classification, organizes the sparse related literature into a structured presentation and performs comparative experimental results of certain multilabel classification methods. It also contributes the definition of concepts for the quantification of the multi-label nature of a data set." ] }
1901.11382
2950001871
In the big data era, the impetus to digitize the vast reservoirs of data trapped in unstructured scanned documents such as invoices, bank documents and courier receipts has gained fresh momentum. The scanning process often results in the introduction of artifacts such as background noise, blur due to camera motion, watermarkings, coffee stains, or faded text. These artifacts pose many readability challenges to current text recognition algorithms and significantly degrade their performance. Existing learning based denoising techniques require a dataset comprising of noisy documents paired with cleaned versions. In such scenarios, a model can be trained to generate clean documents from noisy versions. However, very often in the real world such a paired dataset is not available, and all we have for training our denoising model are unpaired sets of noisy and clean images. This paper explores the use of GANs to generate denoised versions of the noisy documents. In particular, where paired information is available, we formulate the problem as an image-to-image translation task i.e, translating a document from noisy domain ( i.e., background noise, blurred, faded, watermarked ) to a target clean document using Generative Adversarial Networks (GAN). However, in the absence of paired images for training, we employed CycleGAN which is known to learn a mapping between the distributions of the noisy images to the denoised images using unpaired data to achieve image-to-image translation for cleaning the noisy documents. We compare the performance of CycleGAN for document cleaning tasks using unpaired images with a Conditional GAN trained on paired data from the same dataset. Experiments were performed on a public document dataset on which different types of noise were artificially induced, results demonstrate that CycleGAN learns a more robust mapping from the space of noisy to clean documents.
The Generative Adversarial Network (GAN) @cite_6 is an idea that has taken deep learning by storm. It employs adversarial training, which essentially means pitting two neural networks against each other. One is a generator while the other is a discriminator: the former aims at producing data that are indistinguishable from real data, while the latter tries to distinguish between real and fake data. The process eventually yields a generator with the ability to perform a plethora of tasks efficiently, such as image-to-image generation. Other notable applications where GANs have established their supremacy are representation learning, image editing, art generation, music generation, etc. @cite_20 @cite_7 @cite_21 @cite_3 @cite_17 .
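A minimal PyTorch sketch of the adversarial game on toy 2-D data follows; the architectures, learning rates, and toy data distribution are arbitrary choices of this sketch and unrelated to the models discussed later.

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> fake sample
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(200):
        real = torch.randn(64, 2) * 0.5 + 3.0          # "real" data: a shifted Gaussian
        fake = G(torch.randn(64, 8))
        # discriminator: push real samples towards label 1, generated samples towards 0
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # generator: make the discriminator label its samples as real
        g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()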
{ "cite_N": [ "@cite_7", "@cite_21", "@cite_3", "@cite_6", "@cite_20", "@cite_17" ], "mid": [ "2766910785", "2797030127", "2762941833", "1710476689", "2434741482", "2963684088" ], "abstract": [ "It is known that the inconsistent distribution and representation of different modalities, such as image and text, cause the heterogeneity gap that makes it challenging to correlate such heterogeneous data. Generative adversarial networks (GANs) have shown its strong ability of modeling data distribution and learning discriminative representation, existing GANs-based works mainly focus on generative problem to generate new data. We have different goal, aim to correlate heterogeneous data, by utilizing the power of GANs to model cross-modal joint distribution. Thus, we propose Cross-modal GANs to learn discriminative common representation for bridging heterogeneity gap. The main contributions are: (1) Cross-modal GANs architecture is proposed to model joint distribution over data of different modalities. The inter-modality and intra-modality correlation can be explored simultaneously in generative and discriminative models. Both of them beat each other to promote cross-modal correlation learning. (2) Cross-modal convolutional autoencoders with weight-sharing constraint are proposed to form generative model. They can not only exploit cross-modal correlation for learning common representation, but also preserve reconstruction information for capturing semantic consistency within each modality. (3) Cross-modal adversarial mechanism is proposed, which utilizes two kinds of discriminative models to simultaneously conduct intra-modality and inter-modality discrimination. They can mutually boost to make common representation more discriminative by adversarial training process. To the best of our knowledge, our proposed CM-GANs approach is the first to utilize GANs to perform cross-modal common representation learning. Experiments are conducted to verify the performance of our proposed approach on cross-modal retrieval paradigm, compared with 10 methods on 3 cross-modal datasets.", "Data of different modalities generally convey complimentary but heterogeneous information, and a more discriminative representation is often preferred by combining multiple data modalities like the RGB and infrared features. However in reality, obtaining both data channels is challenging due to many limitations. For example, the RGB surveillance cameras are often restricted from private spaces, which is in conflict with the need of abnormal activity detection for personal security. As a result, using partial data channels to build a full representation of multi-modalities is clearly desired. In this paper, we propose a novel Partial-modal Generative Adversarial Networks (PM-GANs) that learns a full-modal representation using data from only partial modalities. The full representation is achieved by a generated representation in place of the missing data channel. Extensive experiments are conducted to verify the performance of our proposed method on action recognition, compared with four state-of-the-art methods. Meanwhile, a new Infrared-Visible Dataset for action recognition is introduced, and will be the first publicly available action dataset that contains paired infrared and visible spectrum.", "With the development of deep learning, supervised learning has frequently been adopted to classify remotely sensed images using convolutional networks. 
However, due to the limited amount of labeled data available, supervised learning is often difficult to carry out. Therefore, we proposed an unsupervised model called multiple-layer feature-matching generative adversarial networks (MARTA GANs) to learn a representation using only unlabeled data. MARTA GANs consists of both a generative model @math and a discriminative model @math . We treat @math as a feature extractor. To fit the complex properties of remote sensing data, we use a fusion layer to merge the mid-level and global features. @math can produce numerous images that are similar to the training data; therefore, @math can learn better representations of remotely sensed images using the training data provided by @math . The classification results on two widely used remote sensing image databases show that the proposed method significantly improves the classification performance compared with other state-of-the-art methods.", "For many AI projects, deep learning techniques are increasingly being used as the building blocks for innovative solutions ranging from image classification to object detection, image segmentation, image similarity, and text analytics (e.g., sentiment analysis, key phrase extraction). GANs, first introduced by (2014), are emerging as a powerful new approach toward teaching computers how to do complex tasks through a generative process. As noted by Yann LeCun (at http: bit.ly LeCunGANs ), GANs are truly the “coolest idea in machine learning in the last 20 years.”", "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.", "Abstract: In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations." ] }
1901.11382
2950001871
In the big data era, the impetus to digitize the vast reservoirs of data trapped in unstructured scanned documents such as invoices, bank documents and courier receipts has gained fresh momentum. The scanning process often results in the introduction of artifacts such as background noise, blur due to camera motion, watermarkings, coffee stains, or faded text. These artifacts pose many readability challenges to current text recognition algorithms and significantly degrade their performance. Existing learning based denoising techniques require a dataset comprising of noisy documents paired with cleaned versions. In such scenarios, a model can be trained to generate clean documents from noisy versions. However, very often in the real world such a paired dataset is not available, and all we have for training our denoising model are unpaired sets of noisy and clean images. This paper explores the use of GANs to generate denoised versions of the noisy documents. In particular, where paired information is available, we formulate the problem as an image-to-image translation task i.e, translating a document from noisy domain ( i.e., background noise, blurred, faded, watermarked ) to a target clean document using Generative Adversarial Networks (GAN). However, in the absence of paired images for training, we employed CycleGAN which is known to learn a mapping between the distributions of the noisy images to the denoised images using unpaired data to achieve image-to-image translation for cleaning the noisy documents. We compare the performance of CycleGAN for document cleaning tasks using unpaired images with a Conditional GAN trained on paired data from the same dataset. Experiments were performed on a public document dataset on which different types of noise were artificially induced, results demonstrate that CycleGAN learns a more robust mapping from the space of noisy to clean documents.
Image-to-image translation is the task of mapping images in a source domain to images in a target domain, such as converting sketches into photographs or grayscale images into color images. The aim is to generate the target distribution given the source distribution. Prior work in the field of GANs such as the Conditional GAN @cite_4 forces the image produced by the generator to be conditioned on the input, which allows for better-controlled translations. However, such approaches require a one-to-one mapping of images between the source and target domains, i.e., a paired dataset. In the case of documents, it is not always possible to have a cleaned document corresponding to each noisy document. This persuaded us to explore unpaired image-to-image translation methods, e.g., DualGAN @cite_19 , which uses dual learning, and CycleGAN @cite_1 , which makes use of a cycle-consistency loss to achieve unpaired image-to-image translation.
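The sketch below shows, on stand-in tensors, how a cycle-consistency term is combined with the adversarial terms for the two document domains (noisy and clean). The generators, discriminators, flattened 256-dimensional "patches", and the weight lambda are placeholders for illustration, not the architecture used in the paper.

    import torch
    import torch.nn as nn

    # placeholder generators and discriminators for the two document domains
    G_nc = nn.Sequential(nn.Linear(256, 256))   # noisy -> clean
    G_cn = nn.Sequential(nn.Linear(256, 256))   # clean -> noisy
    D_c = nn.Sequential(nn.Linear(256, 1))      # "is this a clean document?"
    D_n = nn.Sequential(nn.Linear(256, 1))      # "is this a noisy document?"
    bce, l1, lam = nn.BCEWithLogitsLoss(), nn.L1Loss(), 10.0

    noisy = torch.randn(8, 256)                 # stand-ins for flattened image patches
    clean = torch.randn(8, 256)                 # unpaired with `noisy`

    fake_clean, fake_noisy = G_nc(noisy), G_cn(clean)
    adv = bce(D_c(fake_clean), torch.ones(8, 1)) + bce(D_n(fake_noisy), torch.ones(8, 1))
    # cycle consistency: translating there and back should reproduce the input
    cyc = l1(G_cn(fake_clean), noisy) + l1(G_nc(fake_noisy), clean)
    gen_loss = adv + lam * cyc                  # discriminators are trained separately (omitted)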
{ "cite_N": [ "@cite_1", "@cite_19", "@cite_4" ], "mid": [ "2962793481", "", "2125389028" ], "abstract": [ "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "", "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels." ] }
1901.11382
2950001871
In the big data era, the impetus to digitize the vast reservoirs of data trapped in unstructured scanned documents such as invoices, bank documents and courier receipts has gained fresh momentum. The scanning process often results in the introduction of artifacts such as background noise, blur due to camera motion, watermarkings, coffee stains, or faded text. These artifacts pose many readability challenges to current text recognition algorithms and significantly degrade their performance. Existing learning based denoising techniques require a dataset comprising of noisy documents paired with cleaned versions. In such scenarios, a model can be trained to generate clean documents from noisy versions. However, very often in the real world such a paired dataset is not available, and all we have for training our denoising model are unpaired sets of noisy and clean images. This paper explores the use of GANs to generate denoised versions of the noisy documents. In particular, where paired information is available, we formulate the problem as an image-to-image translation task i.e, translating a document from noisy domain ( i.e., background noise, blurred, faded, watermarked ) to a target clean document using Generative Adversarial Networks (GAN). However, in the absence of paired images for training, we employed CycleGAN which is known to learn a mapping between the distributions of the noisy images to the denoised images using unpaired data to achieve image-to-image translation for cleaning the noisy documents. We compare the performance of CycleGAN for document cleaning tasks using unpaired images with a Conditional GAN trained on paired data from the same dataset. Experiments were performed on a public document dataset on which different types of noise were artificially induced, results demonstrate that CycleGAN learns a more robust mapping from the space of noisy to clean documents.
Very few attempts have been made in the past at removing watermarks from images. The authors of @cite_15 proposed using image inpainting to recover the original image. The method developed in @cite_24 , in turn, first detects the watermark region using statistical methods and subsequently removes it using image inpainting. To the best of our knowledge, there is no prior work on the defading of images.
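A rough OpenCV sketch of such a detect-then-inpaint pipeline is shown below. The threshold value, the inpainting radius, and the assumption that consistently dark pixels across same-sized images belong to the watermark are simplifications made for this sketch, not the statistical detection procedure of @cite_24 .

    import cv2
    import numpy as np

    def remove_watermark(image_paths):
        """Accumulate grayscale maps of images sharing the same watermark region
        (and the same resolution), threshold the accumulation to estimate the
        watermark mask, then inpaint the masked region in every image."""
        grays = [cv2.cvtColor(cv2.imread(p), cv2.COLOR_BGR2GRAY).astype(np.float32)
                 for p in image_paths]
        acc = np.mean(grays, axis=0)
        # pixels that stay consistently dark across images are assumed to be watermark
        mask = (acc < 100).astype(np.uint8) * 255
        return [cv2.inpaint(cv2.imread(p), mask, 3, cv2.INPAINT_TELEA)
                for p in image_paths]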
{ "cite_N": [ "@cite_24", "@cite_15" ], "mid": [ "2783930095", "2766850999" ], "abstract": [ "This paper introduces a technique to remove visible watermark automatically using image inpainting algorithms. The pending images which need watermark re-moval are assumed to have same resolution and watermark region and we will show this assumption is reasonable. Our proposed technique includes two basic step. The first step is detecting the watermark region, we propose a statistical method to detect the watermark region. Thresholding algorithm for segmentation proceeds at the accumulation image which is calculated by accumulation of the gray-scale maps of pending images. The second step is removing the watermark using image inpainting algorithms. Since watermarks are usually with large re-gion areas, an exemplar-based inpainting algorithm through investigating the sparsity of natural image patches is proposed for this step. Experiments were im-plemented in a test image set of 889 images downloaded from a shopping web-site with the resolution of 800∗800 and same watermark regions.", "Abstract In this paper, we propose two schemes for visible-watermark removal and reversible image recovery. In the first scheme, we consider the scenario for the image generated by a specific visible (not completely reversible) watermarking algorithm (2017). A run-length coding based method is utilized to compress the difference between the preliminary recovered image and original image. After embedding the difference information invisibly and reversibly, the final embedded image can be exactly recovered to its original version after visible-watermark removal, which avoids the problem of overflow and underflow in (2017). In the second scheme, the scenario of visible-watermark removal for the image generated by any visible watermarking algorithms (no matter the sender and the receiver know the algorithms or not) is considered. The scheme can perfectly remove the embedded visible watermark and can also exactly recover original image with the assist of image inpainting technique. In addition, for both two proposed schemes, the invalid user without the knowledge of secret key cannot achieve reversible recovery for original image. Experimental results demonstrate the effectiveness and superiority of our schemes." ] }
1901.11461
2949409558
Mesh models are a promising approach for encoding the structure of 3D objects. Current mesh reconstruction systems predict uniformly distributed vertex locations of a predetermined graph through a series of graph convolutions, leading to compromises with respect to performance or resolution. In this paper, we argue that the graph representation of geometric objects allows for additional structure, which should be leveraged for enhanced reconstruction. Thus, we propose a system which properly benefits from the advantages of the geometric structure of graph-encoded objects by introducing (1) a graph convolutional update preserving vertex information; (2) an adaptive splitting heuristic allowing detail to emerge; and (3) a training objective operating both on the local surfaces defined by vertices and on the global structure defined by the mesh. Our proposed method is evaluated on the task of 3D object reconstruction from images with the ShapeNet dataset, where we demonstrate state-of-the-art performance, both visually and numerically, while having far smaller space requirements by generating adaptive meshes.
Mesh models have only recently been used in generation and reconstruction tasks due to the challenging nature of their complex definition @cite_23 . Recent mesh approaches rely on graph representations of meshes and use GCNs @cite_28 to process them effectively. Our work most closely relates to Neural 3D Mesh Renderer @cite_12 and Pixel2Mesh @cite_23 , which use deformations of a generic pre-defined input mesh, generally a sphere, to form 3D structures. Similarly, AtlasNet @cite_53 uses deformations over a set of primitive square faces to form 3D shapes. Conceptually similar, numerous papers use class-specific input meshes which are deformed with respect to the given input image @cite_51 @cite_33 @cite_29 @cite_24 @cite_10 @cite_44 . While effective, these approaches require prior knowledge of the target class or access to a model repository.
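For intuition, the sketch below shows the core of a deformation-based predictor in PyTorch: a small head maps image features plus template vertex coordinates to per-vertex offsets that are added to a fixed template (e.g., a sphere). The layer sizes and the simple MLP head are placeholders and do not correspond to any specific cited architecture.

```python
import torch
import torch.nn as nn

class TemplateDeformer(nn.Module):
    def __init__(self, template_vertices, feat_dim=256):
        super().__init__()
        # (V, 3) template mesh vertices, e.g., a unit sphere.
        self.register_buffer("template", template_vertices)
        self.offset_head = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def forward(self, image_feat):                  # image_feat: (B, feat_dim)
        B, V = image_feat.shape[0], self.template.shape[0]
        feat = image_feat[:, None, :].expand(B, V, -1)
        verts = self.template[None].expand(B, V, 3)
        offsets = self.offset_head(torch.cat([feat, verts], dim=-1))
        return verts + offsets                      # deformed mesh vertices
```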
{ "cite_N": [ "@cite_33", "@cite_28", "@cite_10", "@cite_53", "@cite_29", "@cite_24", "@cite_44", "@cite_23", "@cite_51", "@cite_12" ], "mid": [ "2790664405", "2519887557", "2883758202", "2962885944", "", "2883993491", "", "2796312544", "2773522905", "2769112042" ], "abstract": [ "We present a learning framework for recovering the 3D shape, camera, and texture of an object from a single image. The shape is represented as a deformable 3D mesh model of an object category where a shape is parameterized by a learned mean shape and per-instance predicted deformation. Our approach allows leveraging an annotated image collection for training, where the deformable model and the 3D prediction mechanism are learned without relying on ground-truth 3D or multi-view supervision. Our representation enables us to go beyond existing 3D prediction approaches by incorporating texture inference as prediction of an image in a canonical appearance space. Additionally, we show that semantic keypoints can be easily associated with the predicted shapes. We present qualitative and quantitative results of our approach on the CUB dataset, and show that we can learn to predict the diverse shapes and textures across birds using only an annotated image collection. We also demonstrate the the applicability of our method for learning the 3D structure of other generic categories.", "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.", "We present a new deep learning approach for matching deformable shapes by introducing Shape Deformation Networks which jointly encode 3D shapes and correspondences. This is achieved by factoring the surface representation into (i) a template, that parameterizes the surface, and (ii) a learnt global feature vector that parameterizes the transformation of the template into the input surface. By predicting this feature for a new shape, we implicitly predict correspondences between this shape and the template. We show that these correspondences can be improved by an additional step which improves the shape feature by minimizing the Chamfer distance between the input and transformed template. We demonstrate that our simple approach improves on state-of-the-art results on the difficult FAUST-inter challenge, with an average correspondence error of 2.88 cm. We show, on the TOSCA dataset, that our method is robust to many types of perturbations, and generalizes to non-human shapes. This robustness allows it to perform well on real unclean, meshes from the SCAPE dataset.", "We introduce a method for learning to generate the surface of 3D shapes. Our approach represents a 3D shape as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape. 
Beyond its novelty, our new shape generation framework, AtlasNet, comes with significant advantages, such as improved precision and generalization capabilities, and the possibility to generate a shape of arbitrary resolution without memory issues. We demonstrate these benefits and compare to strong baselines on the ShapeNet benchmark for two applications: (i) auto-encoding shapes, and (ii) single-view reconstruction from a still image. We also provide results showing its potential for other applications, such as morphing, parametrization, super-resolution, matching, and co-segmentation.", "", "We present a unified framework tackling two problems: class-specific 3D reconstruction from a single image, and generation of new 3D shape samples. These tasks have received considerable attention recently; however, existing approaches rely on 3D supervision, annotation of 2D images with keypoints or poses, and or training with multiple views of each object instance. Our framework is very general: it can be trained in similar settings to these existing approaches, while also supporting weaker supervision scenarios. Importantly, it can be trained purely from 2D images, without ground-truth pose annotations, and with a single view per instance. We employ meshes as an output representation, instead of voxels used in most prior work. This allows us to exploit shading information during training, which previous 2D-supervised methods cannot. Thus, our method can learn to generate and reconstruct concave object classes. We evaluate our approach on synthetic data in various settings, showing that (i) it learns to disentangle shape from pose; (ii) using shading in the loss improves performance; (iii) our model is comparable or superior to state-of-the-art voxel-based approaches on quantitative metrics, while producing results that are visually more pleasing; (iv) it still performs well when given supervision weaker than in prior works.", "", "We propose an end-to-end deep learning architecture that produces a 3D shape in triangular mesh from a single color image. Limited by the nature of deep neural network, previous methods usually represent a 3D shape in volume or point cloud, and it is non-trivial to convert them to the more ready-to-use mesh model. Unlike the existing methods, our network represents 3D mesh in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to make the whole deformation procedure stable, and define various of mesh related losses to capture properties of different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh model with better details, but also achieves higher 3D shape estimation accuracy compared to the state-of-the-art.", "One challenge that remains open in 3D deep learning is how to efficiently represent 3D data to feed deep networks. Recent works have relied on volumetric or point cloud representations, but such approaches suffer from a number of issues such as computational complexity, unordered data, and lack of finer geometry. This paper demonstrates that a mesh representation (i.e. vertices and faces to form polygonal surfaces) is able to capture fine-grained geometry for 3D reconstruction tasks. A mesh however is also unstructured data similar to point clouds. 
We address this problem by proposing a learning framework to infer the parameters of a compact mesh representation rather than learning from the mesh itself. This compact representation encodes a mesh using free-form deformation and a sparse linear combination of models allowing us to reconstruct 3D meshes from single images. In contrast to prior work, we do not rely on silhouettes and landmarks to perform 3D reconstruction. We evaluate our method on synthetic and real-world datasets with very promising results. Our framework efficiently reconstructs 3D objects in a low-dimensional way while preserving its important geometrical aspects.", "For modeling the 3D world behind 2D images, which 3D representation is most appropriate? A polygon mesh is a promising candidate for its compactness and geometric properties. However, it is not straightforward to model a polygon mesh from 2D images using neural networks because the conversion from a mesh to an image, or rendering, involves a discrete operation called rasterization, which prevents back-propagation. Therefore, in this work, we propose an approximate gradient for rasterization that enables the integration of rendering into neural networks. Using this renderer, we perform single-image 3D mesh reconstruction with silhouette image supervision and our system outperforms the existing voxel-based approach. Additionally, we perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time. These applications demonstrate the potential of the integration of a mesh renderer into neural networks and the effectiveness of our proposed renderer." ] }
1901.11461
2949409558
Mesh models are a promising approach for encoding the structure of 3D objects. Current mesh reconstruction systems predict uniformly distributed vertex locations of a predetermined graph through a series of graph convolutions, leading to compromises with respect to performance or resolution. In this paper, we argue that the graph representation of geometric objects allows for additional structure, which should be leveraged for enhanced reconstruction. Thus, we propose a system which properly benefits from the advantages of the geometric structure of graph-encoded objects by introducing (1) a graph convolutional update preserving vertex information; (2) an adaptive splitting heuristic allowing detail to emerge; and (3) a training objective operating both on the local surfaces defined by vertices and on the global structure defined by the mesh. Our proposed method is evaluated on the task of 3D object reconstruction from images with the ShapeNet dataset, where we demonstrate state-of-the-art performance, both visually and numerically, while having far smaller space requirements by generating adaptive meshes.
The great success of convolutional neural networks in numerous image-based tasks @cite_37 @cite_15 @cite_14 @cite_2 @cite_61 has led to increasing efforts to extend deep networks to domains where graph-structured data is ubiquitous.
{ "cite_N": [ "@cite_61", "@cite_37", "@cite_14", "@cite_2", "@cite_15" ], "mid": [ "2963974947", "2302255633", "2963446712", "2559597482", "" ], "abstract": [ "State-of-the-art semantic segmentation approaches increase the receptive field of their models by using either a downsampling path composed of poolings strided convolutions or successive dilated convolutions. However, it is not clear which operation leads to best results. In this paper, we systematically study the differences introduced by distinct receptive field enlargement methods and their impact on the performance of a novel architecture, called Fully Convolutional DenseResNet (FC-DRN). FC-DRN has a densely connected backbone composed of residual networks. Following standard image segmentation architectures, receptive field enlargement operations that change the representation level are interleaved among residual networks. This allows the model to exploit the benefits of both residual and dense connectivity patterns, namely: gradient flow, iterative refinement of representations, multi-scale feature combination and deep supervision. In order to highlight the potential of our model, we test it on the challenging CamVid urban scene understanding benchmark and make the following observations: 1) downsampling operations outperform dilations when the model is trained from scratch, 2) dilations are useful during the finetuning step of the model, 3) coarser representations require less refinement steps, and 4) ResNets (by model construction) are good regularizers, since they can reduce the model capacity when needed. Finally, we compare our architecture to alternative methods and report state-of-the-art result on the Camvid dataset, with at least twice fewer parameters.", "Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62 error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: https: github.com KaimingHe resnet-1k-layers.", "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. 
We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https: github.com liuzhuang13 DenseNet.", "State-of-the-art approaches for semantic image segmentation are built on Convolutional Neural Networks (CNNs). The typical segmentation architecture is composed of (a) a downsampling path responsible for extracting coarse semantic features, followed by (b) an upsampling path trained to recover the input image resolution at the output of the model and, optionally, (c) a post-processing module (e.g. Conditional Random Fields) to refine the model predictions.,,,,,, Recently, a new CNN architecture, Densely Connected Convolutional Networks (DenseNets), has shown excellent results on image classification tasks. The idea of DenseNets is based on the observation that if each layer is directly connected to every other layer in a feed-forward fashion then the network will be more accurate and easier to train.,,,,,, In this paper, we extend DenseNets to deal with the problem of semantic segmentation. We achieve state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech, without any further post-processing module nor pretraining. Moreover, due to smart construction of the model, our approach has much less parameters than currently published best entries for these datasets.", "" ] }
1901.11461
2949409558
Mesh models are a promising approach for encoding the structure of 3D objects. Current mesh reconstruction systems predict uniformly distributed vertex locations of a predetermined graph through a series of graph convolutions, leading to compromises with respect to performance or resolution. In this paper, we argue that the graph representation of geometric objects allows for additional structure, which should be leveraged for enhanced reconstruction. Thus, we propose a system which properly benefits from the advantages of the geometric structure of graph-encoded objects by introducing (1) a graph convolutional update preserving vertex information; (2) an adaptive splitting heuristic allowing detail to emerge; and (3) a training objective operating both on the local surfaces defined by vertices and on the global structure defined by the mesh. Our proposed method is evaluated on the task of 3D object reconstruction from images with the ShapeNet dataset, where we demonstrate state-of-the-art performance, both visually and numerically, while having far smaller space requirements by generating adaptive meshes.
Early attempts to extend neural networks to arbitrarily structured graphs relied on recursive neural networks @cite_17 @cite_64 @cite_34 . Recently, spectral approaches have emerged as an effective alternative that formulates the convolution as an operation on the spectrum of the graph @cite_18 @cite_59 @cite_62 @cite_9 . Methods operating directly on the graph domain have also been presented: one line of work proposed approximating the spectral filters with Chebyshev polynomials of the graph Laplacian, and this approximation was further simplified in subsequent work. Finally, several works have explored well-established deep learning ideas on graphs and improved previously reported results @cite_8 @cite_5 @cite_6 @cite_11 .
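For reference, the simplified propagation rule that emerged from this line of work can be written as H' = sigma(D^{-1/2}(A + I)D^{-1/2} H W); a minimal dense PyTorch implementation is sketched below. Real systems use sparse operations and the exact variant differs across the cited papers, so this is only an illustration of the rule itself.

```python
import torch

def gcn_layer(A, H, W):
    """One simplified spectral graph convolution over a dense adjacency matrix.

    A: (N, N) adjacency, H: (N, F_in) node features, W: (F_in, F_out) weights.
    """
    A_hat = A + torch.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt      # symmetric normalization
    return torch.relu(A_norm @ H @ W)
```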
{ "cite_N": [ "@cite_18", "@cite_64", "@cite_62", "@cite_11", "@cite_8", "@cite_9", "@cite_6", "@cite_59", "@cite_5", "@cite_34", "@cite_17" ], "mid": [ "637153065", "1501856433", "2558748708", "", "", "2618170429", "2558460151", "2964311892", "2962767366", "2116341502", "2127827747" ], "abstract": [ "Deep Learning's recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.", "In several applications the information is naturally represented by graphs. Traditional approaches cope with graphical data structures using a preprocessing phase which transforms the graphs into a set of flat vectors. However, in this way, important topological information may be lost and the achieved results may heavily depend on the preprocessing stage. This paper presents a new neural model, called graph neural network (GNN), capable of directly processing graphs. GNNs extends recursive neural networks and can be applied on most of the practically useful kinds of graphs, including directed, undirected, labelled and cyclic graphs. A learning algorithm for GNNs is proposed and some experiments are discussed which assess the properties of the model.", "Many scientific fields study data with an underlying structure that is non-Euclidean. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural-language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure and in cases where the invariances of these structures are built into networks used to model them.", "", "", "The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. 
Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely-connected graphs, and can handle different constructions of Laplacian operators. Extensive experimental results show the superior performance of our approach, in comparison to other spectral domain convolutional architectures, on spectral image classification, community detection, vertex classification and matrix completion tasks.", "Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most of deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, or computer graphics. In this paper, we propose a unified framework allowing to generalize CNN architectures to non-Euclidean domains (graphs and manifolds) and learn local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image-, graph-and 3D shape analysis and show that it consistently outperforms previous approaches.", "Abstract: Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.", "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. 
Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.", "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G,n) isin IRm that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.", "A structured organization of information is typically required by symbolic processing. On the other hand, most connectionist models assume that data are organized according to relatively poor structures, like arrays or sequences. The framework described in this paper is an attempt to unify adaptive models like artificial neural nets and belief nets for the problem of processing structured information. In particular, relations between data variables are expressed by directed acyclic graphs, where both numerical and categorical values coexist. The general framework proposed in this paper can be regarded as an extension of both recurrent neural networks and hidden Markov models to the case of acyclic graphs. In particular we study the supervised learning problem as the problem of learning transductions from an input structured space to an output structured space, where transductions are assumed to admit a recursive hidden state-space representation. We introduce a graphical formalism for representing this class of adaptive transductions by means of recursive networks, i.e., cyclic graphs where nodes are labeled by variables and edges are labeled by generalized delay elements. This representation makes it possible to incorporate the symbolic and subsymbolic nature of data. Structures are processed by unfolding the recursive network into an acyclic graph called encoding network. In so doing, inference and learning algorithms can be easily inherited from the corresponding algorithms for artificial neural networks or probabilistic graphical model." ] }
1901.10997
2913686213
Many long short-term memory (LSTM) applications need fast yet compact models. Neural network compression approaches, such as the grow-and-prune paradigm, have proved to be promising for cutting down network complexity by skipping insignificant weights. However, current compression strategies are mostly hardware-agnostic, and network complexity reduction does not always translate into execution efficiency. In this work, we propose a hardware-guided symbiotic training methodology for compact, accurate, yet execution-efficient inference models. It is based on our observation that hardware may introduce substantial non-monotonic behavior, which we call the latency hysteresis effect, when evaluating network size vs. inference latency. This observation raises questions about the mainstream smaller-dimension-is-better compression strategy, which often leads to a sub-optimal model architecture. By leveraging the hardware-impacted hysteresis effect and sparsity, we are able to achieve the symbiosis of model compactness and accuracy with execution efficiency, thus reducing LSTM latency while increasing its accuracy. We have evaluated our algorithms on language modeling and speech recognition applications. Relative to the traditional stacked LSTM architecture obtained for the Penn Treebank dataset, we reduce the number of parameters by 18.0x (30.5x) and measured run-time latency by up to 2.4x (5.2x) on Nvidia GPUs (Intel Xeon CPUs) without any accuracy degradation. For the DeepSpeech2 architecture obtained for the AN4 dataset, we reduce the number of parameters by 7.0x (19.4x), the word error rate from 12.9% to 9.9% (10.4%), and measured run-time latency by up to 1.7x (2.4x) on Nvidia GPUs (Intel Xeon CPUs). Thus, our method yields compact, accurate, yet execution-efficient inference models.
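The latency-vs-size behavior described above can be probed with a simple measurement loop such as the sketch below, which times a PyTorch LSTM forward pass across a sweep of hidden sizes. The sizes, sequence shape, and repetition count are arbitrary choices for illustration, not the paper's measurement protocol.

```python
import time
import torch

seq = torch.randn(100, 1, 128)  # (seq_len, batch, input_size)
for hidden in range(200, 401, 8):
    lstm = torch.nn.LSTM(input_size=128, hidden_size=hidden).eval()
    with torch.no_grad():
        lstm(seq)  # warm-up pass
        t0 = time.perf_counter()
        for _ in range(20):
            lstm(seq)
        ms = (time.perf_counter() - t0) / 20 * 1e3
    print(f"hidden={hidden:4d}  latency={ms:.2f} ms")
```

On real hardware, plotting these latencies against the hidden size typically reveals plateaus and jumps rather than a smooth monotonic curve, which is the kind of non-monotonicity the abstract refers to.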
Various attempts have been made to improve the efficiency of LSTM models. One direction focuses on improving the LSTM cell itself. The gated recurrent unit (GRU) utilizes reset and update gates to achieve performance similar to that of an LSTM while reducing computational cost @cite_5 . Quasi-RNN exploits the intrinsic parallelism of time-series data to outperform an LSTM of the same hidden state width @cite_14 . H-LSTM incorporates deeper control gates to reduce the number of external stacked layers; it achieves higher accuracy than the GRU and LSTM with fewer parameters @cite_19 .
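To make the gating structure concrete, here is a single GRU step written out in NumPy. Bias terms are omitted and the weight matrices are assumed to be given, so this is a didactic sketch rather than a drop-in cell implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step (biases omitted for brevity)."""
    z = sigmoid(Wz @ x + Uz @ h_prev)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde        # new hidden state
```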
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_14" ], "mid": [ "2806862281", "2172140247", "2952436057" ], "abstract": [ "Long short-term memory (LSTM) has been widely used for sequential data modeling. Researchers have increased LSTM depth by stacking LSTM cells to improve performance. This incurs model redundancy, increases run-time delay, and makes the LSTMs more prone to overfitting. To address these problems, we propose a hidden-layer LSTM (H-LSTM) that adds hidden layers to LSTM's original one level non-linear control gates. H-LSTM increases accuracy while employing fewer external stacked layers, thus reducing the number of parameters and run-time latency significantly. We employ grow-and-prune (GP) training to iteratively adjust the hidden layers through gradient-based growth and magnitude-based pruning of connections. This learns both the weights and the compact architecture of H-LSTM control gates. We have GP-trained H-LSTMs for image captioning and speech recognition applications. For the NeuralTalk architecture on the MSCOCO dataset, our three models reduce the number of parameters by 38.7x [floating-point operations (FLOPs) by 45.5x], run-time latency by 4.5x, and improve the CIDEr score by 2.6. For the DeepSpeech2 architecture on the AN4 dataset, our two models reduce the number of parameters by 19.4x (FLOPs by 23.5x), run-time latency by 15.7 , and the word error rate from 12.9 to 8.7 . Thus, GP-trained H-LSTMs can be seen to be compact, fast, and accurate.", "Neural machine translation is a relatively new approach to statistical machine translation based purely on neural networks. The neural machine translation models often consist of an encoder and a decoder. The encoder extracts a fixed-length representation from a variable-length input sentence, and the decoder generates a correct translation from this representation. In this paper, we focus on analyzing the properties of the neural machine translation using two models; RNN Encoder--Decoder and a newly proposed gated recursive convolutional neural network. We show that the neural machine translation performs relatively well on short sentences without unknown words, but its performance degrades rapidly as the length of the sentence and the number of unknown words increase. Furthermore, we find that the proposed gated recursive convolutional network learns a grammatical structure of a sentence automatically.", "Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks." ] }
1901.10997
2913686213
Many long short-term memory (LSTM) applications need fast yet compact models. Neural network compression approaches, such as the grow-and-prune paradigm, have proved to be promising for cutting down network complexity by skipping insignificant weights. However, current compression strategies are mostly hardware-agnostic and network complexity reduction does not always translate into execution efficiency. In this work, we propose a hardware-guided symbiotic training methodology for compact, accurate, yet execution-efficient inference models. It is based on our observation that hardware may introduce substantial non-monotonic behavior, which we call the latency hysteresis effect, when evaluating network size vs. inference latency. This observation raises question about the mainstream smaller-dimension-is-better compression strategy, which often leads to a sub-optimal model architecture. By leveraging the hardware-impacted hysteresis effect and sparsity, we are able to achieve the symbiosis of model compactness and accuracy with execution efficiency, thus reducing LSTM latency while increasing its accuracy. We have evaluated our algorithms on language modeling and speech recognition applications. Relative to the traditional stacked LSTM architecture obtained for the Penn Treebank dataset, we reduce the number of parameters by 18.0x (30.5x) and measured run-time latency by up to 2.4x (5.2x) on Nvidia GPUs (Intel Xeon CPUs) without any accuracy degradation. For the DeepSpeech2 architecture obtained for the AN4 dataset, we reduce the number of parameters by 7.0x (19.4x), word error rate from 12.9 to 9.9 (10.4 ), and measured run-time latency by up to 1.7x (2.4x) on Nvidia GPUs (Intel Xeon CPUs). Thus, our method yields compact, accurate, yet execution-efficient inference models.
Network compression techniques, such as the grow-and-prune paradigm, have recently emerged as another direction for reducing LSTM redundancy. Pruning was initially shown to be effective on large CNNs, reducing the number of parameters in AlexNet by 9 @math and in VGG by 13 @math on the well-known ImageNet dataset without any accuracy loss @cite_37 . Follow-up works have successfully scaled this technique to LSTMs @cite_40 @cite_0 @cite_38 . For example, a recent work proposes structured pruning for LSTMs through group LASSO regularization @cite_0 . Network growth is a complementary method to pruning: it enables a sparser yet accurate model to be obtained before pruning starts @cite_31 . A grow-and-prune paradigm typically reduces the number of parameters in CNNs @cite_31 and LSTMs @cite_19 by another 2 @math . However, all these methods are hardware-agnostic. Most of them utilize monotonic optimization metrics, e.g., smaller matrix dimensions or fewer multiply-accumulate operations, and hence optimize towards slimmer or sparser models that do not necessarily translate into execution efficiency.
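A minimal example of the magnitude-based pruning step underlying these approaches is sketched below in PyTorch; the 90% sparsity target is an arbitrary example, and structured (group LASSO) pruning as in @cite_0 would instead remove whole rows or columns rather than individual weights.

```python
import torch

def magnitude_prune(weight, sparsity=0.9):
    """Zero out the smallest-magnitude entries of a weight matrix.

    Returns the pruned weights and a binary mask; the mask is reapplied
    after each optimizer step so pruned weights stay at zero.
    """
    k = max(1, int(weight.numel() * sparsity))
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).float()
    return weight * mask, mask
```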
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_0", "@cite_19", "@cite_40", "@cite_31" ], "mid": [ "2608554408", "2963674932", "2754526845", "2806862281", "", "2768083806" ], "abstract": [ "Recurrent Neural Networks (RNN) are widely used to solve a variety of problems and as the quantity of data and the amount of available compute have increased, so have model sizes. The number of parameters in recent state-of-the-art networks makes them hard to deploy, especially on mobile phones and embedded devices. The challenge is due to both the size of the model and the time it takes to evaluate it. In order to deploy these RNNs efficiently, we propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network. At the end of training, the parameters of the network are sparse while accuracy is still close to the original dense neural network. The network size is reduced by 8x and the time required to train the model remains constant. Additionally, we can prune a larger dense network to achieve better than baseline performance while still reducing the total number of parameters significantly. Pruning RNNs reduces the size of the model and can also help achieve significant inference time speed-up using sparse matrix multiply. Benchmarks show that using our technique model size can be reduced by 90 and speed-up is around 2x to 7x.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves 10.59x speedup without losing any perplexity of a language modeling of Penn TreeBank dataset. 
It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering of SQuAD dataset. Our approach is successfully extended to non- LSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is publicly available at this https URL", "Long short-term memory (LSTM) has been widely used for sequential data modeling. Researchers have increased LSTM depth by stacking LSTM cells to improve performance. This incurs model redundancy, increases run-time delay, and makes the LSTMs more prone to overfitting. To address these problems, we propose a hidden-layer LSTM (H-LSTM) that adds hidden layers to LSTM's original one level non-linear control gates. H-LSTM increases accuracy while employing fewer external stacked layers, thus reducing the number of parameters and run-time latency significantly. We employ grow-and-prune (GP) training to iteratively adjust the hidden layers through gradient-based growth and magnitude-based pruning of connections. This learns both the weights and the compact architecture of H-LSTM control gates. We have GP-trained H-LSTMs for image captioning and speech recognition applications. For the NeuralTalk architecture on the MSCOCO dataset, our three models reduce the number of parameters by 38.7x [floating-point operations (FLOPs) by 45.5x], run-time latency by 4.5x, and improve the CIDEr score by 2.6. For the DeepSpeech2 architecture on the AN4 dataset, our two models reduce the number of parameters by 19.4x (FLOPs by 23.5x), run-time latency by 15.7 , and the word error rate from 12.9 to 8.7 . Thus, GP-trained H-LSTMs can be seen to be compact, fast, and accurate.", "", "Deep neural networks (DNNs) have begun to have a pervasive impact on various applications of machine learning. However, the problem of finding an optimal DNN architecture for large applications is challenging. Common approaches go for deeper and larger DNN architectures but may incur substantial redundancy. To address these problems, we introduce a network growth algorithm that complements network pruning to learn both weights and compact DNN architectures during training. We propose a DNN synthesis tool (NeST) that combines both methods to automate the generation of compact and accurate DNNs. NeST starts with a randomly initialized sparse network called the seed architecture. It iteratively tunes the architecture with gradient-based growth and magnitude-based pruning of neurons and connections. Our experimental results show that NeST yields accurate, yet very compact DNNs, with a wide range of seed architecture selection. For the LeNet-300-100 (LeNet-5) architecture, we reduce network parameters by 70.2x (74.3x) and floating-point operations (FLOPs) by 79.4x (43.7x). For the AlexNet and VGG-16 architectures, we reduce network parameters (FLOPs) by 15.7x (4.6x) and 30.2x (8.6x), respectively. NeST's grow-and-prune paradigm delivers significant additional parameter and FLOPs reduction relative to pruning-only methods." ] }
1901.11168
2972498781
The emergence of continuous health monitoring and the availability of enormous amounts of time series data have provided a great opportunity for the advancement of personal health tracking. In recent years, unsupervised learning methods have drawn the special attention of researchers for tackling the sparse annotation of health data, and the real-time detection of anomalies has been a central problem of interest. However, one problem that has not been well addressed before is the early prediction of forthcoming negative health events. Early signs of an event can introduce subtle and gradual changes in the health signal prior to its onset, the detection of which can be invaluable in effective prevention. In this study, we first demonstrate our observations on the shortcomings of widely adopted anomaly detection methods in uncovering the changes prior to a negative health event. We then propose a framework which relies on online clustering of signal segment representations that are automatically learned by a specially designed LSTM auto-encoder. We benchmark our results on the publicly available MIT-PICS dataset and show the effectiveness of our approach by predicting Bradycardia events in infants 1.3 minutes ahead of time with a 68% AUC score on average, with no label supervision. The results of our study suggest the viability of our approach for the early detection of health events in other applications as well.
With the rise of unsupervised deep learning models, especially auto-encoders @cite_19 , and their great performance in other domains such as image recognition @cite_1 , their application has recently emerged in wireless health for the detection of anomalies in health signals such as the ECG. @cite_7 @cite_14 are among the studies that have employed auto-encoders on ECG data to distinguish anomalous segments from healthy ones. To this end, the reconstruction error of an auto-encoder trained on normal data is tracked to find sudden jumps, motivated by the idea that such a model cannot accurately reconstruct anomalous intervals of data. Auto-encoders have successfully replaced prior approaches such as classifiers @cite_6 , which require large annotated datasets, as well as statistical clustering models @cite_4 and future-value prediction models @cite_12 , neither of which is easily generalizable to other applications.
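The reconstruction-error scheme described above boils down to a few lines. The sketch below uses a toy fully connected auto-encoder over fixed-length signal segments; the architecture, segment length, and thresholding rule are placeholder assumptions rather than the models used in the cited studies.

```python
import torch
import torch.nn as nn

class SegmentAE(nn.Module):
    """Toy auto-encoder over fixed-length signal segments."""
    def __init__(self, seg_len=250):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(seg_len, 64), nn.ReLU(), nn.Linear(64, 16))
        self.dec = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, seg_len))

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_scores(model, segments):
    # segments: (N, seg_len) windows of the health signal.
    # Train the model on normal data only; then high reconstruction error
    # (score above a threshold fit on normal data) flags an anomaly.
    with torch.no_grad():
        recon = model(segments)
        return ((recon - segments) ** 2).mean(dim=1)
```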
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_1", "@cite_6", "@cite_19", "@cite_12" ], "mid": [ "", "2142960677", "2474046725", "2469469599", "2122646361", "2617585083", "1999840231" ], "abstract": [ "", "Since security threats to WSNs are increasingly being diversified and deliberate, prevention-based techniques alone can no longer provide WSNs with adequate security. However, detection-based techniques might be effective in collaboration with prevention-based techniques for securing WSNs. As a significant branch of detection-based techniques, the research of anomaly detection in wired networks and wireless ad hoc networks is already quite mature, but such solutions can be rarely applied to WSNs without any change, because WSNs are characterized by constrained resources, such as limited energy, weak computation capability, poor memory, short communication range, etc. The development of anomaly detection techniques suitable for WSNs is therefore regarded as an essential research area, which will enable WSNs to be much more secure and reliable. In this survey paper, a few of the key design principles relating to the development of anomaly detection techniques in WSNs are discussed in particular. Then, the state-of-the-art techniques of anomaly detection in WSNs are systematically introduced, according to WSNs' architectures (Hierarchical Flat) and detection technique categories (statistical techniques, rule based, data mining, computational intelligence, game theory, graph based, and hybrid, etc.). The analyses and comparisons of the approaches that belong to a similar technique category are represented technically, followed by a brief discussion towards the potential research areas in the near future and conclusion.", "Mechanical devices such as engines, vehicles, aircrafts, etc., are typically instrumented with numerous sensors to capture the behavior and health of the machine. However, there are often external factors or variables which are not captured by sensors leading to time-series which are inherently unpredictable. For instance, manual controls and or unmonitored environmental conditions or load may lead to inherently unpredictable time-series. Detecting anomalies in such scenarios becomes challenging using standard approaches based on mathematical models that rely on stationarity, or prediction models that utilize prediction errors to detect anomalies. We propose a Long Short Term Memory Networks based Encoder-Decoder scheme for Anomaly Detection (EncDec-AD) that learns to reconstruct 'normal' time-series behavior, and thereafter uses reconstruction error to detect anomalies. We experiment with three publicly available quasi predictable time-series datasets: power demand, space shuttle, and ECG, and two real-world engine datasets with both predictive and unpredictable behavior. We show that EncDec-AD is robust and can detect anomalies from predictable, unpredictable, periodic, aperiodic, and quasi-periodic time-series. Further, we show that EncDec-AD is able to detect anomalies from short time-series (length as small as 30) as well as long time-series (length as large as 500).", "In this invited paper, my overview material on the same topic as presented in the plenary overview session of APSIPA-2011 and the tutorial material presented in the same conference (Deng, 2011) are expanded and updated to include more recent developments in deep learning. The previous and the updated materials cover both theory and applications, and analyze its future directions. 
The goal of this tutorial survey is to introduce the emerging area of deep learning or hierarchical learning to the APSIPA community. Deep learning refers to a class of machine learning techniques, developed largely since 2006, where many stages of nonlinear information processing in hierarchical architectures are exploited for pattern classification and for feature learning. In the more recent literature, it is also connected to representation learning, which involves a hierarchy of features or concepts where higher-level concepts are defined from lower-level ones and where the same lower-level concepts help to define higher-level ones. In this tutorial, a brief history of deep learning research is discussed first. Then, a classificatory scheme is developed to analyze and summarize major work reported in the deep learning literature. Using this scheme, I provide a taxonomy-oriented survey on the existing deep architectures and algorithms in the literature, and categorize them into three classes: generative, discriminative, and hybrid. Three representative deep architectures --deep auto-encoder, deep stacking network, and deep neural network (pre-trained with deep belief network) --one in each of the three classes, are presented in more detail. Next, selected applications of deep learning are reviewed in broad areas of signal and information processing including audio speech, image vision, multimodality, language modeling, natural language processing, and information retrieval. Finally, future directions of deep learning are discussed and analyzed.", "Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with.", "Autoencoders play a fundamental role in unsupervised learning and in deep architectures for transfer learning and other tasks. In spite of their fundamental role, only linear autoencoders over the real numbers have been solved analytically. Here we present a general mathematical framework for the study of both linear and non-linear autoencoders. 
The framework allows one to derive an analytical treatment for the most non-linear autoencoder, the Boolean autoencoder. Learning in the Boolean autoencoder is equivalent to a clustering problem that can be solved in polynomial time when the number of clusters is small and becomes NP complete when the number of clusters is large. The framework sheds light on the different kinds of autoencoders, their learning complexity, their horizontal and vertical composability in deep architectures, their critical points, and their fundamental connections to clustering, Hebbian learning, and information theory.", "Wireless Sensor Networks (WSN) are vulnerable to various sensor faults and faulty measurements. This vulnerability hinders efficient and timely response in various WSN applications, such as healthcare. For example, faulty measurements can create false alarms which may require unnecessary intervention from healthcare personnel. Therefore, an approach to differentiate between real medical conditions and false alarms will improve remote patient monitoring systems and quality of healthcare service afforded by WSN. In this paper, a novel approach is proposed to detect sensor anomaly by analyzing collected physiological data from medical sensors. The objective of this method is to effectively distinguish false alarms from true alarms. It predicts a sensor value from historic values and compares it with the actual sensed value for a particular instance. The difference is compared against a threshold value, which is dynamically adjusted, to ascertain whether the sensor value is anomalous. The proposed approach has been applied to real healthcare datasets and compared with existing approaches. Experimental results demonstrate the effectiveness of the proposed system, providing high Detection Rate (DR) and low False Positive Rate (FPR)." ] }
1901.11168
2972498781
The emergence of continuous health monitoring and the availability of an enormous amount of time series data has provided a great opportunity for the advancement of personal health tracking. In recent years, unsupervised learning methods have drawn special attention of researchers to tackle the sparse annotation of health data and real-time detection of anomalies has been a central problem of interest. However, one problem that has not been well addressed before is the early prediction of forthcoming negative health events. Early signs of an event can introduce subtle and gradual changes in the health signal prior to its onset, detection of which can be invaluable in effective prevention. In this study, we first demonstrate our observations on the shortcoming of widely adopted anomaly detection methods in uncovering the changes prior to a negative health event. We then propose a framework which relies on online clustering of signal segment representations which are automatically learned by a specially designed LSTM auto-encoder. We benchmark our results on the publicly available MIT-PICS dataset and show the effectiveness of our approach by predicting Bradycardia events in infants 1.3 minutes ahead of time with 68 AUC score on average, with no label supervision. Results of our study can indicate the viability of our approach in the early detection of health events in other applications as well.
LSTM auto-encoders @cite_0 were later introduced for learning representations of videos and improved feature extraction by capturing the temporal structure of the signal. They have since been used in time-series analysis as well @cite_17 . Moreover, two recent studies have shown improved performance of auto-encoders in more complex anomaly detection settings by utilizing the encoded representations for offline clustering of anomalies @cite_5 or for signal change-point detection by comparing neighboring segment representations @cite_13 . Although these studies pursue different goals, we employ their findings in building our model.
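A rough sketch of combining learned segment representations with online clustering is given below: an LSTM encoder produces one embedding per segment, and sklearn's MiniBatchKMeans is updated incrementally as segments stream in. The encoder (a plain LSTM whose last hidden state is used as the embedding), the cluster count, and the streaming loop are assumptions for illustration and are not the exact components of the cited studies.

```python
import torch
import torch.nn as nn
from sklearn.cluster import MiniBatchKMeans

class LSTMEncoder(nn.Module):
    def __init__(self, n_features=1, latent=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, latent, batch_first=True)

    def forward(self, x):            # x: (batch, seg_len, n_features)
        _, (h, _) = self.lstm(x)
        return h[-1]                 # (batch, latent) segment embeddings

encoder = LSTMEncoder()              # assumed pre-trained, e.g., as the
                                     # encoder half of an LSTM auto-encoder
clusterer = MiniBatchKMeans(n_clusters=4, random_state=0)

for batch in []:                     # placeholder: iterate over segment batches
    with torch.no_grad():
        z = encoder(batch).numpy()
    clusterer.partial_fit(z)         # online cluster update
    labels = clusterer.predict(z)    # track cluster shifts preceding events
```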
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_13", "@cite_17" ], "mid": [ "2116435618", "2785745290", "2782961382", "2902455138" ], "abstract": [ "We use Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.", "Clustering is essential to many tasks in pattern recognition and computer vision. With the advent of deep learning, there is an increasing interest in learning deep unsupervised representations for clustering analysis. Many works on this domain rely on variants of auto-encoders and use the encoder outputs as representations features for clustering. In this paper, we show that an l2 normalization constraint on these representations during auto-encoder training, makes the representations more separable and compact in the Euclidean space after training. This greatly improves the clustering accuracy when k-means clustering is employed on the representations. We also propose a clustering based unsupervised anomaly detection method using l2 normalized deep auto-encoder representations. We show the effect of l2 normalization on anomaly detection accuracy. We further show that the proposed anomaly detection method greatly improves accuracy compared to previously proposed deep methods such as reconstruction error based anomaly detection.", "Internet of things (IoT) applications have become increasingly popular in recent years, with applications ranging from building energy monitoring to personal health tracking and activity recognition. In order to leverage these data, automatic knowledge extraction - whereby we map from observations to interpretable states and transitions - must be done at scale. As such, we have seen many recent IoT data sets include annotations with a human expert specifying states, recorded as a set of boundaries and associated labels in a data sequence. These data can be used to build automatic labeling algorithms that produce labels as an expert would. Here, we refer to human-specified boundaries as breakpoints. Traditional changepoint detection methods only look for statistically-detectable boundaries that are defined as abrupt variations in the generative parameters of a data sequence. However, we observe that breakpoints occur on more subtle boundaries that are non-trivial to detect with these statistical methods. 
In this work, we propose a new unsupervised approach, based on deep learning, that outperforms existing techniques and learns the more subtle, breakpoint boundaries with a high accuracy. Through extensive experiments on various real-world data sets - including human-activity sensing data, speech signals, and electroencephalogram (EEG) activity traces - we demonstrate the effectiveness of our algorithm for practical applications. Furthermore, we show that our approach achieves significantly better performance than previous methods.", "" ] }
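To make the LSTM auto-encoder representation idea concrete, the following is a minimal PyTorch sketch; the segment length, hidden size, and training loop are illustrative assumptions and do not reproduce the configuration of @cite_0 , @cite_17 , or the paper above.

```python
import torch
import torch.nn as nn

class LSTMAutoEncoder(nn.Module):
    """Encode a fixed-length signal segment into one vector and reconstruct the
    segment from that vector (a seq2seq auto-encoder in the spirit of @cite_0)."""
    def __init__(self, n_features=1, hidden_size=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, n_features)

    def forward(self, x):                     # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)
        z = h[-1]                             # segment representation, (batch, hidden)
        dec_in = z.unsqueeze(1).repeat(1, x.size(1), 1)   # feed the code at every step
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out), z

# Toy training loop on random segments; in practice x would hold windows of the
# physiological signal, and z would be passed to the online clustering stage.
model = LSTMAutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 50, 1)                    # 64 segments of 50 samples each
for _ in range(5):
    recon, z = model(x)
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The reconstruction loss trains the encoder without labels, which is what allows the representation z to be clustered online and monitored for drift before an event.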
1901.11168
2972498781
The emergence of continuous health monitoring and the availability of an enormous amount of time series data has provided a great opportunity for the advancement of personal health tracking. In recent years, unsupervised learning methods have drawn special attention of researchers to tackle the sparse annotation of health data and real-time detection of anomalies has been a central problem of interest. However, one problem that has not been well addressed before is the early prediction of forthcoming negative health events. Early signs of an event can introduce subtle and gradual changes in the health signal prior to its onset, detection of which can be invaluable in effective prevention. In this study, we first demonstrate our observations on the shortcoming of widely adopted anomaly detection methods in uncovering the changes prior to a negative health event. We then propose a framework which relies on online clustering of signal segment representations which are automatically learned by a specially designed LSTM auto-encoder. We benchmark our results on the publicly available MIT-PICS dataset and show the effectiveness of our approach by predicting Bradycardia events in infants 1.3 minutes ahead of time with 68 AUC score on average, with no label supervision. Results of our study can indicate the viability of our approach in the early detection of health events in other applications as well.
Prediction of Bradycardia in infants on the PICS dataset was previously approached by the publishers of the dataset using statistical methods @cite_2 . They used a point process analysis to capture changes in the variance and mean of signal segments before a Bradycardia event. Although this study demonstrates feasibility and achieves reasonable accuracy, the approach is supervised, hand-engineered, and relies heavily on observing multiple Bradycardia onsets in each infant, which is not always possible in a real-world setting. In contrast, our approach requires only the straightforward collection of normal signals from individuals and detects changes in an unsupervised and automatic manner (a simplified sketch of the variance-precursor observation follows this record).
{ "cite_N": [ "@cite_2" ], "mid": [ "2557126235" ], "abstract": [ "Objective: Episodes of bradycardia are common and recur sporadically in preterm infants, posing a threat to the developing brain and other vital organs. We hypothesize that bradycardias are a result of transient temporal destabilization of the cardiac autonomic control system and that fluctuations in the heart rate signal might contain information that precedes bradycardia. We investigate infant heart rate fluctuations with a novel application of point process theory. Methods: In ten preterm infants, we estimate instantaneous linear measures of the heart rate signal, use these measures to extract statistical features of bradycardia, and propose a simplistic framework for prediction of bradycardia. Results: We present the performance of a prediction algorithm using instantaneous linear measures (mean area under the curve = 0.79 ± 0.018) for over 440 bradycardia events. The algorithm achieves an average forecast time of 116 s prior to bradycardia onset (FPR = 0.15). Our analysis reveals that increased variance in the heart rate signal is a precursor of severe bradycardia. This increase in variance is associated with an increase in power from low content dynamics in the LF band (0.04–0.2 Hz) and lower multiscale entropy values prior to bradycardia. Conclusion: Point process analysis of the heartbeat time series reveals instantaneous measures that can be used to predict infant bradycardia prior to onset. Significance: Our findings are relevant to risk stratification, predictive monitoring, and implementation of preventative strategies for reducing morbidity and mortality associated with bradycardia in neonatal intensive care units." ] }
1901.11220
2912240042
Initial access (IA) is a fundamental procedure in cellular systems where user equipment (UE) detects base station (BS) and acquires synchronization. Due to the necessity of using antenna arrays for IA in millimeter-wave (mmW) systems, BS simultaneously performs beam training to acquire angular channel state information. The state-of-the-art directional IA (DIA) uses a set of narrow sounding beams in IA, where different beam pairs are sequentially measured, and the best candidate is determined. However, the directional beam training accuracy depends on scanning beam angular resolution, and consequently its improvement requires additional dedicated radio resources, access latency, and overhead. To remedy the problem of access latency and overhead in DIA, this paper proposes to use quasi-omni pseudorandom sounding beams for IA, and develops an algorithm for joint initial access and fine resolution initial beam training without requiring additional radio resources. It comprehensively models realistic timing and frequency synchronization errors encountered in IA. We provide the analysis of the proposed algorithm's miss detection rate under timing synchronization errors, and we further derive Cramer–Rao lower bound of angular estimation under frequency offset, considering the 5G-NR compliant IA procedure. To accommodate the ever increasing bandwidth for beam training in standard evolution beyond 5G, we design the beam squint robust algorithm. For realistic performance evaluation under mmW channels, we use QuaDRiGa simulator with mmMAGIC model at 28 GHz to show that the proposed approach is advantageous to DIA. The proposed algorithm offers orders of magnitude access latency saving compared to DIA, when the same discovery, post training SNR, and overhead performance are targeted. This conclusion holds true in various propagation environments and three-dimensional locations of a mmW pico-cell with up to 140 m radius. Furthermore, our results demonstrate that the proposed beam squint robust algorithm is able to retain unaffected performance with increased beam training bandwidth.
Alternative approaches to beam training are based on parametric channel estimation @cite_31 @cite_24 @cite_32 @cite_18 @cite_25 @cite_17 @cite_42 @cite_45 . Exploiting the sparse scattering nature of mmW channels, compressive sensing (CS) approaches have been considered to effectively estimate channel parameters from channel observations obtained via various sounding beams. Works @cite_31 @cite_24 proposed CS-based narrowband BF training with pseudorandom sounding beamformers in the downlink, and @cite_32 extended this approach to wideband channels. Other related works include channel covariance estimation @cite_18 @cite_25 @cite_17 , which requires periodic channel observations, and UE-centric uplink training @cite_42 @cite_45 . It is worth noting that all recent works focus on channel estimation alone while assuming perfect cell discovery and synchronization. The 5G-NR frame structure that supports IA is rarely considered, and the feasibility of joint initial access and CS-based beam training has not been investigated (a generic sketch of CS-based sparse channel recovery follows this record).
{ "cite_N": [ "@cite_18", "@cite_42", "@cite_32", "@cite_24", "@cite_45", "@cite_31", "@cite_25", "@cite_17" ], "mid": [ "2401839749", "2792650514", "2964003927", "1483249443", "2591812953", "624827785", "2964311703", "2963304087" ], "abstract": [ "In millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems, channel estimation is a challenging task in terms of acquiring the instantaneous channel state information (CSI), because both the estimation complexity and the overhead required for pilot symbols and feedback grow drastically as the number of antennas increases. Alternatively, some channel statistics in the form of partial CSI are sufficient for transceiver optimization and noncoherent detection in time-varying wireless environments. To obtain such useful statistical information accurately and efficiently, this paper proposes a new channel statistics estimation method using the compressive covariance sensing technique, which directly estimates the desired second-order statistics of the channel while bypassing the intermediate recovery of the instantaneous channel matrix itself. A diagonal-search orthogonal matching pursuit (DS-OMP) algorithm is developed for fast channel estimation. The proposed algorithm has low computational complexity and reduced overhead in training and feedback, owing to its proper utilization of the joint sparsity structure of the channel covariance matrix.", "Configuring the hybrid precoders and combiners in a millimeter wave multiuser multiple-input multiple-output system is challenging in frequency selective channels. In this paper, we develop a system that uses compressive estimation on the uplink to configure precoders and combiners for the downlink. In the first step, the base station (BS) simultaneously estimates the channels from all the mobile stations on each subcarrier. To reduce the number of measurements required, compressed sensing techniques are developed that exploit common support on the different subcarriers. In the second step, exploiting reciprocity and the channel estimates the BS designs hybrid precoders and combiners. Two algorithms are developed for this purpose, with different performance and complexity tradeoffs: First, a factorization of the purely digital solution; and second, an iterative hybrid design. Extensive numerical experiments evaluate the proposed solutions comparing to the state-of-the-art strategies, and illustrating design tradeoffs in overhead, complexity, and performance.", "Channel estimation is useful in millimeter wave (mm-wave) MIMO communication systems. Channel state information allows optimized designs of precoders and combiners under different metrics, such as mutual information or signal-to-interference noise ratio. At mm-wave, MIMO precoders and combiners are usually hybrid, since this architecture provides a means to trade-off power consumption and achievable rate. Channel estimation is challenging when using these architectures, however, since there is no direct access to the outputs of the different antenna elements in the array. The MIMO channel can only be observed through the analog combining network, which acts as a compression stage of the received signal. Most of the prior work on channel estimation for hybrid architectures assumes a frequency-flat mm-wave channel model. In this paper, we consider a frequency-selective mm-wave channel and propose compressed sensing-based strategies to estimate the channel in the frequency domain. 
We evaluate different algorithms and compute their complexity to expose tradeoffs in complexity overhead performance as compared with those of previous approaches.", "Millimeter wave (mmWave) systems will likely employ directional beamforming with large antenna arrays at both the transmitters and receivers. Acquiring channel knowledge to design these beamformers, however, is challenging due to the large antenna arrays and small signal-to-noise ratio before beamforming. In this paper, we propose and evaluate a downlink system operation for multi-user mmWave systems based on compressed sensing channel estimation and conjugate analog beamforming. Adopting the achievable sum-rate as a performance metric, we show how many compressed sensing measurements are needed to approach the perfect channel knowledge performance. The results illustrate that the proposed algorithm requires an order of magnitude less training overhead compared with traditional lower-frequency solutions, while employing mmWave-suitable hardware. They also show that the number of measurements need to be optimized to handle the trade-off between the channel estimate quality and the training overhead.", "In this paper, we develop a low-complexity channel estimation for hybrid millimeter wave (mmWave) systems, where the number of radio frequency (RF) chains is much less than the number of antennas equipped at each transceiver. The proposed mmWave channel estimation algorithm first exploits multiple frequency tones to estimate the strongest angle-of-arrivals (AoAs) at both base station (BS) and user sides for the design of analog beamforming matrices. Then, all the users transmit orthogonal pilot symbols to the BS along the directions of the estimated strongest AoAs in order to estimate the channel. The estimated channel will be adopted to design the digital zero-forcing (ZF) precoder at the BS for the multi-user downlink transmission. The proposed channel estimation algorithm is applicable to both the non-sparse and sparse mmWave channel environments. Furthermore, we derive a tight achievable rate upper bound of the digital ZF precoding with the proposed channel estimation algorithm scheme. Our analytical and simulation results show that the proposed scheme obtains a considerable achievable rate of fully digital systems, where the number of RF chains equipped at each transceiver is equal to the number of antennas. Besides, considering the effect of various types of errors, i.e., random phase errors, transceiver analog beamforming errors, and equivalent channel estimation errors, we derive a closed-form approximation for the achievable rate of the considered scheme. We illustrate the robustness of the proposed channel estimation and multi-user downlink precoding scheme against the system imperfection.", "We propose and investigate a compressive architecture for estimation and tracking of sparse spatial channels in millimeter (mm) wave picocellular networks. The base stations are equipped with antenna arrays with a large number of elements (which can fit within compact form factors because of the small carrier wavelength) and employ radio frequency (RF) beamforming, so that standard least squares adaptation techniques (which require access to individual antenna elements) are not applicable. 
We focus on the downlink, and show that “compressive beacons,” transmitted using pseudorandom phase settings at the base station array, and compressively processed using pseudorandom phase settings at the mobile array, provide information sufficient for accurate estimation of the two-dimensional (2D) spatial frequencies associated with the directions of departure of the dominant rays from the base station, and the associated complex gains. This compressive approach is compatible with coarse phase-only control, and is based on a near-optimal sequential algorithm for frequency estimation which approaches the Cramer Rao Lower Bound. The algorithm exploits the geometric continuity of the channel across successive beaconing intervals to reduce the overhead to less than 1 even for very large ( @math ) arrays. Compressive beaconing is essentially omnidirectional, and hence does not enjoy the SNR and spatial reuse benefits of beamforming obtained during data transmission. We therefore discuss system level design considerations for ensuring that the beacon SNR is sufficient for accurate channel estimation, and that inter-cell beacon interference is controlled by an appropriate reuse scheme.", "Millimeter-wave (mm-wave) frequency bands provide an opportunity for much wider channel bandwidth compared with the traditional sub-6-GHz band. Communication at mm-waves is, however, quite challenging due to the severe propagation pathloss incurred by conventional isotropic antennas. To cope with this problem, directional beamforming both at the base station (BS) side and at the user equipment (UE) side is necessary in order to establish a strong path conveying enough signal power. Finding such beamforming directions is referred to as beam alignment (BA). This paper presents a new scheme for efficient BA. Our scheme finds a strong propagation path identified by an angle-of-arrival (AoA) and angle-of-departure (AoD) pair, by exploring the AoA–AoD domain through pseudo-random multi-finger beam patterns and constructing an estimate of the resulting second-order statistics (namely, the average received power for each pseudo-random beam configuration). The resulting under-determined system of equations is efficiently solved using non-negative constrained least-squares, yielding naturally a sparse non-negative vector solution whose maximum component identifies the optimal path. As a result, our scheme is highly robust to variations of the channel time dynamics compared with alternative concurrent approaches based on the estimation of the instantaneous channel coefficients, rather than of their second-order statistics. In the proposed scheme, the BS probes the channel in the downlink and trains simultaneously an arbitrarily large number of UEs. Thus, “beam refinement,” with multiple interactive rounds of downlink uplink transmissions, is not needed. This results in a scalable BA protocol, where the protocol overhead is virtually independent of the number of UEs, since all the UEs run the BA procedure at the same time. Extensive simulation results illustrate that our approach is superior to the state-of-the-art BA schemes proposed in the literature in terms of training overhead in multi-user scenarios and robustness to variations in the channel dynamics.", "Spatial channel covariance information can replace full knowledge of the entire channel matrix for designing analog precoders in hybrid multiple-input-multiple-output (MIMO) architecture. 
Spatial channel covariance estimation, however, is challenging for the hybrid MIMO architecture because the estimator operating at baseband can only obtain a lower dimensional pre-combined signal through fewer radio frequency chains than antennas. In this paper, we propose two approaches to covariance estimation based on compressive sensing techniques. One is to apply a time-varying sensing matrix, and the other is to exploit the prior knowledge that the covariance matrix is Hermitian. We present the rationale behind the two ideas and validate the superiority of the proposed methods by theoretical analysis and numerical simulations. We conclude the paper by extending the proposed algorithms from narrowband MIMO systems with a single receive antenna to wideband systems with multiple receive antennas." ] }
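The CS-based training works cited above share a common core: recover a sparse angular channel from a small number of pseudorandom sounding measurements. A generic sketch of that recovery step, using orthogonal matching pursuit (OMP) over a uniform-linear-array steering dictionary, is given below; the array size, number of sounding beams, single-path channel, and perfect synchronization are simplifying assumptions and do not match the exact formulation of any cited paper.

```python
import numpy as np

def steering_vector(n_ant, theta):
    """Half-wavelength ULA steering vector for angle theta (radians)."""
    return np.exp(1j * np.pi * np.arange(n_ant) * np.sin(theta)) / np.sqrt(n_ant)

def omp(y, A, n_paths):
    """Recover an n_paths-sparse x from y ~= A @ x by orthogonal matching pursuit."""
    residual, support = y.copy(), []
    for _ in range(n_paths):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        A_s = A[:, support]
        x_s, *_ = np.linalg.lstsq(A_s, y, rcond=None)
        residual = y - A_s @ x_s
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = x_s
    return x

# Toy setup: 32-antenna BS, one path, 20 pseudorandom quasi-omni sounding beams.
n_ant, n_meas = 32, 20
grid = np.linspace(-np.pi / 2, np.pi / 2, 181)                        # 1-degree angle grid
D = np.stack([steering_vector(n_ant, th) for th in grid], axis=1)     # angle dictionary
W = np.exp(1j * 2 * np.pi * np.random.rand(n_meas, n_ant)) / np.sqrt(n_ant)  # sounding beams
h = 0.9 * steering_vector(n_ant, np.deg2rad(17.0))                    # true single-path channel
y = W @ h + 0.01 * (np.random.randn(n_meas) + 1j * np.random.randn(n_meas))
x_hat = omp(y, W @ D, n_paths=1)
print("estimated AoD:", np.rad2deg(grid[np.argmax(np.abs(x_hat))]), "deg")
```

The paper's point is that this recovery must also work with asynchronous, CFO-impaired IA signals, which the sketch above deliberately ignores.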
1901.11220
2912240042
Initial access (IA) is a fundamental procedure in cellular systems where user equipment (UE) detects base station (BS) and acquires synchronization. Due to the necessity of using antenna arrays for IA in millimeter-wave (mmW) systems, BS simultaneously performs beam training to acquire angular channel state information. The state-of-the-art directional IA (DIA) uses a set of narrow sounding beams in IA, where different beam pairs are sequentially measured, and the best candidate is determined. However, the directional beam training accuracy depends on scanning beam angular resolution, and consequently its improvement requires additional dedicated radio resources, access latency, and overhead. To remedy the problem of access latency and overhead in DIA, this paper proposes to use quasi-omni pseudorandom sounding beams for IA, and develops an algorithm for joint initial access and fine resolution initial beam training without requiring additional radio resources. It comprehensively models realistic timing and frequency synchronization errors encountered in IA. We provide the analysis of the proposed algorithm's miss detection rate under timing synchronization errors, and we further derive Cramer–Rao lower bound of angular estimation under frequency offset, considering the 5G-NR compliant IA procedure. To accommodate the ever increasing bandwidth for beam training in standard evolution beyond 5G, we design the beam squint robust algorithm. For realistic performance evaluation under mmW channels, we use QuaDRiGa simulator with mmMAGIC model at 28 GHz to show that the proposed approach is advantageous to DIA. The proposed algorithm offers orders of magnitude access latency saving compared to DIA, when the same discovery, post training SNR, and overhead performance are targeted. This conclusion holds true in various propagation environments and three-dimensional locations of a mmW pico-cell with up to 140 m radius. Furthermore, our results demonstrate that the proposed beam squint robust algorithm is able to retain unaffected performance with increased beam training bandwidth.
There are also recent works that consider practical aspects of IA. For example, frequency-offset-robust algorithms for narrowband mmW beam training are reported in @cite_6 @cite_40 @cite_29 . Several hardware prototypes take the practical approach of using received signal strength (RSS) in CS-based beam training. Channel estimation without phase measurements is a challenging problem, which was solved via novel signal processing algorithms based on RSS matching pursuit @cite_10 , hash tables @cite_5 , and sparse phase retrieval @cite_28 . Note that phase-free measurements were associated with a particular testbed, and this constraint does not necessarily apply to mmW systems in general. In summary, while IA and beam training algorithms have been extensively studied in the literature, there is a lack of understanding of the theoretical limits and of signal processing algorithms that jointly achieve cell discovery and accurate BF training using asynchronous IA signals in mmW frequency-selective channels.
{ "cite_N": [ "@cite_28", "@cite_29", "@cite_6", "@cite_40", "@cite_5", "@cite_10" ], "mid": [ "2784857961", "2793871832", "2611243941", "2740506518", "2876820033", "2593251263" ], "abstract": [ "Millimeter (mm) wave massive MIMO has the potential for delivering orders of magnitude increases in mobile data rates, with compact antenna arrays providing narrow steerable beams for unprecedented levels of spatial reuse. A fundamental technical bottleneck, however, is rapid spatial channel estimation and beam adaptation in the face of mobility and blockage. Recently proposed compressive techniques which exploit the sparsity of mm wave channels are a promising approach to this problem, with overhead scaling linearly with the number of dominant paths and logarithmically with the number of array elements. Further, they can be implemented with RF beamforming with low-precision phase control. However, these methods make implicit assumptions on long-term phase coherence that are not satisfied by existing hardware. In this paper, we propose and evaluate a noncoherent compressive channel estimation technique which can estimate a sparse spatial channel based on received signal strength (RSS) alone, and is compatible with off-the-shelf hardware. The approach is based on cascading phase retrieval (i.e., recovery of complex-valued measurements from RSS measurements, up to a scalar multiple) with coherent compressive estimation. While a conventional cascade scheme would multiply two measurement matrices to obtain an overall matrix whose entries are in a continuum, a key novelty in our scheme is that we constrain the overall measurement matrix to be implementable using coarsely quantized pseudorandom phases, employing a virtual decomposition of the matrix into a product of measurement matrices for phase retrieval and compressive estimation. Theoretical and simulation results show that our noncoherent method scales almost as well with array size as its coherent counterpart, thus inheriting the scalability and low overhead of the latter.", "We propose an algorithm for mmWave channel estimation in the presence of a non-negligible carrier frequency offset (CFO) impairment. The algorithm exploits inherent sparsity of the mmWave channel in the angle of departure (AoD) and arrival (AoA) domain, employs sparse channel sensing protocol, and performs blind CFO compensation and estimation of channel paths in the angular domain. The simulation tests show that the proposed algorithm considerably outperforms the conventional orthogonal matching pursuit (OMP)-based mmWave channel estimation algorithm under different CFO and signal-to-noise ratio (SNR) regimes. Aside from the mmWave channel estimation problem, the algorithm is more generally applicable for greedy sparse recovery problems associated with estimating a sparse vector from measurements impaired by frequency offset.", "The abundant spectrum at millimeter-wave (mmWave) has the potential to greatly increase the capacity of 5G cellular systems. However, to overcome the high pathloss in the mmWave frequencies, beamforming with large antenna arrays is required at both the base station and user equipments for sufficient link budget. This feature is a challenge for beamforming training during initial access due to low SNR and poor synchronization. A recently developed compressive sensing (CS) based training algorithm exploits channel sparsity but it is vulnerable to phase error from poor synchronization. 
We propose a novel CS-based algorithm that tracks and compensates frequency offset and phase noise. Simulation results show that the proposed method improves achievable rate by 10 times compared with existing CS-based method during initial beamforming training.", "Initial access at millimeter wave frequencies is a challenging problem due to hardware non-idealities and low SNR measurements prior to beamforming. Prior work has exploited the observation that mmWave MIMO channels are sparse in the spatial angle domain and has used compressed sensing based algorithms for channel estimation. Most of them, however, ignore hardware impairments like carrier frequency offset and phase noise, and fail to perform well when such impairments are considered. In this paper, we develop a compressive channel estimation algorithm for narrowband mmWave systems, which is robust to such non idealities. We address this problem by constructing a tensor that models both the mmWave channel and CFO, and estimate the tensor while still exploiting the sparsity of the mmWave channel. Simulation results show that under the same settings, our method performs better than comparable algorithms that are robust to phase errors.", "There is much interest in integrating millimeter wave radios (mmWave) into wireless LANs and 5G cellular networks to benefit from their multi-GHz of available spectrum. Yet, unlike existing technologies, e.g., WiFi, mmWave radios require highly directional antennas. Since the antennas have pencil-beams, the transmitter and receiver need to align their beams before they can communicate. Existing systems scan the space to find the best alignment. Such a process has been shown to introduce up to seconds of delay, and is unsuitable for wireless networks where an access point has to quickly switch between users and accommodate mobile clients. This paper presents Agile-Link, a new protocol that can find the best mmWave beam alignment without scanning the space. Given all possible directions for setting the antenna beam, Agile-Link provably finds the optimal direction in logarithmic number of measurements. Further, Agile-Link works within the existing 802.11ad standard for mmWave LAN, and can support both clients and access points. We have implemented Agile-Link in a mmWave radio and evaluated it empirically. Our results show that it reduces beam alignment delay by orders of magnitude. In particular, for highly directional mmWave devices operating under 802.11ad, the delay drops from over a second to 2.5 ms.", "Millimeter (mm) wave picocellular networks have the potential for providing the 1000X capacity increase required to keep up with the explosive growth of mobile data. However, maintaining beams towards mobile users and adapting to frequent blockage, requires efficient, dynamic path tracking algorithms. In this paper, we develop and experimentally demonstrate a novel noncoherent compressive strategy for this problem, and compare it with conventional hierarchical and exhaustive beam scanning. To the best of our knowledge, this is the first experimental demonstration of practical, scalable path estimation for mmWave 60GHz picocells. Our results indicate the feasibility of sub-second path tracking with low overhead on today's mmWave hardware, and open up a rich space for design of 5G mmWave networks." ] }
1901.11188
2952085755
The vulnerability of neural networks under adversarial attacks has raised serious concerns and motivated extensive research. It has been shown that both neural networks and adversarial attacks against them can be sensitive to input transformations such as linear translation and rotation, and that human vision, which is robust against adversarial attacks, is invariant to natural input transformations. Based on these, this paper tests the hypothesis that model robustness can be further improved when it is adversarially trained against transformed attacks and transformation-invariant attacks. Experiments on MNIST, CIFAR-10, and restricted ImageNet show that while transformations of attacks alone do not affect robustness, transformation-invariant attacks can improve model robustness by 2.5 on MNIST, 3.7 on CIFAR-10, and 1.1 on restricted ImageNet. We discuss the intuition behind this phenomenon.
@cite_25 proposed using random transformations to pre-process the input images to improve model robustness. It was later shown, however, that this approach creates a gradient masking effect and can be broken by robust attacks @cite_42 . Unlike @cite_25 , we consider the transformation as part of our model during the adversarial training process (a sketch of attacking through a differentiable transformation follows this record).
{ "cite_N": [ "@cite_42", "@cite_25" ], "mid": [ "2787708942", "2765384636" ], "abstract": [ "We identify obfuscated gradients as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat optimization-based attacks, we find defenses relying on this effect can be circumvented. For each of the three types of obfuscated gradients we discover, we describe indicators of defenses exhibiting this effect and develop attack techniques to overcome it. In a case study, examining all defenses accepted to ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 8 defenses relying on obfuscated gradients. Using our new attack techniques, we successfully circumvent all 7 of them.", "This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system. Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image quilting before feeding the image to a convolutional network classifier. Our experiments on ImageNet show that total variance minimization and image quilting are very effective defenses in practice, in particular, when the network is trained on transformed images. The strength of those defenses lies in their non-differentiable nature and their inherent randomness, which makes it difficult for an adversary to circumvent the defenses. Our best defense eliminates 60 of strong white-box and 90 of strong black-box attacks by a variety of major attack methods" ] }
1901.11188
2952085755
The vulnerability of neural networks under adversarial attacks has raised serious concerns and motivated extensive research. It has been shown that both neural networks and adversarial attacks against them can be sensitive to input transformations such as linear translation and rotation, and that human vision, which is robust against adversarial attacks, is invariant to natural input transformations. Based on these, this paper tests the hypothesis that model robustness can be further improved when it is adversarially trained against transformed attacks and transformation-invariant attacks. Experiments on MNIST, CIFAR-10, and restricted ImageNet show that while transformations of attacks alone do not affect robustness, transformation-invariant attacks can improve model robustness by 2.5 on MNIST, 3.7 on CIFAR-10, and 1.1 on restricted ImageNet. We discuss the intuition behind this phenomenon.
Attacks from an ensemble of black-box models have been used to effectively avoid gradient masking in one-step adversarial training @cite_28 . While our model also uses an ensemble of attacks, ours are white-box and multi-step and, importantly, do not cause gradient masking.
{ "cite_N": [ "@cite_28" ], "mid": [ "2620038827" ], "abstract": [ "Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks." ] }
1901.11167
2952180917
In the literature, tensors have been effectively used for capturing the context information in language models. However, the existing methods usually adopt relatively-low order tensors, which have limited expressive power in modeling language. Developing a higher-order tensor representation is challenging, in terms of deriving an effective solution and showing its generality. In this paper, we propose a language model named Tensor Space Language Model (TSLM), by utilizing tensor networks and tensor decomposition. In TSLM, we build a high-dimensional semantic space constructed by the tensor product of word vectors. Theoretically, we prove that such tensor representation is a generalization of the n-gram language model. We further show that this high-order tensor representation can be decomposed to a recursive calculation of conditional probability for language modeling. The experimental results on Penn Tree Bank (PTB) dataset and WikiText benchmark demonstrate the effectiveness of TSLM.
There have been tremendous research efforts in the field of statistical language modeling. Earlier language models based on the Markov assumption are represented by @math -gram models @cite_17 , where the prediction of the next word is conditioned only on the preceding @math words. For @math -gram models, Kneser and Ney proposed the well-known KN smoothing method, which later work continued to improve, alongside low-rank models. The Neural Probabilistic Language Model @cite_18 learns the joint probability function of word sequences in a language and improves on @math -gram models. More recently, RNNs @cite_10 and Long Short-Term Memory (LSTM) networks @cite_1 have achieved promising results on language modeling tasks (a minimal @math -gram example follows this record).
{ "cite_N": [ "@cite_1", "@cite_18", "@cite_10", "@cite_17" ], "mid": [ "1519502414", "2132339004", "179875071", "2121227244" ], "abstract": [ "Artificial neural networks have become state-of-the-art in the task of language modelling on a small corpora. While feed-forward networks are able to take into account only a fixed context length to predict the next word, recurrent neural networks (RNN) can take advantage of all previous words. Due the difficulties in training of RNN, the way could be in using Long Short Term Memory (LSTM) neural network architecture.", "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50 reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18 reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5 on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition", "We address the problem of predicting a word from previous words in a sample of text. In particular, we discuss n-gram models based on classes of words. We also discuss several statistical algorithms for assigning words to classes based on the frequency of their co-occurrence with other words. We find that we are able to extract classes that have the flavor of either syntactically based groupings or semantically based groupings, depending on the nature of the underlying statistics." ] }
1901.10951
2914704920
For autonomous vehicles to be able to operate successfully they need to be aware of other vehicles with sufficient time to make safe, stable plans. Given the possible closing speeds between two vehicles, this necessitates the ability to accurately detect distant vehicles. Many current image-based object detectors using convolutional neural networks exhibit excellent performance on existing datasets such as KITTI. However, the performance of these networks falls when detecting small (distant) objects. We demonstrate that incorporating radar data can boost performance in these difficult situations. We also introduce an efficient automated method for training data generation using cameras of different focal lengths.
Object detection has been a major topic of computer vision research, and over recent years a number of fully convolutional object detectors have been proposed. Two-stage methods, in particular Faster R-CNN @cite_2 , provide state-of-the-art performance but are computationally expensive. One-stage methods @cite_17 @cite_3 are structurally simpler and can operate in real-time, but suffer a performance penalty. To improve performance on difficult examples and bridge the gap between two-stage and one-stage detectors, Lin et al. @cite_5 propose a loss function that focuses the loss on examples about which the classifier is least confident (a minimal version is sketched after this record). This, however, relies on the labels being highly accurate, which may not be the case if they are automatically generated.
{ "cite_N": [ "@cite_5", "@cite_17", "@cite_3", "@cite_2" ], "mid": [ "2743473392", "2193145675", "2963037989", "2613718673" ], "abstract": [ "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. 
Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn." ] }
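Since the focal loss of @cite_5 is central to the point about hard examples and label accuracy, a minimal binary version is sketched below in PyTorch; the alpha and gamma values are the commonly quoted defaults and the anchor counts are made-up example numbers.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss on raw logits: down-weights well-classified examples by
    (1 - p_t)^gamma so training concentrates on hard ones."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

# Example: per-anchor classification scores against sparse 0/1 labels.
logits = torch.randn(8, 1000)                       # e.g. 1000 anchors per image
targets = (torch.rand(8, 1000) < 0.01).float()      # roughly 1 percent positive anchors
print(focal_loss(logits, targets))
```

The down-weighting is also why noisy automatic labels are problematic: a mislabelled example looks exactly like a hard one and receives extra weight.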
1901.10951
2914704920
For autonomous vehicles to be able to operate successfully they need to be aware of other vehicles with sufficient time to make safe, stable plans. Given the possible closing speeds between two vehicles, this necessitates the ability to accurately detect distant vehicles. Many current image-based object detectors using convolutional neural networks exhibit excellent performance on existing datasets such as KITTI. However, the performance of these networks falls when detecting small (distant) objects. We demonstrate that incorporating radar data can boost performance in these difficult situations. We also introduce an efficient automated method for training data generation using cameras of different focal lengths.
A common approach when training object detectors for a specific task is to pre-train a feature extractor on ImageNet @cite_18 and then fine-tune the features with the limited training data available for the task (a brief sketch of this practice follows this record). In @cite_10 , Shen et al. show that, given careful network design, it is possible to obtain state-of-the-art results without this pre-training process. This implies that a fusion network, such as the one we are proposing, is not at an insurmountable disadvantage if pre-training is not performed.
{ "cite_N": [ "@cite_18", "@cite_10" ], "mid": [ "2117539524", "2963813458" ], "abstract": [ "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.", "We present Deeply Supervised Object Detector (DSOD), a framework that can learn object detectors from scratch. State-of-the-art object objectors rely heavily on the off the-shelf networks pre-trained on large-scale classification datasets like Image Net, which incurs learning bias due to the difference on both the loss functions and the category distributions between classification and detection tasks. Model fine-tuning for the detection task could alleviate this bias to some extent but not fundamentally. Besides, transferring pre-trained models from classification to detection between discrepant domains is even more difficult (e.g. RGB to depth images). A better solution to tackle these two critical problems is to train object detectors from scratch, which motivates our proposed DSOD. Previous efforts in this direction mostly failed due to much more complicated loss functions and limited training data in object detection. In DSOD, we contribute a set of design principles for training object detectors from scratch. One of the key findings is that deep supervision, enabled by dense layer-wise connections, plays a critical role in learning a good detector. Combining with several other principles, we develop DSOD following the single-shot detection (SSD) framework. Experiments on PASCAL VOC 2007, 2012 and MS COCO datasets demonstrate that DSOD can achieve better results than the state-of-the-art solutions with much more compact models. For instance, DSOD outperforms SSD on all three benchmarks with real-time detection speed, while requires only 1 2 parameters to SSD and 1 10 parameters to Faster RCNN." ] }
1901.10951
2914704920
For autonomous vehicles to be able to operate successfully they need to be aware of other vehicles with sufficient time to make safe, stable plans. Given the possible closing speeds between two vehicles, this necessitates the ability to accurately detect distant vehicles. Many current image-based object detectors using convolutional neural networks exhibit excellent performance on existing datasets such as KITTI. However, the performance of these networks falls when detecting small (distant) objects. We demonstrate that incorporating radar data can boost performance in these difficult situations. We also introduce an efficient automated method for training data generation using cameras of different focal lengths.
A number of papers use automated methods for generating training labels. In @cite_1 , visual odometry from previous traversals is used to label drivable surfaces for semantic segmentation. Hoermann et al. @cite_11 employ temporal consistency to generate labels by processing data both forwards and backwards in time. Recent work by Adhikari et al. @cite_20 takes a labelling approach related to ours in that it also leverages an existing object detector to generate labelled data for a new task, although a small amount of manual labelling is still required (a sketch of transferring labels between cameras of different focal lengths follows this record).
{ "cite_N": [ "@cite_1", "@cite_20", "@cite_11" ], "mid": [ "2528537661", "2836694521", "2787797206" ], "abstract": [ "We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving.", "This paper proposes an approach for rapid bounding box annotation for object detection datasets. The procedure consists of two stages: The first step is to annotate a part of the dataset manually, and the second step proposes annotations for the remaining samples using a model trained with the first stage annotations. We experimentally study which first second stage split minimizes to total workload. In addition, we introduce a new fully labeled object detection dataset collected from indoor scenes. Compared to other indoor datasets, our collection has more class categories, different backgrounds, lighting conditions, occlusion and high intra-class differences. We train deep learning based object detectors with a number of state-of-the-art models and compare them in terms of speed and accuracy. The fully annotated dataset is released freely available for the research community.", "We tackle the problem of object detection and pose estimation in a shared space downtown environment. For perception multiple laser scanners with 360 coverage were fused in a dynamic occupancy grid map (DOGMa). A single-stage deep convolutional neural network is trained to provide object hypotheses comprising of shape, position, orientation and an existence score from a single input DOGMa. Furthermore, an algorithm for offline object extraction was developed to automatically label several hours of training data. The algorithm is based on a two-pass trajectory extraction, forward and backward in time. Typical for engineered algorithms, the automatic label generation suffers from misdetections, which makes hard negative mining impractical. Therefore, we propose a loss function counteracting the high imbalance between mostly static background and extremely rare dynamic grid cells. Experiments indicate, that the trained network has good generalization capabilities since it detects objects occasionally lost by the label algorithm. Evaluation reaches an average precision (AP) of 75.9" ] }
1901.10951
2914704920
For autonomous vehicles to be able to operate successfully they need to be aware of other vehicles with sufficient time to make safe, stable plans. Given the possible closing speeds between two vehicles, this necessitates the ability to accurately detect distant vehicles. Many current image-based object detectors using convolutional neural networks exhibit excellent performance on existing datasets such as KITTI. However, the performance of these networks falls when detecting small (distant) objects. We demonstrate that incorporating radar data can boost performance in these difficult situations. We also introduce an efficient automated method for training data generation using cameras of different focal lengths.
Multi-modal object detection has been investigated in a number of works. In @cite_4 , camera images are combined with both front-view and bird's-eye-view LIDAR representations for 3D object detection. Data from cameras, LIDAR and radar are all fused in @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_4" ], "mid": [ "2281954672", "2555618208" ], "abstract": [ "The accurate detection and classification of moving objects is a critical aspect of advanced driver assistance systems. We believe that by including the object classification from multiple sensor detections as a key component of the object's representation and the perception process, we can improve the perceived model of the environment. First, we define a composite object representation to include class information in the core object's description. Second, we propose a complete perception fusion architecture based on the evidential framework to solve the detection and tracking of moving objects problem by integrating the composite representation and uncertainty management. Finally, we integrate our fusion approach in a real-time application inside a vehicle demonstrator from the interactIVe IP European project, which includes three main sensors: radar, lidar, and camera. We test our fusion approach using real data from different driving scenarios and focusing on four objects of interest: pedestrian, bike, car, and truck.", "This paper aims at high-accuracy 3D object detection in autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the birds eye view representation of 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25 and 30 AP on the tasks of 3D localization and 3D detection. In addition, for 2D detection, our approach obtains 14.9 higher AP than the state-of-the-art on the hard data among the LIDAR-based methods." ] }
1901.11117
2952355681
Recent works have highlighted the strength of the Transformer architecture on sequence tasks while, at the same time, neural architecture search (NAS) has begun to outperform human-designed models. Our goal is to apply NAS to search for a better alternative to the Transformer. We first construct a large search space inspired by the recent advances in feed-forward sequence models and then run evolutionary architecture search with warm starting by seeding our initial population with the Transformer. To directly search on the computationally expensive WMT 2014 English-German translation task, we develop the Progressive Dynamic Hurdles method, which allows us to dynamically allocate more resources to more promising candidate models. The architecture found in our experiments -- the Evolved Transformer -- demonstrates consistent improvement over the Transformer on four well-established language tasks: WMT 2014 English-German, WMT 2014 English-French, WMT 2014 English-Czech and LM1B. At a big model size, the Evolved Transformer establishes a new state-of-the-art BLEU score of 29.8 on WMT'14 English-German; at smaller sizes, it achieves the same quality as the original "big" Transformer with 37.6 less parameters and outperforms the Transformer by 0.7 BLEU at a mobile-friendly model size of 7M parameters.
RNNs have long been used as the default option for applying neural networks to sequence modeling @cite_16 @cite_31 , with LSTM @cite_10 and GRU @cite_20 architectures being the most popular. However, recent work has shown that RNNs are not necessary to build state-of-the-art sequence models. For example, many high performance convolutional models have been designed, such as WaveNet @cite_3 , Gated Convolution Networks @cite_34 , Conv Seq2Seq @cite_46 and Dynamic Lightweight Convolution model @cite_23 . Perhaps the most promising architecture in this direction is the Transformer architecture @cite_39 , which relies only on multi-head attention to convey spatial information. In this work, we use both convolutions and attention in our search space to leverage the strengths of both layer types.
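To make the combination of these two layer families concrete, here is a small, purely illustrative sketch of a block that mixes a depthwise-convolution branch with a multi-head self-attention branch; the module name, dimensions, and residual wiring are assumptions for exposition and do not correspond to any cell in the Evolved Transformer search space.

    import torch
    import torch.nn as nn

    class ConvAttentionBlock(nn.Module):
        """Toy block mixing the two layer families discussed above: a depthwise
        1-D convolution branch and a multi-head self-attention branch."""
        def __init__(self, d_model=64, n_heads=4, kernel_size=3):
            super().__init__()
            self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                                  padding=kernel_size // 2, groups=d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm = nn.LayerNorm(d_model)

        def forward(self, x):                      # x: (batch, seq_len, d_model)
            conv_out = self.conv(x.transpose(1, 2)).transpose(1, 2)
            attn_out, _ = self.attn(x, x, x)       # self-attention over the sequence
            return self.norm(x + conv_out + attn_out)   # residual mix of both branches

    # Example: a batch of 8 sequences of length 16 with 64-dim embeddings.
    y = ConvAttentionBlock()(torch.randn(8, 16, 64))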
{ "cite_N": [ "@cite_46", "@cite_3", "@cite_34", "@cite_39", "@cite_23", "@cite_31", "@cite_16", "@cite_10", "@cite_20" ], "mid": [ "2964265128", "2519091744", "2963970792", "2963403868", "2908336025", "2964308564", "2130942839", "", "" ], "abstract": [ "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training to better exploit the GPU hardware and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.*", "", "The pre-dominant approach to language modeling to date is based on recurrent neural networks. Their success on this task is often linked to their ability to capture unbounded context. In this paper we develop a finite context approach through stacked convolutions, which can be more efficient since they allow parallelization over sequential tokens. We propose a novel simplified gating mechanism that outperforms (2016b) and investigate the impact of key architectural decisions. The proposed approach achieves state-of-the-art on the WikiText-103 benchmark, even though it features long-term dependencies, as well as competitive results on the Google Billion Words benchmark. Our model reduces the latency to score a sentence by an order of magnitude compared to a recurrent baseline. To our knowledge, this is the first time a non-recurrent approach is competitive with strong recurrent models on these large scale language tasks.", "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.", "", "Abstract: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. 
In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "", "" ] }
1901.10995
2914261249
A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of "superhuman" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics).
Go-Explore is reminiscent of earlier work that separates exploration and exploitation (e.g. ), in which exploration follows a reward-agnostic Goal Exploration Process @cite_42 (an algorithm similar to novelty search @cite_46 ), from which experience is collected to prefill the replay buffer of an off-policy RL algorithm, in this case DDPG @cite_66 . This algorithm then extracts the highest-rewarding policy from the experience gathered. In contrast, Go-Explore further decomposes exploration into three elements: accumulate stepping stones (interestingly different states), return to promising stepping stones, and explore from them in search of additional stepping stones (i.e. principles 1 and 2 above). The impressive results Go-Explore achieves by slotting in very simple algorithms for each element show the value of this decomposition.
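As an illustration of that decomposition only, the following is a minimal sketch of an exploration phase built around an archive of cells; the environment interface (reset/restore/step/sample_action), the downscale function, and the uniform cell selection are assumptions for exposition, not the authors' implementation.

    import random

    def explore(env, downscale, n_iterations=1000, k_steps=100):
        """Minimal Go-Explore-style loop: remember interesting cells, return to
        one by restoring the simulator, then explore from it with random actions."""
        archive = {}                                  # cell -> (score, sim_state, trajectory)
        obs, sim = env.reset()
        archive[downscale(obs)] = (0.0, sim, [])

        for _ in range(n_iterations):
            cell = random.choice(list(archive))       # 1) select a promising cell
            score, sim, traj = archive[cell]
            env.restore(sim)                          # 2) return to it without exploration
            for _ in range(k_steps):                  # 3) explore from it
                action = env.sample_action()
                obs, reward, done, sim = env.step(action)
                score, traj = score + reward, traj + [action]
                c = downscale(obs)
                if c not in archive or score > archive[c][0]:
                    archive[c] = (score, sim, traj)   # keep the best way to reach each cell
                if done:
                    break
        return archive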
{ "cite_N": [ "@cite_46", "@cite_42", "@cite_66" ], "mid": [ "135283623", "2744921630", "2963864421" ], "abstract": [ "By synthesizing a growing body ofwork in search processes that are not driven by explicit objectives, this paper advances the hypothesis that there is a fundamental problem with the dominant paradigm of objective-based search in evolutionary computation and genetic programming: Most ambitious objectives do not illuminate a path to themselves. That is, the gradient of improvement induced by ambitious objectives tends to lead not to the objective itself but instead to deadend local optima. Indirectly supporting this hypothesis, great discoveries often are not the result of objective-driven search. For example, the major inspiration for both evolutionary computation and genetic programming, natural evolution, innovates through an open-ended process that lacks a final objective. Similarly, large-scale cultural evolutionary processes, such as the evolution of technology, mathematics, and art, lack a unified fixed goal. In addition, direct evidence for this hypothesis is presented from a recently-introduced search algorithm called novelty search. Though ignorant of the ultimate objective of search, in many instances novelty search has counter-intuitively outperformed searching directly for the objective, including a wide variety of randomly-generated problems introduced in an experiment in this chapter. Thus a new understanding is beginning to emerge that suggests that searching for a fixed objective, which is the reigning paradigm in evolutionary computation and even machine learning as a whole, may ultimately limit what can be achieved. Yet the liberating implication of this hypothesis argued in this paper is that by embracing search processes that are not driven by explicit objectives, the breadth and depth of what is reachable through evolutionary methods such as genetic programming may be greatly expanded.", "Intrinsically motivated spontaneous exploration is a key enabler of autonomous lifelong learning in human children. It allows them to discover and acquire large repertoires of skills through self-generation, self-selection, self-ordering and self-experimentation of learning goals. We present the unsupervised multi-goal reinforcement learning formal framework as well as an algorithmic approach called intrinsically motivated goal exploration processes (IMGEP) to enable similar properties of autonomous learning in machines. The IMGEP algorithmic architecture relies on several principles: 1) self-generation of goals as parameterized reinforcement learning problems; 2) selection of goals based on intrinsic rewards; 3) exploration with parameterized time-bounded policies and fast incremental goal-parameterized policy search; 4) systematic reuse of information acquired when targeting a goal for improving other goals. We present a particularly efficient form of IMGEP that uses a modular representation of goal spaces as well as intrinsic rewards based on learning progress. We show how IMGEPs automatically generate a learning curriculum within an experimental setup where a real humanoid robot can explore multiple spaces of goals with several hundred continuous dimensions. While no particular target goal is provided to the system beforehand, this curriculum allows the discovery of skills of increasing complexity, that act as stepping stone for learning more complex skills (like nested tool use). 
We show that learning several spaces of diverse problems can be more efficient for learning complex skills than only trying to directly learn these complex skills. We illustrate the computational efficiency of IMGEPs as these robotic experiments use a simple memory-based low-level policy representations and search algorithm, enabling the whole system to learn online and incrementally on a Raspberry Pi 3.", "Abstract: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs." ] }
1901.10995
2914261249
A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of "superhuman" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics).
Go-Explore's strategy of first finding a solution and then robustifying around it has precedent in Guided Policy Search @cite_19 . However, that method requires a non-deceptive, non-sparse, differentiable loss function to find solutions, meaning it cannot be applied directly to problems where rewards are discrete, sparse, or deceptive, as both Atari and many real-world problems are. Further, Guided Policy Search requires either a differentiable model of the world or learning a set of local models, which to be tractable requires the full state of the system to be observable during training.
{ "cite_N": [ "@cite_19" ], "mid": [ "2964161785" ], "abstract": [ "Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods." ] }
1901.10995
2914261249
A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of "superhuman" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics).
The idea of planning (searching in a deterministic model of the world to find a good strategy) and then training a policy to mimic what was learned is reminiscent of . It plans (in the Atari emulator) with UCT @cite_100 @cite_22 @cite_73 , which is slow, and then trains a much faster policy with supervised learning to imitate the planning algorithm. At first glance it seems that planning with UCT serves a similar role to the exploration phase in Go-Explore, but UCT is quite different in several ways that make it inferior for domains that are either high-dimensional or hard-exploration. That is true even though UCT does have a form of exploration bonus.
{ "cite_N": [ "@cite_100", "@cite_73", "@cite_22" ], "mid": [ "1625390266", "2126316555", "183472599" ], "abstract": [ "For large state-space Markovian Decision Problems Monte-Carlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.", "Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and nongame domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.", "Monte-Carlo search has been successful in many non-deterministic games, and recently in deterministic games with high branching factor. One of the drawbacks of the current approaches is that even if the iterative process would last for a very long time, the selected move does not necessarily converge to a game-theoretic optimal one. In this paper we introduce a new algorithm, UCT, which extends a bandit algorithm for Monte-Carlo search. It is proven that the probability that the algorithm selects the correct move converges to 1. Moreover it is shown empirically that the algorithm converges rather fast even in comparison with alpha-beta search. Experiments in Amazons and Clobber indicate that the UCT algorithm outperforms considerably a plain Monte-Carlo version, and it is competitive against alpha-beta based game programs." ] }
1901.10995
2914261249
A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of "superhuman" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics).
UCT plans in a model of the world so as to decide on the next action to take in the real environment. An exploration bonus is used during the planning phase, but only extrinsic rewards are considered when choosing the next action to take. This approach can improve performance in domains with relatively dense rewards, but fails in sparse-reward domains, as rewards are likely to lie beyond the planning horizon of the algorithm. Once planning from one state is complete, an action is taken and the planning process is run again from the next state. UCT does not try to explore all states, and each run of UCT is independent of which states were visited in previous planning steps. As such, UCT (either within an episode or across episodes) does not try to discover new terrain: its exploration bonus only helps it within the current short-horizon planning phase. As mentioned in , UCT scores 0 on Montezuma's Revenge and Pitfall @cite_91 @cite_60 .
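For reference, the exploration bonus mentioned above comes from the standard UCT (UCB1-based) node-selection rule, which at each tree node picks the action

    $a^{*} = \arg\max_{a}\left[\, Q(s,a) + c\,\sqrt{\frac{\ln N(s)}{N(s,a)}}\,\right]$

where $Q(s,a)$ is the mean return of simulations through $(s,a)$, $N(\cdot)$ counts visits, and $c$ trades off exploration against exploitation. Because these statistics are rebuilt for each new planning call, the bonus only guides search within the current planning horizon, which is why it does not accumulate exploration progress across episodes the way Go-Explore's archive does.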
{ "cite_N": [ "@cite_91", "@cite_60" ], "mid": [ "2150468603", "2401523698" ], "abstract": [ "In this article we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. All of the software, including the benchmark agents, is publicly available.", "The Atari 2600 games supported in the Arcade Learning Environment [, 2013] all feature a known initial (RAM) state and actions that have deterministic effects. Classical planners, however, cannot be used off-the-shelf as there is no compact PDDL-model of the games, and action effects and goals are not known a priori. Indeed, there are no explicit goals, and the planner must select actions on-line while interacting with a simulator that returns successor states and rewards. None of this precludes the use of blind lookahead algorithms for action selection like breadth-first search or Dijkstra's yet such methods are not effective over large state spaces. We thus turn to a different class of planning methods introduced recently that have been shown to be effective for solving large planning problems but which do not require prior knowledge of state transitions, costs (rewards) or goals. The empirical results over 54 Atari games show that the simplest such algorithm performs at the level of UCT, the state-of-the-art planning method in this domain, and suggest the potential of width-based methods for planning with simulators when factored, compact action models are not available." ] }
1901.10995
2914261249
A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of "superhuman" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics).
Another approach to planning is Fractal Monte Carlo (FMC) @cite_78 . When choosing the next action, it takes into account both the expected reward and the novelty of that action, and in that way is more similar to Go-Explore. In FMC, a planning process is initiated from each state the agent visits. Planning is done within a deterministic version of the game emulator. A fixed number of workers are started in the state from which planning is occurring, and they perform random walks in state space. Periodically, workers that have accumulated lower reward and/or are in less novel states are replaced by "clones" of more successful workers. Novelty is approximated as the Euclidean distance of the worker's state (in the original, raw observation space) to that of a randomly selected other worker.
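A simplified sketch of the clone-or-keep step just described is given below; the way reward and novelty are normalized and combined into a single score, and the cloning probability, are illustrative assumptions rather than the exact FMC formulas.

    import numpy as np

    def clone_step(states, rewards, rng, eps=1e-8):
        """Illustrative FMC-style cloning: workers with low reward and/or low
        novelty are probabilistically replaced by copies of better workers.
        states: (n_workers, state_dim) array; rewards: (n_workers,) array."""
        states = np.asarray(states, dtype=float)
        rewards = np.asarray(rewards, dtype=float)
        n = len(states)

        # Novelty of each worker = distance to a randomly paired other worker.
        partners = rng.permutation(n)
        novelty = np.linalg.norm(states - states[partners], axis=1)

        norm = lambda x: (x - x.min()) / (x.max() - x.min() + eps)
        score = norm(rewards) * norm(novelty)         # combined reward/novelty score

        clone_from = np.arange(n)
        for i in range(n):
            j = rng.integers(n)                       # compare against a random peer
            p = (score[j] - score[i]) / (score[i] + eps)
            if rng.random() < p:                      # only clone "upwards"
                clone_from[i] = j
        return states[clone_from].copy(), rewards[clone_from].copy()

    # Example: 32 workers in a 4-dimensional toy state space.
    rng = np.random.default_rng(0)
    new_states, new_rewards = clone_step(rng.normal(size=(32, 4)), rng.random(32), rng)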
{ "cite_N": [ "@cite_78" ], "mid": [ "2791666116" ], "abstract": [ "Fractal AI is a theory for general artificial intelligence. It allows deriving new mathematical tools that constitute the foundations for a new kind of stochastic calculus, by modelling information using cellular automaton-like structures instead of smooth functions. In the repository included we are presenting a new Agent, derived from the first principles of the theory, which is capable of solving Atari games several orders of magnitude more efficiently than other similar techniques, like Monte Carlo Tree Search. The code provided shows how it is now possible to beat some of the current State of The Art benchmarks on Atari games, without previous learning and using less than 1000 samples to calculate each one of the actions when standard MCTS uses 3 Million samples. Among other things, Fractal AI makes it possible to generate a huge database of top performing examples with a very little amount of computation required, transforming Reinforcement Learning into a supervised problem. The algorithm presented is capable of solving the exploration vs exploitation dilemma on both the discrete and continuous cases, while maintaining control over any aspect of the behaviour of the Agent. From a general approach, new techniques presented here have direct applications to other areas such as Non-equilibrium thermodynamics, chemistry, quantum physics, economics, information theory, and non-linear control theory." ] }
1901.10995
2914261249
A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of "superhuman" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics).
On Pitfall, SOORL @cite_51 was the first planning algorithm to achieve a non-zero score, but it did so in a deterministic test environment. It combines learning a model of the environment, domain knowledge, and a value function that is optimistic about the value of unseen states, thus effectively providing an exploration bonus. At the end of 50 episodes of training, the maximum reported number of episodes, SOORL achieved an average of about 200 points across runs, and its best run scored an average of 606.6 with a maximum of 4,000.
{ "cite_N": [ "@cite_51" ], "mid": [ "2901269338" ], "abstract": [ "Humans learn to play video games significantly faster than the state-of-the-art reinforcement learning (RL) algorithms. People seem to build simple models that are easy to learn to support planning and strategic exploration. Inspired by this, we investigate two issues in leveraging model-based RL for sample efficiency. First we investigate how to perform strategic exploration when exact planning is not feasible and empirically show that optimistic Monte Carlo Tree Search outperforms posterior sampling methods. Second we show how to learn simple deterministic models to support fast learning using object representation. We illustrate the benefit of these ideas by introducing a novel algorithm, Strategic Object Oriented Reinforcement Learning (SOORL), that outperforms state-of-the-art algorithms in the game of Pitfall! in less than 50 episodes." ] }
1901.10912
2914607694
We propose to meta-learn causal structures based on how fast a learner adapts to new distributions arising from sparse distributional changes, e.g. due to interventions, actions of agents and other sources of non-stationarities. We show that under this assumption, the correct causal structural choices lead to faster adaptation to modified distributions because the changes are concentrated in one or just a few mechanisms when the learned knowledge is modularized appropriately. This leads to sparse expected gradients and a lower effective number of degrees of freedom needing to be relearned while adapting to the change. It motivates using the speed of adaptation to a modified distribution as a meta-learning objective. We demonstrate how this can be used to determine the cause-effect relationship between two observed variables. The distributional changes do not need to correspond to standard interventions (clamping a variable), and the learner has no direct knowledge of these interventions. We show that causal structures can be parameterized via continuous variables and learned end-to-end. We then explore how these ideas could be used to also learn an encoder that would map low-level observed variables to unobserved causal variables leading to faster adaptation out-of-distribution, learning a representation space where one can satisfy the assumptions of independent mechanisms and of small and sparse changes in these mechanisms due to actions and non-stationarities.
Approaches for Bayesian network structure learning based on discrete search over model structures and simulated annealing are reviewed in . There, it has been common to use Minimum Description Length (MDL) principles to score and search over models , or the Bayesian Information Criterion (BIC) to search for models with high relative posterior probability @cite_2 . Prior work such as has also relied upon purely observational data, without the possibility of interventions, and therefore focused on learning likelihood or hypothesis equivalence classes for network structures. Since then, numerous methods have also been devised to infer the causal direction from purely observational data , based on specific, generally parametric, assumptions about the underlying causal graph. Pearl's seminal work on do-calculus @cite_5 @cite_1 @cite_3 lays a foundation for expressing the impact of interventions on probabilistic graphical models -- we use it in our work. In contrast, here we propose a meta-learning objective for learning causal structure that requires no specific constraints on the causal graph, only sparsity of the changes in distribution under the correct causal graph parametrization.
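For concreteness, the BIC score mentioned above trades off data fit against model complexity; in one standard form, for a candidate graph $G$ with maximum-likelihood parameters $\hat{\theta}_G$, $d_G$ free parameters, and a dataset $D$ of $n$ observations,

    $\mathrm{BIC}(G) = \log p(D \mid \hat{\theta}_G, G) - \frac{d_G}{2}\log n$

so structure search prefers graphs that explain the data well with few parameters; BIC serves as an asymptotic approximation to the log marginal likelihood used when comparing relative posterior probabilities of structures.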
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_3", "@cite_2" ], "mid": [ "2049910836", "", "2466989778", "2170112109" ], "abstract": [ "SUMMARY The primary aim of this paper is to show how graphical models can be used as a mathematical language for integrating statistical and subject-matter information. In particular, the paper develops a principled, nonparametric framework for causal inference, in which diagrams are queried to determine if the assumptions available are sufficient for identifying causal effects from nonexperimental data. If so the diagrams can be queried to produce mathematical expressions for causal effects in terms of observed distributions; otherwise, the diagrams can be queried to suggest additional observations or auxiliary experiments from which the desired inferences can be obtained.", "", "We review concepts, principles, and tools that unify current approaches to causal analysis and attend to new challenges presented by big data. In particular, we address the problem of data fusion—piecing together multiple datasets collected under heterogeneous conditions (i.e., different populations, regimes, and sampling methods) to obtain valid answers to queries of interest. The availability of multiple heterogeneous datasets presents new opportunities to big data analysts, because the knowledge that can be acquired from combined data would not be possible from any individual source alone. However, the biases that emerge in heterogeneous environments require new analytical tools. Some of these biases, including confounding, sampling selection, and cross-population biases, have been addressed in isolation, largely in restricted parametric models. We here present a general, nonparametric framework for handling these biases and, ultimately, a theoretical solution to the problem of data fusion in causal inference tasks.", "We describe a Bayesian approach for learning Bayesian networks from a combination of prior knowledge and statistical data. First and foremost, we develop a methodology for assessing informative priors needed for learning. Our approach is derived from a set of assumptions made previously as well as the assumption of likelihood equivalence, which says that data should not help to discriminate network structures that represent the same assertions of conditional independence. We show that likelihood equivalence when combined with previously made assumptions implies that the user's priors for network parameters can be encoded in a single Bayesian network for the next case to be seen—a prior network—and a single measure of confidence for that network. Second, using these priors, we show how to compute the relative posterior probabilities of network structures given data. Third, we describe search methods for identifying network structures with high posterior probabilities. We describe polynomial algorithms for finding the highest-scoring network structures in the special case where every node has at most k e 1 parent. For the general case (k > 1), which is NP-hard, we review heuristic search algorithms including local search, iterative local search, and simulated annealing. Finally, we describe a methodology for evaluating Bayesian-network learning algorithms, and apply this approach to a comparison of various approaches." ] }
1907.10471
2962991329
We present a new two-stage 3D object detection framework, named sparse-to-dense 3D Object Detector (STD). The first stage is a bottom-up proposal generation network that uses raw point cloud as input to generate accurate proposals by seeding each point with a new spherical anchor. It achieves a high recall with less computation compared with prior works. Then, PointsPool is applied for generating proposal features by transforming their interior point features from sparse expression to compact representation, which saves even more computation time. In box prediction, which is the second stage, we implement a parallel intersection-over-union (IoU) branch to increase awareness of localization accuracy, resulting in further improved performance. We conduct experiments on KITTI dataset, and evaluate our method in terms of 3D object and Bird's Eye View (BEV) detection. Our method outperforms other state-of-the-arts by a large margin, especially on the hard set, with inference speed more than 10 FPS.
There are several approaches to semantic segmentation on point clouds. In @cite_5 , a projection function converts LiDAR points to a UV map, which is then classified at the pixel level by 2D semantic segmentation @cite_5 @cite_7 @cite_3 . In @cite_11 @cite_21 , multi-view-based methods produce the segmentation mask by fusing information from different views. Other solutions, such as @cite_2 @cite_1 @cite_4 @cite_6 @cite_27 , segment the point cloud directly from raw LiDAR data. They generate features for each point while preserving the original structural information; a max-pooling operation then gathers a global feature, which is concatenated with the local per-point features for further processing.
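The max-pooling and concatenation step just described can be sketched as follows; the feature dimensions and the use of plain NumPy are illustrative assumptions, not the architecture of any specific cited network.

    import numpy as np

    def pointnet_style_features(point_feats):
        """point_feats: (N, C) per-point local features (e.g. from a shared MLP).
        Returns (N, 2C): each point's local feature concatenated with a
        permutation-invariant global feature, as used for per-point segmentation."""
        global_feat = point_feats.max(axis=0)                            # (C,) max-pool over points
        global_tiled = np.broadcast_to(global_feat, point_feats.shape)   # (N, C)
        return np.concatenate([point_feats, global_tiled], axis=1)       # (N, 2C)

    # Example: 1024 points with 64-dim local features -> (1024, 128) fused features.
    fused = pointnet_style_features(np.random.randn(1024, 64))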
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_21", "@cite_1", "@cite_3", "@cite_6", "@cite_27", "@cite_2", "@cite_5", "@cite_11" ], "mid": [ "", "2560023338", "2594519801", "2560609797", "2412782625", "2810641456", "2963719584", "2963121255", "2766577666", "2795014656" ], "abstract": [ "", "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.", "A key requirement for leveraging supervised deep learning methods is the availability of large, labeled datasets. Unfortunately, in the context of RGB-D scene understanding, very little data is available – current datasets cover a small range of scene views and have limited semantic annotations. To address this issue, we introduce ScanNet, an RGB-D video dataset containing 2.5M views in 1513 scenes annotated with 3D camera poses, surface reconstructions, and semantic segmentations. To collect this data, we designed an easy-to-use and scalable RGB-D capture system that includes automated surface reconstruction and crowdsourced semantic annotation. We show that using this data helps achieve state-of-the-art performance on several 3D scene understanding tasks, including 3D object classification, semantic voxel labeling, and CAD model retrieval.", "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. 
Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "Recently, 3D understanding research sheds light on extracting features from point cloud directly, which requires effective shape pattern description of point clouds. Inspired by the outstanding 2D shape descriptor SIFT, we design a module called PointSIFT that encodes information of different orientations and is adaptive to scale of shape. Specifically, an orientation-encoding unit is designed to describe eight crucial orientations, and multi-scale representation is achieved by stacking several orientation-encoding units. PointSIFT module can be integrated into various PointNet-based architecture to improve the representation ability. Extensive experiments show our PointSIFT-based framework outperforms state-of-the-art method on standard benchmark datasets. The code and trained model will be published accompanied by this paper.", "This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website.1", "Few prior works study deep learning on point sets. PointNet is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. 
With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.", "In this paper, we address semantic segmentation of road-objects from 3D LiDAR point clouds. In particular, we wish to detect and categorize instances of interest, such as cars, pedestrians and cyclists. We formulate this problem as a point- wise classification problem, and propose an end-to-end pipeline called SqueezeSeg based on convolutional neural networks (CNN): the CNN takes a transformed LiDAR point cloud as input and directly outputs a point-wise label map, which is then refined by a conditional random field (CRF) implemented as a recurrent layer. Instance-level labels are then obtained by conventional clustering algorithms. Our CNN model is trained on LiDAR point clouds from the KITTI dataset, and our point-wise segmentation labels are derived from 3D bounding boxes from KITTI. To obtain extra training data, we built a LiDAR simulator into Grand Theft Auto V (GTA-V), a popular video game, to synthesize large amounts of realistic training data. Our experiments show that SqueezeSeg achieves high accuracy with astonishingly fast and stable runtime (8.7 ms per frame), highly desirable for autonomous driving applications. Furthermore, additionally training on synthesized data boosts validation accuracy on real-world data. Our source code and synthesized data will be open-sourced.", "We present 3DMV, a novel method for 3D semantic scene segmentation of RGB-D scans in indoor environments using a joint 3D-multi-view prediction network. In contrast to existing methods that either use geometry or RGB data as input for this task, we combine both data modalities in a joint, end-to-end network architecture. Rather than simply projecting color data into a volumetric grid and operating solely in 3D – which would result in insufficient detail – we first extract feature maps from associated RGB images. These features are then mapped into the volumetric feature grid of a 3D network using a differentiable back-projection layer. Since our target is 3D scanning scenarios with possibly many frames, we use a multi-view pooling approach in order to handle a varying number of RGB input views. This learned combination of RGB and geometric features with our joint 2D-3D architecture achieves significantly better results than existing baselines. For instance, our final result on the ScanNet 3D segmentation benchmark increases from 52.8 to 75 accuracy compared to existing volumetric architectures." ] }
1907.10471
2962991329
We present a new two-stage 3D object detection framework, named sparse-to-dense 3D Object Detector (STD). The first stage is a bottom-up proposal generation network that uses the raw point cloud as input to generate accurate proposals by seeding each point with a new spherical anchor. It achieves a high recall with less computation compared with prior works. Then, PointsPool is applied for generating proposal features by transforming their interior point features from a sparse expression to a compact representation, which saves even more computation time. In box prediction, which is the second stage, we implement a parallel intersection-over-union (IoU) branch to increase awareness of localization accuracy, resulting in further improved performance. We conduct experiments on the KITTI dataset and evaluate our method in terms of 3D object and Bird's Eye View (BEV) detection. Our method outperforms other state-of-the-art approaches by a large margin, especially on the hard set, with an inference speed of more than 10 FPS.
For multi-view methods, MV3D @cite_0 projects the LiDAR point cloud to BEV and trains a Region Proposal Network (RPN) to generate positive proposals. It merges features from BEV, image view and front view in order to generate refined 3D bounding boxes. AVOD @cite_18 improves MV3D by fusing image and BEV features in the manner of @cite_33 . Unlike MV3D, which only merges features in the refinement phase, it also merges features from multiple views in the RPN phase to generate positive proposals. These methods are still limited when detecting small objects such as pedestrians and cyclists, and they do not handle cases with multiple objects along the depth direction.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_33" ], "mid": [ "2555618208", "2774996270", "2565639579" ], "abstract": [ "This paper aims at high-accuracy 3D object detection in autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the birds eye view representation of 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25 and 30 AP on the tasks of 3D localization and 3D detection. In addition, for 2D detection, our approach obtains 14.9 higher AP than the state-of-the-art on the hard data among the LIDAR-based methods.", "We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is at: this https URL", "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available." ] }
1907.10599
2963790895
Are neural networks biased toward simple functions? Does depth always help learn more complex features? Is training the last layer of a network as good as training all layers? These questions seem unrelated at face value, but in this work we give all of them a common treatment from the spectral perspective. We will study the spectra of the *Conjugate Kernel*, CK (also called the *Neural Network-Gaussian Process Kernel*), and the *Neural Tangent Kernel*, NTK. Roughly, the CK and the NTK tell us respectively "what a network looks like at initialization" and "what a network looks like during and after training." Their spectra then encode valuable information about the initial distribution and the training and generalization properties of neural networks. By analyzing the eigenvalues, we lend novel insights into the questions put forth at the beginning, and we verify these insights by extensive experiments on neural networks. We believe the computational tools we develop here for analyzing the spectra of CK and NTK serve as a solid foundation for future studies of deep neural networks. We have open-sourced the code for these tools and for generating the plots in this paper at this http URL.
This signal propagation perspective can be refined via random matrix theory @cite_25 @cite_24 . In these works, free probability is leveraged to compute the singular value distribution of the input-output map given by the random neural network, as the input dimension and width tend to infinity together. Other works also investigate various questions of neural network training and generalization from the random matrix perspective.
{ "cite_N": [ "@cite_24", "@cite_25" ], "mid": [ "2789210533", "2963570896" ], "abstract": [ "Recent work has shown that tight concentration of the entire spectrum of singular values of a deep network's input-output Jacobian around one at initialization can speed up learning by orders of magnitude. Therefore, to guide important design choices, it is important to build a full theoretical understanding of the spectra of Jacobians at initialization. To this end, we leverage powerful tools from free probability theory to provide a detailed analytic understanding of how a deep network's Jacobian spectrum depends on various hyperparameters including the nonlinearity, the weight and bias distributions, and the depth. For a variety of nonlinearities, our work reveals the emergence of new universal limiting spectral distributions that remain concentrated around one even as the depth goes to infinity.", "It is well known that weight initialization in deep networks can have a dramatic impact on learning speed. For example, ensuring the mean squared singular value of a network's input-output Jacobian is O(1) is essential for avoiding exponentially vanishing or exploding gradients. Moreover, in deep linear networks, ensuring that all singular values of the Jacobian are concentrated near 1 can yield a dramatic additional speed-up in learning; this is a property known as dynamical isometry. However, it is unclear how to achieve dynamical isometry in nonlinear deep networks. We address this question by employing powerful tools from free probability theory to analytically compute the entire singular value distribution of a deep network's input-output Jacobian. We explore the dependence of the singular value distribution on the depth of the network, the weight initialization, and the choice of nonlinearity. Intriguingly, we find that ReLU networks are incapable of dynamical isometry. On the other hand, sigmoidal networks can achieve isometry, but only with orthogonal weight initialization. Moreover, we demonstrate empirically that deep nonlinear networks achieving dynamical isometry learn orders of magnitude faster than networks that do not. Indeed, we show that properly-initialized deep sigmoidal networks consistently outperform deep ReLU networks. Overall, our analysis reveals that controlling the entire distribution of Jacobian singular values is an important design consideration in deep learning." ] }