Literature survey on low rank approximation of matrices <s> Adaptive Cross Approximation (ACA) <s> An introduction to tensor product spaces.- Proximinality.- The alternating algorithm.- Central proximity maps.- The diliberto-straus algorithm in C(S x T).- The algorithm of von golitschek.- The L 1-version of the diliberto-straus algorithm.- Estimates of projection constants.- Minimal projections.- Appendix on the bochner integral.- Appendix on miscellaneous results in banach spaces. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> Adaptive Cross Approximation (ACA) <s> Summary. This article considers the problem of approximating a general asymptotically smooth function in two variables, typically arising in integral formulations of boundary value problems, by a sum of products of two functions in one variable. From these results an iterative algorithm for the low-rank approximation of blocks of large unstructured matrices generated by asymptotically smooth functions is developed. This algorithm uses only few entries from the original block ::: and since it has a natural stopping criterion the approximative rank is not needed in advance. <s> BIB002 </s> Literature survey on low rank approximation of matrices <s> Adaptive Cross Approximation (ACA) <s> In this article we present and analyze a new scheme for the approximation of multivariate functions (d=3,4) by sums of products of univariate functions. The method is based on the Adaptive Cross Approximation (ACA) initially designed for the approximation of bivariate functions. To demonstrate the linear complexity of the schemes, we apply it to large-scale multidimensional arrays generated by the evaluation of functions. <s> BIB003 </s> Literature survey on low rank approximation of matrices <s> Adaptive Cross Approximation (ACA) <s> In this article we present a generalized version of the Cross Approximation for 3d-tensors. 
The given tensor $${a\in\mathbb{R}^{n\times n\times n}}$$ is represented as a matrix of vectors and 2d adaptive Cross Approximation is applied in a nested way to get the tensor decomposition. The main focus lies on theoretical issues of the construction such as the desired interpolation property or the explicit formulas for the vectors in the decomposition. The computational complexity of the proposed algorithm is shown to be linear in n. <s> BIB004
Adaptive cross approximation has been introduced in BIB003 BIB002 . In contrast to the pseudoskeleton method, here the rows and columns are chosen adaptively, such that in each step a rank one approximation is added to the approximant. We try to keep the notation used when it was first investigated thoroughly, so we prefer the language of functions here instead of matrices, identifying both by a regular, sufficiently dense grid in [0, 1]^2. We are basically concerned with the questions: How can f be approximated by something like f(x, y) ≈ Σ_{i=1}^{k} (g_i ⊗ h_i)(x, y), (16) i.e., by a finite sum of tensor products of one-dimensional functions (here we write (g ⊗ h)(x, y) = g(x)h(y))? And how good is this approximation? The first famous result in this direction is due to Schmidt, who gave a complete answer in the case f ∈ L_2. A standard reference for questions in this area is BIB001 , a nice survey can be found in . Now a very special choice of functions g, h in (16) is considered, namely the restriction of f itself to certain lines. In the discrete setting, that means we only allow columns and rows of the matrix A to be building blocks. Let f : [0, 1]^2 → R, then the recursion R_0(x, y) = f(x, y) and R_k(x, y) = R_{k-1}(x, y) − R_{k-1}(x, y_k) R_{k-1}(x_k, y) / R_{k-1}(x_k, y_k), (17) with points 1 ≤ x_k, y_k ≤ n chosen such that R_{k-1}(x_k, y_k) ≠ 0, is the heart of the two-dimensional cross approximation, compare BIB002 . So in each step a pivot (x_k, y_k) with R_{k-1}(x_k, y_k) ≠ 0 is chosen (the process of choosing these pivots is called pivoting) and the corresponding row and column are used to add another rank 1 approximation for the remainder R_{k-1}. After k iteration steps the approximant S_k(x, y) = f(x, y) − R_k(x, y) is calculated. S_k(x, y) takes the form BIB003 BIB002 S_k(x, y) = Σ_{i,j=1}^{k} f(x, y_i) (M_k^{-1})_{ij} f(x_j, y), where (M_k)_{ij} = f(x_i, y_j), i, j = 1, 2, .., k. In matrix form, the columns of C and rows of R are iteratively added and the approximation of A_{n×n} takes the form (compare with the pseudoskeleton approximation) A ≃ CGR, where G = M_k^{-1}. The cross approximation has nice properties such as the interpolation property and the rank property.
For any function f : [0, 1]^2 → R, we have the interpolation property S_k(x, y) = f(x, y) as long as x = x_i ∨ y = y_i for at least one i ∈ {1, . . . , k}. That means, on the chosen crosses this procedure is exact. The next result takes a priori knowledge about structural properties of the underlying function into account. We say that a function f has separation rank k if one can represent it as f(x, y) = Σ_{i=1}^{k} g_i(x) h_i(y) and there is no such representation with a smaller number of summands. This is just the continuous analogue of the rank of a matrix. We call the following the rank property: if f has separation rank k, cross approximation reproduces f after k steps exactly, that means R_k ≡ 0. A matrix version of this result was first proved in BIB002 (Lemma 7). There has also been some effort towards error analysis: an error estimate (18) has been stated in terms of E(f, G)_{C([0,1]^2)} = inf_{g∈G} ||f − g||_∞, the error of best approximation of f in G measured in the sup-norm (compare with the matrix version of this result stated at the end of Section 3.1). Similar results can be found in BIB003 . In (18) a very special choice of pivots is crucial, namely the maximal volume concept, i.e. (x_1, y_1), . . . , (x_k, y_k) are chosen such that det(f(x_i, y_j)) is maximal among all possible choices of the points. This is of course not practical, and since one wants to keep the pivots of the previous steps untouched, a good alternative is partial pivoting. Here the positions in one direction are chosen by some strategy (for example, completely at random) and in the second direction the maximum in modulus of the remainder is taken on a line. This is still inexpensive and leads to good numerical results, see BIB003 BIB002 BIB004 . In practice it is expensive to update the whole remainder at each iteration step. As described in BIB003 BIB002 , an approximation of the form S_k(x, y) = Σ_{i=1}^{k} α_i u_i(x) v_i(y) can be obtained without updating the whole remainder. If we set u_k(x) = R_{k-1}(x, y_k) and v_k(y) = R_{k-1}(x_k, y), then using (17) we get S_k(x, y) = Σ_{i=1}^{k} u_i(x) v_i(y) / δ_i, with δ_i = u_i(x_i) = R_{i-1}(x_i, y_i),
where we realize that it has the desired tensor product structure of separated variables (compare with (16)). By a similar calculation one can even derive explicit formulas for the vectors u_k and v_k in the decomposition; see also BIB003 . For matrices that means instead of storing n^2 values, one needs to call the function fewer than 2kn times and store this reduced amount of data. Here we describe the construction of a rank-k approximation of the matrix A = (a_ij) using ACA (described in BIB003 BIB002 ).
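The recursion (17) and the rank property are easy to experiment with numerically. The following sketch (the function f and the pivot locations are arbitrary choices of ours, purely for illustration) applies two cross approximation steps to a function of separation rank 2 and checks that the remainder vanishes up to rounding error:

```python
import math

def cross_step(R, xk, yk):
    """One step of 2D cross approximation: subtract from the remainder R
    the rank-1 cross through the pivot (xk, yk), as in recursion (17)."""
    d = R(xk, yk)  # pivot value, must be nonzero
    return lambda x, y: R(x, y) - R(x, yk) * R(xk, y) / d

# A function of separation rank 2 (a sum of two tensor products).
f = lambda x, y: math.exp(x) * math.sin(y) + x * y

# Two steps with hand-picked (hypothetical) pivots where R is nonzero.
R = f
for (xk, yk) in [(0.3, 0.7), (0.9, 0.2)]:
    R = cross_step(R, xk, yk)

# Rank property: after 2 steps the remainder is zero up to rounding.
err = max(abs(R(i / 10, j / 10)) for i in range(11) for j in range(11))
print(err)  # close to machine precision
```

On the chosen crosses (x = 0.3, for instance) the remainder is exactly zero, illustrating the interpolation property as well.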
Literature survey on low rank approximation of matrices <s> Details and operation count of the algorithm <s> Summary. This article considers the problem of approximating a general asymptotically smooth function in two variables, typically arising in integral formulations of boundary value problems, by a sum of products of two functions in one variable. From these results an iterative algorithm for the low-rank approximation of blocks of large unstructured matrices generated by asymptotically smooth functions is developed. This algorithm uses only few entries from the original block ::: and since it has a natural stopping criterion the approximative rank is not needed in advance. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> Details and operation count of the algorithm <s> It is well known that the classical boundary-element method (BEM) yields fully populated matrices. Their manipulation is cumbersome with respect to memory consumption and computational costs. This paper describes a novel approach where the matrices are split into collections of blocks of various sizes. Those blocks which describe remote interactions are adaptively approximated by low rank submatrices. This procedure reduces the algorithmic complexity for matrix setup and matrix-by-vector products to approximately O(N). The proposed method has been examined in a testing environment and implemented into an existing BEM-finite-element method (FEM) code for electromagnetic and electromechanical problems. The advantages of the new method are demonstrated by means of several examples. <s> BIB002 </s> Literature survey on low rank approximation of matrices <s> Details and operation count of the algorithm <s> This article deals with the solution of integral equations using collocation methods with almost linear complexity. Methods such as fast multipole, panel clustering and H-matrix methods gain their efficiency from approximating the kernel function. 
The proposed algorithm which uses the H-matrix format, in contrast, is purely algebraic and relies on a small part of the collocation matrix for its blockwise approximation by low-rank matrices. Furthermore, a new algorithm for matrix partitioning that significantly reduces the number of blocks generated is presented. <s> BIB003 </s> Literature survey on low rank approximation of matrices <s> Details and operation count of the algorithm <s> This paper presents the adaptive cross approximation (ACA) algorithm to reduce memory and CPU time overhead in the method of moments (MoM) solution of surface integral equations. The present algorithm is purely algebraic; hence, its formulation and implementation are integral equation kernel (Green's function) independent. The algorithm starts with a multilevel partitioning of the computational domain. The interactions of well-separated partitioning clusters are accounted through a rank-revealing LU decomposition. The acceleration and memory savings of ACA come from the partial assembly of the rank-deficient interaction submatrices. It has been demonstrated that the ACA algorithm results in O(NlogN) complexity (where N is the number of unknowns) when applied to static and electrically small electromagnetic problems. In this paper the ACA algorithm is extended to electromagnetic compatibility-related problems of moderate electrical size. Specifically, the ACA algorithm is used to study compact-range ground planes and electromagnetic interference and shielding in vehicles. Through numerical experiments, it is concluded that for moderate electrical size problems the memory and CPU time requirements for the ACA algorithm scale as N/sup 4/3/logN. <s> BIB004 </s> Literature survey on low rank approximation of matrices <s> Details and operation count of the algorithm <s> Boundary Integral Equations.- Boundary Element Methods.- Approximation of Boundary Element Matrices.- Implementation and Numerical Examples. 
<s> BIB005 </s> Literature survey on low rank approximation of matrices <s> Details and operation count of the algorithm <s> We consider Tucker-like approximations with an $r \times r \times r$ core tensor for three-dimensional $n \times n \times n$ arrays in the case of $r \ll n$ and possibly very large $n$ (up to $10^4$-$10^6$). As the approximation contains only $\mathcal{O}(rn + r^3)$ parameters, it is natural to ask if it can be computed using only a small amount of entries of the given array. A similar question for matrices (two-dimensional tensors) was asked and positively answered in [S. A. Goreinov, E. E. Tyrtyshnikov, and N. L. Zamarashkin, A theory of pseudo-skeleton approximations, Linear Algebra Appl., 261 (1997), pp. 1-21]. In the present paper we extend the positive answer to the case of three-dimensional tensors. More specifically, it is shown that if the tensor admits a good Tucker approximation for some (small) rank $r$, then this approximation can be computed using only $\mathcal{O}(nr)$ entries with $\mathcal{O}(nr^{3})$ complexity. <s> BIB006 </s> Literature survey on low rank approximation of matrices <s> Details and operation count of the algorithm <s> A recently developed cross 3d algorithm is applied to approximation of the electron density function. The algorithm is proved to be fast and reliable on a sample of quantum chemistry data produced by the MOLPRO package. AMS Subject Classification: 65F30, 65F50, 65N35, 65F10 <s> BIB007 </s> Literature survey on low rank approximation of matrices <s> Details and operation count of the algorithm <s> In this article we present and analyze a new scheme for the approximation of multivariate functions (d=3,4) by sums of products of univariate functions. The method is based on the Adaptive Cross Approximation (ACA) initially designed for the approximation of bivariate functions. 
To demonstrate the linear complexity of the schemes, we apply it to large-scale multidimensional arrays generated by the evaluation of functions. <s> BIB008 </s> Literature survey on low rank approximation of matrices <s> Details and operation count of the algorithm <s> In this article we present a generalized version of the Cross Approximation for 3d-tensors. The given tensor $${a\in\mathbb{R}^{n\times n\times n}}$$ is represented as a matrix of vectors and 2d adaptive Cross Approximation is applied in a nested way to get the tensor decomposition. The main focus lies on theoretical issues of the construction such as the desired interpolation property or the explicit formulas for the vectors in the decomposition. The computational complexity of the proposed algorithm is shown to be linear in n. <s> BIB009 </s> Literature survey on low rank approximation of matrices <s> Details and operation count of the algorithm <s> This article presents a fast solver for the dense "frontal" matrices that arise from the multifrontal sparse elimination process of 3D elliptic PDEs. The solver relies on the fact that these matrices can be efficiently represented as a hierarchically off-diagonal low-rank (HODLR) matrix. To construct the low-rank approximation of the off-diagonal blocks, we propose a new pseudo-skeleton scheme, the boundary distance low-rank approximation, that picks rows and columns based on the location of their corresponding vertices in the sparse matrix graph. We compare this new low-rank approximation method to the adaptive cross approximation (ACA) algorithm and show that it achieves better speedup specially for unstructured meshes. Using the HODLR direct solver as a preconditioner (with a low tolerance) to the GMRES iterative scheme, we can reach machine accuracy much faster than a conventional LU solver. Numerical benchmarks are provided for frontal matrices arising from 3D finite element problems corresponding to a wide range of applications. <s> BIB010
We choose y_1 randomly at the beginning. In the first step we choose the column vector u_1 with entries u_1(i) = A(i, y_1), i = 1, 2, .., n and find the maximum element index of |u_1|. Let the maximum element index of |u_1| be x_1 and let δ_1 = u_1(x_1). Now find the corresponding row vector v_1 with entries v_1(i) = A(x_1, i), i = 1, 2, .., n. Now we find the maximum element index of |v_1| (the index should not be y_1) and let it be y_2. In the second step (here we know the pivot element y_2), we find the vector u_2 with entries u_2(i) = A(i, y_2) − (u_1(i)v_1(y_2))/δ_1, i = 1, 2, ..., n. Let the maximum element index in |u_2| (the index should not be x_1) be x_2 and let δ_2 = u_2(x_2). Now it is easy to find the vector v_2 with (see equation 21) v_2(i) = A(x_2, i) − (u_1(x_2)v_1(i))/δ_1, i = 1, 2, ..., n. So here we are doing n multiplications, n subtractions and 1 division for each of u_2 and v_2. The total number of operations at the second step is 4n + 2. Let y_3 be the maximum element index in |v_2|. In the third step, we find the vector u_3 corresponding to the known y_3. Therefore u_3(i) = A(i, y_3) − (u_1(i)v_1(y_3))/δ_1 − (u_2(i)v_2(y_3))/δ_2, i = 1, 2, .., n. Let the maximum element index of |u_3| be x_3 and denote δ_3 = u_3(x_3). Then the entries of v_3 are given by v_3(i) = A(x_3, i) − (u_1(x_3)v_1(i))/δ_1 − (u_2(x_3)v_2(i))/δ_2, i = 1, 2, .., n. So here we are doing 2n multiplications, n additions, n subtractions and 2 divisions for each of u_3 and v_3. The total number of operations at this step is 8n + 4. Similarly we can proceed for the other chosen rows/columns. Therefore at the k'th step, we do (k − 1)(4n + 2) operations.
The total number of operations in the algorithm is 0 + (4n + 2) + (8n + 4) + · · · + (k − 1)(4n + 2) = (4n + 2)(1 + 2 + · · · + (k − 1)) = (4n + 2)(k − 1)k/2 = (2n + 1)(k^2 − k). Therefore the complexity of this algorithm is O(k^2 n), which is linear in n (k ≪ n). In particular, if a matrix is of dimension m × n, the number of operations required to construct the approximation is O(k^2 (m + n)), while the storage required for the approximant S_k is of order k(m + n). An algorithm called Cross-2D has been described in BIB006 . The extension of adaptive cross approximation to higher order tensors has been discussed in BIB008 BIB009 , and different applications of ACA can be found in BIB010 BIB001 BIB003 BIB007 BIB002 BIB005 BIB004 . We conclude this section with a remark on the performance of cross/skeleton approximation techniques on matrices with nonsmooth data. Such matrices generally arise from the discretization of singular functions in some applications BIB008 BIB001 BIB006 . The cross/skeleton approximation algorithms require larger ranks to approximate matrices with nonsmooth data BIB008 BIB006 than the ranks required for matrices with smooth data.
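The step-by-step construction above translates directly into code. The following Python sketch (function names and the rank-2 test matrix are our illustrative choices, not taken from the surveyed papers) implements rank-k ACA with partial pivoting for a matrix accessed only entry-wise, so it evaluates about k(n + m) entries rather than all nm:

```python
import random

def aca(entry, n, m, k):
    """Rank-k adaptive cross approximation of an n-by-m matrix accessed
    only via entry(i, j), with partial pivoting as described in the text:
    A ~ sum_l u_l(i) v_l(j) / delta_l."""
    us, vs, deltas = [], [], []
    y = random.randrange(m)                       # random starting column
    used_rows, used_cols = set(), set()
    for _ in range(k):
        used_cols.add(y)
        # u = column y of the current remainder
        u = [entry(i, y) - sum(us[l][i] * vs[l][y] / deltas[l]
                               for l in range(len(us)))
             for i in range(n)]
        x = max((i for i in range(n) if i not in used_rows),
                key=lambda i: abs(u[i]))          # row pivot: max in modulus
        used_rows.add(x)
        delta = u[x]
        if delta == 0:                            # remainder vanished early
            break
        # v = row x of the current remainder
        v = [entry(x, j) - sum(us[l][x] * vs[l][j] / deltas[l]
                               for l in range(len(us)))
             for j in range(m)]
        us.append(u); vs.append(v); deltas.append(delta)
        y = max((j for j in range(m) if j not in used_cols),
                key=lambda j: abs(v[j]))          # next column pivot

    def approx(i, j):                             # S_k(i, j)
        return sum(us[l][i] * vs[l][j] / deltas[l] for l in range(len(us)))
    return approx

# A matrix of separation rank 2; by the rank property, k = 2 recovers it.
entry = lambda i, j: i * j + (i + 1) * (j + 2)
approx = aca(entry, 8, 8, 2)
err = max(abs(entry(i, j) - approx(i, j))
          for i in range(8) for j in range(8))
```

Each step touches one column and one row of the remainder, which is exactly the (k − 1)(4n + 2) operation count derived above.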
Verifying Linearisability: A Comparative Survey <s> INTRODUCTION <s> A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> INTRODUCTION <s> List-based implementations of sets are a fundamental building block of many concurrent algorithms. A skiplist based on the lock-free list-based set algorithm of Michael will be included in the JavaTM Concurrency Package of JDK 1.6.0. However, Michael's lock-free algorithm has several drawbacks, most notably that it requires all list traversal operations, including membership tests, to perform cleanup operations of logically removed nodes, and that it uses the equivalent of an atomically markable reference, a pointer that can be atomically "marked," which is expensive in some languages and unavailable in others. We present a novel "lazy" list-based implementation of a concurrent set object. It is based on an optimistic locking scheme for inserts and removes, eliminating the need to use the equivalent of an atomically markable reference.
It also has a novel wait-free membership test operation (as opposed to Michael's lock-free one) that does not need to perform cleanup operations and is more efficient than that of all previous algorithms. Empirical testing shows that the new lazy-list algorithm consistently outperforms all known algorithms, including Michael's lock-free algorithm, throughout the concurrency range. At high load, with 90% membership tests, the lazy algorithm is more than twice as fast as Michael's. This is encouraging given that typical search structure usage patterns include around 90% membership tests. By replacing the lock-free membership test of Michael's algorithm with our new wait-free one, we achieve an algorithm that slightly outperforms our new lazy-list (though it may not be as efficient in other contexts as it uses Java's RTTI mechanism to create pointers that can be atomically marked). <s> BIB002
Highly optimised fine-grained concurrent algorithms are increasingly being used to implement concurrent objects for modern multi/many-core applications due to the performance advantages they provide over their coarse-grained counterparts. Due to their complexity, correctness of such algorithms is notoriously difficult to judge. Formal verification has uncovered subtle bugs in published algorithms that were previously considered correct. The main correctness criterion for concurrent algorithms is linearisability, which defines consistency for the history of invocation and response events generated by an execution of the algorithm at hand BIB001 . (This research is supported by EPSRC Grant EP/J003727/1. Authors' addresses: B. Dongol, Department of Computer Science, Brunel University London, UK; email: Brijesh.Dongol@brunel.ac.uk; J. Derrick, Department of Computer Science, University of Sheffield, UK; email: j.derrick@dcs.shef.ac.uk. © 2015 ACM 0360-0300/2015/09-ART19 $15.00 DOI: http://dx.doi.org/10.1145/2796550) Linearisability requires every operation call to take effect at some point between its invocation and response events.
Thus, concurrent operation calls may take effect in any order, but nonoverlapping operation calls must take effect in their real-time order. A (concurrent) history is linearisable if and only if there is some order for the effects of the operation calls that corresponds to a valid sequential history, where valid means that the sequential history can be generated by an execution of the sequential specification object. A concurrent object is linearisable if and only if each of its histories is linearisable. Scalability of the proof methods for verifying linearisability remains a challenge, and hence an immense amount of research effort has been devoted to this problem. Unfortunately, each new method is developed with respect to a specialised formal framework, making it difficult to judge the merits of the different proof methods. Therefore, we present a comparative survey of the major techniques for verifying linearisability to examine the advantages of each method. We aim to make our comparison comprehensive, but with the scale of development in this area, it is inevitable that some published methods for verifying linearisability will be left out. Our survey does not aim to be comprehensive about fine-grained algorithms, or about the sorts of properties that these algorithms possess; for this, and are already excellent resources. Instead, this survey is aimed at improving one's understanding of the fundamental challenges of linearisability verification and identifying avenues of future work. Several questions can be asked about the different methods. Most verification techniques involve identification of a linearisation point for each operation, which is an atomic statement of the algorithm implementing the concurrent object whose execution causes the effect of the operation to take place; that is, executing a linearisation point has the same effect as executing the corresponding abstract operation.
It turns out that identification of linearisation points is a nontrivial task. Some algorithms have simple fixed linearisation points, others have external linearisation points that are determined by the execution of other operations, while yet more complex algorithms have external linearisation points that potentially modify the state representation of the concurrent object. We therefore consider three case studies for comparison that are increasingly more difficult to verify: (1) an optimistic set with operations add and remove, both of which have fixed linearisation points; (2) a lazy set BIB002 , which is the optimistic set together with a wait-free contains operation that may be linearised externally; and (3) Herlihy and Wing's array-based queue BIB001 , with future-dependent linearisation points. This article is structured as follows. In Section 2, we present the intuition behind linearisability as well as its formal definition using Herlihy and Wing's original nomenclature. In Section 3, we present an overview of the different methods that have been developed for verifying linearisability, which includes simulation, data refinement, auxiliary variables, shape analysis, etc. Sections 4, 5, and 6 present our case studies, where we consider algorithms for each of the different types of linearisation points.
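The definition of a linearisable history can be made concrete with a small brute-force checker: given a history of completed operation calls with invocation/response timestamps and a sequential specification, search for a witness order that respects real-time precedence and replays validly against the spec. The Python sketch below (the representation and the toy stack spec are our own illustrative choices; practical verification tools are far more sophisticated) exhibits the definition rather than a scalable method:

```python
from itertools import permutations

def linearisable(ops, init, apply_op):
    """Brute-force linearisability check for a history of completed calls.
    Each op is (name, arg, ret, inv_time, res_time); apply_op replays one
    call against the sequential spec, returning (new_state, ret)."""
    def precedes(a, b):          # real-time order: a returned before b began
        return a[4] < b[3]
    for order in permutations(ops):
        if any(precedes(order[j], order[i])
               for i in range(len(order)) for j in range(i + 1, len(order))):
            continue             # this order violates real-time precedence
        state, ok = init, True
        for (name, arg, ret, _, _) in order:
            state, r = apply_op(state, name, arg)
            if r != ret:
                ok = False       # spec would return a different value
                break
        if ok:
            return True          # found a valid sequential witness
    return False

# Sequential stack spec: state is a tuple, push returns None, pop the top.
def stack_spec(state, name, arg):
    if name == "push":
        return (arg,) + state, None
    return (state[1:], state[0]) if state else (state, "empty")

# Overlapping push(a), push(b), then a pop returning b: linearisable.
h = [("push", "a", None, 0, 3), ("push", "b", None, 1, 2),
     ("pop", None, "b", 4, 5)]
print(linearisable(h, (), stack_spec))  # True
```

A history where a pop returns "empty" strictly after two completed pushes is rejected, since no real-time-respecting order replays validly.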
Verifying Linearisability: A Comparative Survey <s> Example: The Treiber Stack <s> A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> Example: The Treiber Stack <s> Linearizability is one of the main correctness criteria for implementations of concurrent data structures. A data structure is linearizable if its operations appear to execute atomically. Verifying linearizability of concurrent unbounded linked data structures is a challenging problem because it requires correlating executions that manipulate (unbounded-size) memory states. We present a static analysis for verifying linearizability of concurrent unbounded linked data structures. The novel aspect of our approach is the ability to prove that two (unboundedsize) memory layouts of two programs are isomorphic in the presence of abstraction. A prototype implementation of the analysis verified the linearizability of several published concurrent data structures implemented by singly-linked lists. <s> BIB002
Figure 1 presents a simple nonblocking stack example due to Treiber, which has become a standard case study in the literature. The version we use assumes garbage collection to avoid the so-called ABA problem, where changes to shared pointers may go undetected due to the value changing from some value A to another value B then back to A. Without garbage collection, additional complexities such as version numbers for pointers must be introduced; such details are elided in this article. Treiber's stack algorithm (Figure 1) implements the abstract stack in Figure 2, where brackets "⟨" and "⟩" are used to delimit sequences, "⟨⟩" to denote the empty sequence, and "⌢" to denote sequence concatenation. The abstract stack consists of a shared sequence of elements S together with two operations push (which pushes its input v ≠ empty onto the top of S) and pop (which returns empty and leaves S unchanged when S is empty, and removes one element from the top of S and returns this top element otherwise). Concurrent data structures (or, more generally, concurrent objects) are typically realised as part of a system library and instantiated in a client program, and thus the operations are assumed to be invoked by client processes. For reasoning purposes, one typically thinks of an object as being executed by a most general client, which ignores the behaviour of the clients themselves. A most general client formalises Herlihy and Wing's BIB001 requirement that each process calls at most one operation of the object it uses at a time. For example, a most general client process of a stack BIB002 is given in Figure 3, where the ? test in the if is used to model nondeterministic choice and rand() is assumed to return a randomly chosen nonempty element. Usage of a most general client for verification was, however, proposed in much earlier work. The implementation (Figure 1) has fine-grained atomicity.
Synchronisation is achieved using an atomic Compare-and-Swap (CAS) operation, which takes as input a (shared) variable gv, an expected value lv, and a new value nv. CAS(gv, lv, nv) = atomic { if (gv = lv) then gv := nv ; return true else return false }. In a single atomic step, the CAS operation compares gv to lv, potentially updates gv to nv, and returns a boolean. In particular, if gv = lv, it updates gv to nv and returns true (to indicate that the update was successful), otherwise it leaves everything unchanged and returns false. The CAS instruction is natively supported by most mainstream hardware architectures. Operations that use CAS typically have a try-retry structure with a loop that stores (shared variable) gv locally in lv, performs some calculations on lv to obtain nv (a new value for gv), then uses a CAS to attempt an update to gv. If the CAS fails, there must have been some interference on gv since it was stored locally at the start of the loop, and in this case the operation retries by rereading gv. We now explain the (concrete) program in Figure 1 , whose operations both have the try-retry structure explained previously. The concrete push operation first creates a new node with the value to be pushed onto the stack (H1 and H2). It then repeatedly sets a local variable ss to Head (H4) and the pointer of the new node to ss (H5) until the CAS succeeds (H6), which means Head (still) equalled ss and has atomically been set to the new node n (H6). Note that the CAS in push does not necessarily succeed: in the case of a concurrent push or pop operation, Head might have been changed between taking the snapshot of Head at H4 and execution of the CAS at H6. The concrete pop operation has a similar structure: it records the value of Head in ss (P2), and returns empty if ss = null (P4). Otherwise, the next node is stored in ssn (P5), the return value is stored in lv (P6), and a CAS is executed to attempt to update Head (P7). 
If this CAS succeeds, the pop takes effect and the output value lv is returned (P8); otherwise, pop repeats its steps, loading a new value of Head. The linearisation points of the Treiber stack are as follows. The push operation linearises when the CAS at H6 is successful, as this is the transition that adds an element onto the top of the stack. The pop operation has two linearisation points depending on the value returned: if the stack is empty, the linearisation point is the statement labelled P2, when Head = null is read; otherwise, the linearisation point is a successful execution of the CAS at P7. Note that P3 is not a linearisation point for an empty stack, as the test only checks the local variable ss: the global variable Head might be nonnull again at this point. Notice also that this example illustrates the fact that different statements may qualify as a linearisation point depending on the values returned. In the pop operation, the location of the linearisation point depends on whether or not the stack is empty. A possible execution of the Treiber stack (by a most general client) is given in Figure 4, which depicts invocation (e.g., push^I_p(b)), response (e.g., push^R_p), and internal transitions of the operations push_p(a), push_q(b), and pop_r: b, by processes p, q, and r. A cross on a transition arrow is used to denote the linearisation points. Although the three operations execute concurrently by interleaving their statements, the order of linearisation points allows one to determine a sequential order for the operations. Importantly, this order conforms to a valid execution of the stack from Figure 2.
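The try-retry structure described above can be mirrored in a few lines of Python. Since Python has no native CAS instruction, the sketch below (entirely our own model, with the labels H1–H6 and P2–P8 mapped onto the corresponding steps) stands in for CAS with a lock guarding only the compare-and-swap itself; everything around it is unsynchronised, as in the algorithm, and relying on Python's garbage collector mirrors the garbage-collection assumption that sidesteps the ABA problem:

```python
import random
import threading

class _Node:
    __slots__ = ("val", "next")
    def __init__(self, val, nxt):
        self.val, self.next = val, nxt

class TreiberStack:
    """A sketch of Treiber's nonblocking stack; one lock models the
    atomicity of a single hardware CAS step."""
    def __init__(self):
        self.head = None
        self._cas_lock = threading.Lock()

    def _cas(self, expected, new):
        with self._cas_lock:          # one atomic CAS step
            if self.head is expected:
                self.head = new
                return True
            return False

    def push(self, v):
        n = _Node(v, None)            # H1, H2: create the new node
        while True:
            ss = self.head            # H4: snapshot Head
            n.next = ss               # H5: link the node to the snapshot
            if self._cas(ss, n):      # H6: attempt to swing Head
                return

    def pop(self):
        while True:
            ss = self.head            # P2: snapshot Head
            if ss is None:
                return "empty"        # P4: empty stack
            ssn = ss.next             # P5: successor node
            lv = ss.val               # P6: candidate return value
            if self._cas(ss, ssn):    # P7: attempt to swing Head
                return lv             # P8: pop took effect

# Most general client: each process calls one operation at a time.
s = TreiberStack()
def client():
    for _ in range(200):
        if random.random() < 0.5:
            s.push(random.random())
        else:
            s.pop()
threads = [threading.Thread(target=client) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

On a failed CAS the operation simply rereads Head and retries, which is exactly the interference scenario discussed for H4–H6 and P2–P7 above.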
Verifying Linearisability: A Comparative Survey <s> Formalising Linearisability <s> A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> Formalising Linearisability <s> Concurrent objects are inherently complex to verify. In the late 80s and early 90s, Herlihy and Wing proposed linearizability as a correctness condition for concurrent objects, which, once proven, allows us to reason about concurrent objects using pre- and postconditions only. A concurrent object is linearizable if all of its operations appear to take effect instantaneously some time between their invocation and return. In this article we define simulation-based proof conditions for linearizability and apply them to two concurrent implementations, a lock-free stack and a set with lock-coupling. Similar to other approaches, we employ a theorem prover (here, KIV) to mechanize our proofs. Contrary to other approaches, we also use the prover to mechanically check that our proof obligations actually guarantee linearizability. This check employs the original ideas of Herlihy and Wing of verifying linearizability via possibilities. 
<s> BIB002
Although we have motivated our discussion of linearisability in terms of the order of linearisation points, and these being consistent with an abstract counterpart, we have to relate this view to what is observable in a program. In particular, what is taken to be observable are the histories, which are sequences of invocation and response events of operation calls on an object. This represents the interaction between an object and its client via the object's external interface. Thus, in Figure 4, the internal transitions (including linearisation points) are not observable. Each observable event records the calling process (of type P), the operation that is executed (of type O), and any input/output parameters of the event (of type V). Thus, we define an event to be either an invocation inv(p, op, x) or a return ret(p, op, r) BIB002. For brevity, we use notation op I p (x) and op R p : r for events inv(p, op, x) and ret(p, op, r), respectively, and use op I p and op R p to respectively denote invocation and return events with no inputs or outputs. For an event e = (p, op, x), we assume the existence of projection functions π i (e) that return the ith component of a tuple, for example, π 2 (e) = op. The definition of linearisability is formalised in terms of the history of events, which is represented formally by a sequence. Namely, assuming seq(X) denotes sequences of type X indexed from 0 onward, a history is an element of History = seq(Event), that is, a history is a sequence of events. To motivate linearisability in terms of histories, consider a history h1 of a concurrent stack in which, starting from an empty stack, processes p and q execute overlapping operations push p (a) and push q (b). Since p and q are concurrent, the operation calls may be linearised in either order, that is, there are two valid linearisations, hs1 and hs2. Assuming execution starts with an empty stack, the abstract stack is b, a (with b at the top) at the end of hs1 and a, b at the end of hs2. Now suppose history h1 is extended with a sequential pop operation returning b, resulting in history h2, and h2 is in turn extended with two sequential pop operations, each returning a, resulting in history h3.
History h3 cannot be linearised by any sequential stack history: the only possible stack at the end of h2 is a, yet the additional events in h3 are for two pop operations, both of which are successfully able to remove a from the stack. A concurrent stack that generates h3 would therefore be deemed incorrect. By proving the Treiber stack is linearisable, one can be assured that a history such as h3 is never generated by the algorithm. We now give some preliminary definitions for linearisability. For h ∈ History, let h|p denote the subsequence of h consisting of all invocation and response events for process p. Two histories h1, h2 are equivalent if for all processes p, h1|p = h2|p. An invocation is pending in a history h if and only if there is no matching response to the invocation in h. We say the invocation is complete in h if and only if it is not pending in h. We let complete(h) denote the maximal subsequence of history h consisting of all (completed) invocations and their matching responses in h, that is, the history obtained by removing all pending invocations within h. For a history h, let < h be an irreflexive partial order on operations, where opi < h opj if and only if the response event of opi occurs before the invocation event of opj in h. A history h is sequential if and only if the first element of h is an invocation and each invocation (except possibly the last) is immediately followed by its matching response. We say that h is well formed if and only if, for each process p, the subhistory h|p is sequential. For the rest of this article, we assume the objects in question are executed by a most general client, and hence, that the histories in question are well formed. Definition 2.1 (Linearisability BIB001). A history hc is linearisable with respect to a sequential history hs if and only if hc can be extended to a history hc' by adding zero or more matching responses to pending invocations such that complete(hc') is equivalent to hs and < hc' ⊆ < hs.
We simply say hc is linearisable if there exists a history hs such that hc is linearisable with respect to hs. Note that Definition 2.1 allows histories to be extended with matching responses to pending invocations. This is necessary because some pending operation may have executed its linearisation point, but not yet responded. For example, consider the history push I p (x), pop I q, pop R q : x, (1) where the stack is initially empty. The linearisation point of push I p (x) has clearly been executed in (1) because pop q returns x, but (1) is incomplete because the push p is still pending. To cope with such scenarios, by the definition of linearisability, (1) may be extended with a matching response to push I p (x), and the extended history mapped to the following sequential history: push I p (x), push R p, pop I q, pop R q : x. We have defined linearisability for concurrent histories. The purpose of linearisability, however, is to define correctness of concurrent objects with respect to some abstract specification. Thus, the definition is lifted to the level of objects as follows. Definition 2.2. A concurrent object is linearisable with respect to a sequential abstract specification if and only if for any legal history hc of the concurrent object, there exists a sequential history hs of the abstract specification such that hc is linearisable with respect to hs.
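The preliminary notions above (the projection h|p, equivalence of histories, and complete(h)) can be sketched directly. The tuple encoding of events below is our own illustrative choice, not the paper's notation.

```python
# An illustrative encoding of histories: an event is ('inv', p, op, x)
# for an invocation or ('ret', p, op, r) for a (matching) response.

def proj(h, p):
    """h|p: the subsequence of h consisting of all events of process p."""
    return [e for e in h if e[1] == p]

def equivalent(h1, h2):
    """h1 and h2 are equivalent iff h1|p = h2|p for every process p."""
    procs = {e[1] for e in h1} | {e[1] for e in h2}
    return all(proj(h1, p) == proj(h2, p) for p in procs)

def complete(h):
    """Remove pending invocations. For a well-formed history (per-process
    alternation of invocations and responses), a process has at most one
    pending invocation, and it is that process's last event."""
    last = {}
    for i, e in enumerate(h):
        last[e[1]] = i                       # index of each process's last event
    pending = {i for i in last.values() if h[i][0] == 'inv'}
    return [e for i, e in enumerate(h) if i not in pending]

# History (1) from the text: push_p(x) is still pending, yet pop_q returns x.
h1 = [('inv', 'p', 'push', 'x'), ('inv', 'q', 'pop', None), ('ret', 'q', 'pop', 'x')]
assert complete(h1) == [('inv', 'q', 'pop', None), ('ret', 'q', 'pop', 'x')]
```

Note that complete(h1) discards the pending push; Definition 2.1 instead allows (1) to be extended with the matching response push R p before completing, so that the push is retained and the extended history can be matched against a sequential one.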
Verifying Linearisability: A Comparative Survey <s> Linearisability and Observational Refinement <s> Concurrent data structures are usually designed to satisfy correctness conditions such as sequential consistency or linearizability. In this paper, we consider the following fundamental question: What guarantees are provided by these conditions for client programs? We formally show that these conditions can be characterized in terms of observational refinement. Our study also provides a new understanding of sequential consistency and linearizability in terms of abstraction of dependency between computation steps of client programs. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> Linearisability and Observational Refinement <s> Fine-grained concurrent data structures are crucial for gaining performance from multiprocessing, but their design is a subtle art. Recent literature has made large strides in verifying these data structures, using either atomicity refinement or separation logic with rely-guarantee reasoning. In this paper we show how the ownership discipline of separation logic can be used to enable atomicity refinement, and we develop a new rely-guarantee method that is localized to the definition of a data structure. We present the first semantics of separation logic that is sensitive to atomicity, and show how to control this sensitivity through ownership. The result is a logic that enables compositional reasoning about atomicity and interference, even for programs that use fine-grained synchronization and dynamic memory allocation. <s> BIB002 </s> Verifying Linearisability: A Comparative Survey <s> Linearisability and Observational Refinement <s> Verifying program transformations usually requires proving that the resulting program (the target) refines or is equivalent to the original one (the source). 
However, the refinement relation between individual sequential threads cannot be preserved in general with the presence of parallel compositions, due to instruction reordering and the different granularities of atomic operations at the source and the target. On the other hand, the refinement relation defined based on fully abstract semantics of concurrent programs assumes arbitrary parallel environments, which is too strong and cannot be satisfied by many well-known transformations. In this paper, we propose a Rely-Guarantee-based Simulation (RGSim) to verify concurrent program transformations. The relation is parametrized with constraints of the environments that the source and the target programs may compose with. It considers the interference between threads and their environments, thus is less permissive than relations over sequential programs. It is compositional w.r.t. parallel compositions as long as the constraints are satisfied. Also, RGSim does not require semantics preservation under all environments, and can incorporate the assumptions about environments made by specific program transformations in the form of rely/guarantee conditions. We use RGSim to reason about optimizations and prove atomicity of concurrent objects. We also propose a general garbage collector verification framework based on RGSim, and verify the Boehm et al. concurrent mark-sweep GC. <s> BIB003
A missing link in linearisability theory is the connection between behaviours of objects and clients executing together. Namely, from a programmer's perspective, one may ask: How are the behaviours of a client that uses a sequential object SO related to those of the client when it uses a concurrent object CO instead, provided some correctness condition has been established between CO and SO? An answer to this question was given by BIB001, who consider concurrent object systems (which are collections of concurrent objects) and establish a link between linearisability and observational refinement. Their result covers data independent clients, that is, those that communicate only via their object systems, and states that a concurrent object system COS observationally refines a sequential object system AOS if and only if every object in COS is sequentially consistent with respect to its corresponding object in AOS, where -COS observationally refines AOS if and only if for any client program P parameterised by an object system, the observable states of P(COS) are a subset of the observable states of P(AOS), that is, P(COS) does not generate any new observations in comparison to P(AOS); and -COS is sequentially consistent with respect to AOS if and only if for every history hC of COS, there exists a sequential history hA such that the order of operation calls by the same process in hC is preserved in hA. It is well known that linearisability implies sequential consistency, and hence, if COS is linearisable with respect to AOS, then COS also observationally refines AOS for data independent clients. In addition, BIB001 show equivalence between linearisability and observational refinement for clients that share data, that is, that COS observationally refines AOS if and only if COS is linearisable with respect to AOS, where the definition of linearisability is suitably generalised to object systems.
Some authors have presented constructive methods for developing fine-grained objects, dispensing with linearisability as a proof obligation BIB002 BIB003 . Instead, they focus on maintenance of the observable behaviour of the abstract object directly. A survey of techniques for verifying observational refinement lies outside the scope of this article.
Verifying Linearisability: A Comparative Survey <s> 3 <s> When proving that a parallel program has a given property it is often convenient to assume that a statement is indivisible, i.e. that the statement cannot be interleaved with the rest of the program. Here sufficient conditions are obtained to show that the assumption that a statement is indivisible can be relaxed and still preserve properties such as halting. Thus correctness proofs of a parallel system can often be greatly simplified. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> 3 <s> Development methods for (sequential) programs that run in isolation have been studied elsewhere. Programs that run in parallel can interfere with each other, either via shared storage or by sending messages. Extensions to earlier development methods are proposed for the rigorous development of interfering programs. In particular, extensions to the specification method based on postconditions that are predicates of two states and the development methods of operation decomposition and data refinement are proposed. 41 references. <s> BIB002 </s> Verifying Linearisability: A Comparative Survey <s> 3 <s> We present a compositional methodology for specification and proof using Interval Temporal Logic (ITL). After given an introduction to ITL, we show how fixpoints of various ITL operators provide a flexible way to modularly reason about safety and liveness. In addition, some new techniques are described for compositionally transforming and refining ITL specifications We also consider the use of ITL's programming language subset Tempura as a tool for testing the kinds of specifications dealt with here. <s> BIB003 </s> Verifying Linearisability: A Comparative Survey <s> 3 <s> Interval Temporal Logic (ITL) is a formalism for reasoning about time periods. 
To date no one has proved completeness of a relatively simple ITL deductive system supporting infinite time and permitting infinite sequential iteration comparable to ω -regular expressions. We give a complete axiomatization for such a version of quantified ITL over finite domains and can show completeness by representing finite-state automata in ITL and then translating ITL formulas into them. The full paper (and another conference paper) presents the basic framework for finite time. Here and in the full paper the axiom system (and completeness) is extended to infinite time. <s> BIB004 </s> Verifying Linearisability: A Comparative Survey <s> 3 <s> We describe an extension of Hoare's logic for reasoning about programs that alter data structures. We consider a low-level storage model based on a heap with associated lookup, update, allocation and deallocation operations, and unrestricted address arithmetic. The assertion language is based on a possible worlds model of the logic of bunched implications, and includes spatial conjunction and implication connectives alongside those of classical logic. Heap operations are axiomatized using what we call the "small axioms", each of which mentions only those cells accessed by a particular command. Through these and a number of examples we show that the formalism supports local reasoning: A specification and proof can concentrate on only those cells in memory that a program accesses. ::: ::: This paper builds on earlier work by Burstall, Reynolds, Ishtiaq and O'Hearn on reasoning about data structures. <s> BIB005 </s> Verifying Linearisability: A Comparative Survey <s> 3 <s> In joint work with Peter O'Hearn and others, based on early ideas of Burstall, we have developed an extension of Hoare logic that permits reasoning about low-level imperative programs that use shared mutable data structure. 
The simple imperative programming language is extended with commands (not expressions) for accessing and modifying shared structures, and for explicit allocation and deallocation of storage. Assertions are extended by introducing a "separating conjunction" that asserts that its subformulas hold for disjoint parts of the heap, and a closely related "separating implication". Coupled with the inductive definition of predicates on abstract data structures, this extension permits the concise and flexible description of structures with controlled sharing. In this paper, we survey the current development of this program logic, including extensions that permit unrestricted address arithmetic, dynamically allocated arrays, and recursive procedures. We also discuss promising future directions. <s> BIB006 </s> Verifying Linearisability: A Comparative Survey <s> 3 <s> Lock-free algorithms have been developed to avoid various problems associated with using locks to control access to shared data structures. These algorithms are typically more intricate than lock-based algorithms, as they allow more complex interactions between processes, and many published algorithms have turned out to contain errors. There is thus a pressing need for practical techniques for verifying lock-free algorithms and programs that use them. ::: ::: In this paper we show how Michael and Scott's well known lock-free queue algorithm can be verified using a trace reduction method, based on Lipton's reduction method. Michael and Scott's queue is an interesting case study because, although the basic idea is easy to understand, the actual algorithm is quite subtle, and it demonstrates several way in which the basic reduction method needs to be extended. <s> BIB007 </s> Verifying Linearisability: A Comparative Survey <s> 3 <s> Reduction methods developed by Lipton, Lamport, Cohen, and others, allow one to reason about concurrent programs at various levels of atomicity. 
An action which is considered to be atomic at one level may be implemented by more complex code at the next level. We can show that certain properties of the program are preserved by first showing that the property holds when the expanded code is executed sequentially, and then showing that any execution in which this code is executed concurrently with other processes is equivalent to an execution in which the expanded code is executed without interruption. Existing reduction methods are aimed at traditional approaches to concurrency which prevent interference between concurrent processes using mechanisms such as locks or semaphores. In this paper, we show that these reduction methods can be adapted to reason about nonblocking algorithms, which are designed to operate correctly in the presence of interference, rather than to avoid interference. These algorithms typically use strong synchronisation primitives, such as Load Linked/Store Conditional or Compare and Swap, to detect that interference has occurred and in that case retry their operations. We show that reduction can be used with such algorithms, and illustrate this approach with examples based on shared counters and stacks. <s> BIB008 </s> Verifying Linearisability: A Comparative Survey <s> 3 <s> Linearizability is a commonly accepted notion of correctness for libraries of concurrent algorithms. Unfortunately, it assumes a complete isolation between a library and its client, with interactions limited to passing values of a given data type. This is inappropriate for common programming languages, where libraries and their clients can communicate via the heap, transferring the ownership of data structures, and can even run in a shared address space without any memory protection.
In this paper, we present the first definition of linearizability that lifts this limitation and establish an Abstraction Theorem: while proving a property of a client of a concurrent library, we can soundly replace the library by its abstract implementation related to the original one by our generalisation of linearizability. This allows abstracting from the details of the library implementation while reasoning about the client. We also prove that linearizability with ownership transfer can be derived from the classical one if the library does not access some of the data structures transferred to it by the client. <s> BIB009
Compositional Frameworks. Compositional frameworks modularise reasoning about a concurrent program by capturing the behaviour of its environment abstractly. For shared-variable concurrency, a popular approach to compositionality is Jones' rely-guarantee framework BIB002, where a rely condition states assumptions about a component's environment, and a guarantee condition describes the behaviour of a component under the assumption that the rely condition holds. A detailed survey of different compositional verification techniques lies outside the scope of this article; we refer the interested reader to de Roever et al. . Reduction. Reduction enables one to ensure trace equivalence of the fine-grained implementation and its coarse-grained abstraction by verifying commutativity properties BIB001. For example, in a program S1; S2, if S2 performs purely local modifications, (S2 p ; T q ) = (T q ; S2 p ) will hold for any statement T and processes p, q such that p ≠ q. Therefore, S1; S2 in the program code may be treated as atomic{S1; S2}, which in turn enables coarse-grained atomic blocks to be constructed from finer-grained atomic statements in a manner that does not modify the global behaviour of the algorithm. After a reduction-based transformation, the remaining proof only needs to focus on verifying linearisability of the coarse-grained abstraction BIB008 BIB007 , which is simpler than verifying the original program because fewer statements need to be considered. Interval-Based Reasoning. Linearisability is a property over the intervals in which operations execute, requiring a linearisation point to occur at some point between the operation's invocation and response. Some methods exploit interval logics (for example, ITL BIB003 BIB004 ) to simplify reasoning. Here, a program's execution is treated as an interval predicate that defines the evolution of the system over time, as opposed to a relation that defines the individual transitions of the program.
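The commutativity property underlying reduction can be illustrated with a toy sketch. The state encoding and the statement names S2_p and T_q below are our own, chosen to mirror the example in the text: a purely local statement of process p commutes with any statement of a different process q.

```python
# States are finite maps from variable names to values. Variables owned by a
# process are prefixed with its name (an encoding chosen for illustration).

def S2_p(s):
    """A statement of process p that performs purely local modifications."""
    s = dict(s)
    s['p.lv'] = s['p.lv'] + 1
    return s

def T_q(s):
    """An arbitrary statement of a different process q: it may read and write
    the shared variable gv and q's own local state, but not p's."""
    s = dict(s)
    s['gv'] = s['gv'] * 2
    s['q.lv'] = s['gv']
    return s

s0 = {'gv': 3, 'p.lv': 0, 'q.lv': 0}
# Because S2_p touches only p-local state, the two interleavings coincide:
assert T_q(S2_p(s0)) == S2_p(T_q(s0))
```

Since every interleaving of S2_p with remote statements yields the same state as running S2_p without interruption, S1; S2 may soundly be collapsed into a single atomic block for the purposes of the linearisability proof.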
Separation Logic. Many linearisable objects are implemented using pointer-based structures such as linked lists. A well-known logic for reasoning about such implementations is separation logic BIB006 BIB005 , which uses a so-called separating conjunction operator to split the memory heap into disjoint partitions, then reasons about each of these individually. Such techniques enable localised reasoning over the part of the heap that is important for the assertions at hand. Of course, pointer-based structures are not the only application of separation logic in linearisability verification; for example, BIB009 use it to split the state spaces of an object and its clients. The methods we discuss in this article all use some combination of the preceding techniques. Prior to exploring these methods in detail, we first review the difficulties encountered when verifying linearisability.
Verifying Linearisability: A Comparative Survey <s> Difficulties in Verifying Linearisability <s> A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> Difficulties in Verifying Linearisability <s> Drawing ideas from previous authors, we present a new non-blocking concurrent queue algorithm and a new two-lock queue algorithm in which one enqueue and one dequeue can proceed concurrently. Both algorithms are simple, fast, and practical; we were surprised not to find them in the literature. Experiments on a 12-node SGI Challenge multiprocessor indicate that the new non-blocking queue consistently outperforms the best known alternatives; it is the clear algorithm of choice for machines that provide a universal atomic primitive (e.g., compare_and_swap or load_linked/store_conditional). The two-lock concurrent queue outperforms a single lock when several processes are competing simultaneously for access; it appears to be the algorithm of choice for busy queues on machines with non-universal atomic primitives (e.g., test_and_set). 
Since much of the motivation for non-blocking algorithms is rooted in their immunity to large, unpredictable delays in process execution, we report experimental results both for systems with dedicated processors and for systems with several processes multiprogrammed on each processor. <s> BIB002 </s> Verifying Linearisability: A Comparative Survey <s> Difficulties in Verifying Linearisability <s> Lock-free (non-blocking) shared data structures promise more robust performance and reliability than conventional lock-based implementations. However, all prior lock-free algorithms for sets and hash tables suffer from serious drawbacks that prevent or limit their use in practice. These drawbacks include size inflexibility, dependence on atomic primitives not supported on any current processor architecture, and dependence on highly-inefficient or blocking memory management techniques.Building on the results of prior researchers, this paper presents the first CAS-based lock-free list-based set algorithm that is compatible with all lock-free memory management methods. We use it as a building block of an algorithm for lock-free hash tables. In addition to being lock-free, the new algorithm is dynamic, linearizable, and space-efficient.Our experimental results show that the new algorithm outperforms the best known lock-free as well as lock-based hash table implementations by significant margins, and indicate that it is the algorithm of choice for implementing shared hash tables. <s> BIB003 </s> Verifying Linearisability: A Comparative Survey <s> Difficulties in Verifying Linearisability <s> We describe a semi-automated verification of a slightly optimised version of Michael and Scott’s lock-free FIFO queue implementation. 
We verify the algorithm with a simulation proof consisting of two stages: a forward simulation from an automaton modelling the algorithm to an intermediate automaton, and a backward simulation from the intermediate automaton to an automaton that models the behaviour of a FIFO queue. These automata are encoded in the input language of the PVS proof system, and the properties needed to show that the algorithm implements the specification are proved using PVS’s theorem prover. <s> BIB004 </s> Verifying Linearisability: A Comparative Survey <s> Difficulties in Verifying Linearisability <s> List-based implementations of sets are a fundamental building block of many concurrent algorithms. A skiplist based on the lock-free list-based set algorithm of Michael will be included in the JavaTM Concurrency Package of JDK 1.6.0. However, Michael's lock-free algorithm has several drawbacks, most notably that it requires all list traversal operations, including membership tests, to perform cleanup operations of logically removed nodes, and that it uses the equivalent of an atomically markable reference, a pointer that can be atomically “marked,” which is expensive in some languages and unavailable in others. ::: ::: We present a novel “lazy” list-based implementation of a concurrent set object. It is based on an optimistic locking scheme for inserts and removes, eliminating the need to use the equivalent of an atomically markable reference. It also has a novel wait-free membership test operation (as opposed to Michael's lock-free one) that does not need to perform cleanup operations and is more efficient than that of all previous algorithms. ::: ::: Empirical testing shows that the new lazy-list algorithm consistently outperforms all known algorithms, including Michael's lock-free algorithm, throughout the concurrency range. At high load, with 90% membership tests, the lazy algorithm is more than twice as fast as Michael's. 
This is encouraging given that typical search structure usage patterns include around 90% membership tests. By replacing the lock-free membership test of Michael's algorithm with our new wait-free one, we achieve an algorithm that slightly outperforms our new lazy-list (though it may not be as efficient in other contexts as it uses Java's RTTI mechanism to create pointers that can be atomically marked). <s> BIB005 </s> Verifying Linearisability: A Comparative Survey <s> Difficulties in Verifying Linearisability <s> This paper presents a practical automatic verification procedure for proving linearizability (i.e., atomicity and functional correctness) of concurrent data structure implementations The procedure employs a novel instrumentation to verify logically pure executions, and is evaluated on a number of standard concurrent stack, queue and set algorithms. <s> BIB006 </s> Verifying Linearisability: A Comparative Survey <s> Difficulties in Verifying Linearisability <s> Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on identifying the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. ::: ::: In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points. We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof. <s> BIB007
One may classify different types of algorithms based on their linearisation points (see Table I). The type of linearisation point may be distinguished as being fixed (i.e., the linearisation point may be predetermined), external (i.e., the execution of a different operation potentially determines the linearisation point), and future-dependent (i.e., the linearisation point is determined by the future executions of the operation and, in addition, these linearisations modify an object's abstract representation). Different operations of the same object may have different types of linearisation points. In fact, even within an operation, there are different types of linearisation points depending on the value returned. For example, the dequeue operation of the Michael-Scott queue BIB002 has both external (empty case) and fixed (nonempty case) linearisation points. An example of an algorithm with fixed (or static) linearisation points is the Treiber stack. Note that these linearisation points can be conditional on the global state. For example, in the pop operation of the Treiber stack, the statement labelled P2 is a linearisation point for the empty case if Head = null holds when P2 is executed; at this point, if Head = null holds, one can be guaranteed that the pop operation will return empty and, in addition, that the corresponding abstract stack is empty. Proving correctness of such algorithms is relatively straightforward because reasoning may be performed in a forward manner. In particular, for each atomic statement of the operation, one can predetermine whether or not the statement is a linearisation point and generate proof obligations accordingly. In some cases, reasoning can even be automated BIB006. An operation that has external linearisation points is the contains operation of the lazy set by BIB005 .
The contains operation executing in isolation must set its own linearisation points, but interference from other processes may cause it to be linearised externally. Further details of this operation are given in Section 5.1. An example of the third class of algorithm is the queue by BIB001 , where each concrete state corresponds to a set of abstract queue representations determined by the shared array and the states of all operations operating on the array. Reasoning here must be able to state properties of the form: "If, in the future, the algorithm has some behaviour, then the current statement of the algorithm is a linearisation point." Further complications arise when states of the concrete system potentially correspond to several possible states of the abstract data type. Hence, for each step of the concrete system, one must check that each potential abstract state is modified appropriately. Table II presents a summary of methods for verifying linearisability, together with the algorithms that have been verified with each method and references to the papers in which the verifications are explained. Table III then presents further details of each method. The first column details whether algorithms with fixed and external linearisation points have been proved, and the second details whether algorithms with future linearisation points have been proved. The third column details the associated tool (if one exists), the fourth details whether the method uses a compositional approach, and the fifth details whether each method is known to be complete. The final column details whether the methods have been linked formally to Herlihy and Wing's definitions of linearisability. BIB007
Table notes: (1) This is the only method known to have found two bugs in existing algorithms . (2) Including a variation by BIB004 . (3) The use of atomicity brackets prohibits behaviours that are permitted by the fine-grained algorithm. (4) Set algorithm by BIB003 , which is based on the algorithm by .
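To make the classification concrete, the Treiber stack referred to throughout can be sketched as follows. This is an illustrative Python rendering, not the original code: CAS is emulated with a lock (Python has no hardware compare-and-swap on references), and the labels H6, P2, and P7 follow the statement labels used in the text. The comments mark which statements act as linearisation points.

```python
import threading

class Node:
    def __init__(self, val, nxt):
        self.val, self.nxt = val, nxt

class TreiberStack:
    def __init__(self):
        self.head = None
        self._cas_lock = threading.Lock()  # emulates an atomic CAS on head

    def _cas(self, old, new):
        # Atomically: if head == old then head := new.
        with self._cas_lock:
            if self.head is old:
                self.head = new
                return True
            return False

    def push(self, v):
        while True:
            h = self.head
            n = Node(v, h)
            if self._cas(h, n):       # H6: fixed linearisation point of push
                return

    def pop(self):
        while True:
            h = self.head             # P2: linearisation point of the empty
            if h is None:             #     case (only if head was null here)
                return "empty"
            if self._cas(h, h.nxt):   # P7: linearisation point of nonempty pop
                return h.val
```

Both linearisation points of pop are conditional on the global state at the moment the labelled statement executes, which is exactly what makes forward reasoning possible for this class of algorithm.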
Verifying Linearisability: A Comparative Survey <s> Simulation-Based Verification <s> Abstract Refinement mappings are used to prove that a lower-level specification correctly implements a higher-level one. We consider specifications consisting of a state machine (which may be infinite- state) that specifies safety requirements, and an arbitrary supplementary property that specifies liveness requirements. A refinement mapping from a lower-level specification S 1 to a higher-level one S 2 is a mapping from S 1 's state space to S 2 's state space. It maps steps of S 1 's state machine to steps of S 2 's state machine and maps behaviors allowed by S 1 to behaviors allowed by S 2 . We show that, under reasonable assumptions about the specification, if S 1 implements S 2 , then by adding auxiliary variables to S 1 we can guarantee the existence of a refinement mapping. This provides a completeness result for a practical, hierarchical specification method. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> Simulation-Based Verification <s> A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable. 
<s> BIB002 </s> Verifying Linearisability: A Comparative Survey <s> Simulation-Based Verification <s> We describe a semi-automated verification of a slightly optimised version of Michael and Scott’s lock-free FIFO queue implementation. We verify the algorithm with a simulation proof consisting of two stages: a forward simulation from an automaton modelling the algorithm to an intermediate automaton, and a backward simulation from the intermediate automaton to an automaton that models the behaviour of a FIFO queue. These automata are encoded in the input language of the PVS proof system, and the properties needed to show that the algorithm implements the specification are proved using PVS’s theorem prover. <s> BIB003 </s> Verifying Linearisability: A Comparative Survey <s> Simulation-Based Verification <s> Concurrent objects are inherently complex to verify. In the late 80s and early 90s, Herlihy and Wing proposed linearizability as a correctness condition for concurrent objects, which, once proven, allows us to reason about concurrent objects using pre- and postconditions only. A concurrent object is linearizable if all of its operations appear to take effect instantaneously some time between their invocation and return. In this article we define simulation-based proof conditions for linearizability and apply them to two concurrent implementations, a lock-free stack and a set with lock-coupling. Similar to other approaches, we employ a theorem prover (here, KIV) to mechanize our proofs. Contrary to other approaches, we also use the prover to mechanically check that our proof obligations actually guarantee linearizability. This check employs the original ideas of Herlihy and Wing of verifying linearizability via possibilities. <s> BIB004 </s> Verifying Linearisability: A Comparative Survey <s> Simulation-Based Verification <s> Linearisability is the standard correctness criterion for concurrent data structures. 
In this paper, we present a sound and complete proof technique for linearisability based on backward simulations. We exemplify this technique by a linearisability proof of the queue algorithm presented in Herlihy and Wing's landmark paper. Except for the manual proof by them, none of the many other current approaches to checking linearisability has successfully treated this intricate example. Our approach is grounded on complete mechanisation: the proof obligations for the queue are verified using the interactive prover KIV, and so is the general soundness and completeness result for our proof technique. <s> BIB005
The first formal proofs of linearisability BIB003 used simulation in the framework of Input/Output Automata. Verification proceeds with respect to canonical constructions, where each operation call consists of an invocation, a single atomic transition that performs the abstract operation, and a return transition. The operations of a canonical object may be interleaved, meaning its histories are concurrent, but the main transition is performed in a single atomic step. Lynch [1996] has shown that the history of every canonical construction is linearisable, and hence, any implementation that refines it can be guaranteed to be linearisable.
Table notes: (1) Forward and backward simulation is complete for showing refinement of input/output automata . (2) Backward simulation for history-enhanced data types has been shown to be complete for linearisability BIB005 . (3) Completeness could potentially be proved by linking these methods to the results of BIB001 ; however, this link has thus far not been made. (4) Using results of . (5) Using results in Feng [2013a, 2013b] . (6) Using an alternative characterisation of linearisability based on possibilities BIB002 . (7) Applies to purely blocking implementations only.
To demonstrate this technique, consider the concrete trace from Figure 5 , recalling that the successful CAS statements at H6 and P7 are linearisation points for the push and nonempty pop operations, respectively. One obtains the mapping between the concrete and canonical traces shown in Figure 6 . Namely, each invocation (response) transition of the concrete maps to an invocation (response) of the abstract, while a linearising transition maps to a main transition, marked in Figure 6 by a cross. The other concrete transitions are stuttering steps (see Figure 5 ), and hence have no effect on the corresponding canonical state. Although Groves et al.
present a sound method for proving linearisability, a fundamental question about the link between concurrent and sequential programs remains. This question is addressed by BIB004 , who present a simulation-based method linking the concurrent object in question with its sequential (as opposed to canonical) abstraction. This is achieved by including an auxiliary history variable in the states of both the concrete and abstract objects, so that linearisability is established as part of the refinement. In addition, a number of process-local proof obligations that dispense with histories are generated, whose satisfaction implies linearisability. Instead of proving refinement in a layered manner (as done by Groves et al.), Derrick et al.'s proofs aim to capture the relationships between the abstract and concrete systems within the refinement relation itself. For a concrete example, once again consider the stack trace from Figure 5 . Using the methods of BIB004 , one would obtain the refinement shown in Figure 7 , where the concrete transitions that update the history are indicated with a bold arrow. Assume hc and ha are the concrete and abstract history variables, both of which are sequences of events. Each concrete invoke or return transition appends the corresponding event to the end of hc, for example, transition push . Therefore, the concrete history hc may be concurrent, whereas the abstract history ha is sequential. This enables the proof of linearisability to be built into the refinement relation, as opposed to relying on a canonical abstraction that generates linearisable histories.
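The history-variable augmentation described above can be illustrated with a small, hand-coded replay of a stack trace. This is only a sketch of the idea, not Derrick et al.'s formalism: hc collects possibly interleaved invoke/return events, ha is extended only at linearisation points (and is therefore sequential by construction), and two final checks capture the essence of what the refinement relation must guarantee.

```python
# hc: the concurrent history (invoke/return events, possibly interleaved).
# ha: the sequential abstract history, extended only at linearisation points.
hc, ha = [], []

def inv(p, op, arg=None): hc.append(("inv", p, op, arg))
def ret(p, op, res=None): hc.append(("ret", p, op, res))
def lin(p, op, arg, res):
    hc.append(("lin", p, op))   # record where the linearisation occurred
    ha.append((op, arg, res))   # ha stays sequential by construction

# A hand-coded interleaving of push(3) by process p and pop() by process q
# (the successful CAS statements H6/P7 are the linearisation points):
inv("p", "push", 3); inv("q", "pop")
lin("p", "push", 3, None); ret("p", "push")
lin("q", "pop", None, 3);  ret("q", "pop", 3)

# Check 1: ha replays correctly on the sequential stack.
stack = []
for op, arg, res in ha:
    if op == "push": stack.append(arg)
    else: assert stack and stack.pop() == res

# Check 2: each operation linearises between its invoke and its return.
for p in ("p", "q"):
    events = [e[0] for e in hc if e[1] == p]
    assert events == ["inv", "lin", "ret"]
```

In the actual proof method these checks are discharged once and for all by process-local proof obligations rather than by replaying traces.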
Verifying Linearisability: A Comparative Survey <s> Augmented States <s> A new approach to data flow analysis of procedural programs and programs with recursive data structures is described. The method depends on simulation of the interpreter for the subject programming language using a retrieval function to approximate a program's data structures. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> Augmented States <s> Development methods for (sequential) programs that run in isolation have been studied elsewhere. Programs that run in parallel can interfere with each other, either via shared storage or by sending messages. Extensions to earlier development methods are proposed for the rigorous development of interfering programs. In particular, extensions to the specification method based on postconditions that are predicates of two states and the development methods of operation decomposition and data refinement are proposed. 41 references. <s> BIB002 </s> Verifying Linearisability: A Comparative Survey <s> Augmented States <s> Abstract Refinement mappings are used to prove that a lower-level specification correctly implements a higher-level one. We consider specifications consisting of a state machine (which may be infinite- state) that specifies safety requirements, and an arbitrary supplementary property that specifies liveness requirements. A refinement mapping from a lower-level specification S 1 to a higher-level one S 2 is a mapping from S 1 's state space to S 2 's state space. It maps steps of S 1 's state machine to steps of S 2 's state machine and maps behaviors allowed by S 1 to behaviors allowed by S 2 . We show that, under reasonable assumptions about the specification, if S 1 implements S 2 , then by adding auxiliary variables to S 1 we can guarantee the existence of a refinement mapping. This provides a completeness result for a practical, hierarchical specification method. 
<s> BIB003 </s> Verifying Linearisability: A Comparative Survey <s> Augmented States <s> We describe an extension of Hoare's logic for reasoning about programs that alter data structures. We consider a low-level storage model based on a heap with associated lookup, update, allocation and deallocation operations, and unrestricted address arithmetic. The assertion language is based on a possible worlds model of the logic of bunched implications, and includes spatial conjunction and implication connectives alongside those of classical logic. Heap operations are axiomatized using what we call the "small axioms", each of which mentions only those cells accessed by a particular command. Through these and a number of examples we show that the formalism supports local reasoning: A specification and proof can concentrate on only those cells in memory that a program accesses. ::: ::: This paper builds on earlier work by Burstall, Reynolds, Ishtiaq and O'Hearn on reasoning about data structures. <s> BIB004 </s> Verifying Linearisability: A Comparative Survey <s> Augmented States <s> We study a family of implementations for linked lists using fine-grain synchronisation. This approach enables greater concurrency, but correctness is a greater challenge than for classical, coarse-grain synchronisation. Our examples are demonstrative of common design patterns such as lock coupling, optimistic, and lazy synchronisation. Although they are are highly concurrent, we prove that they are linearisable, safe, and they correctly implement a high-level abstraction. Our proofs illustrate the power and applicability of rely-guarantee reasoning, as well of some of its limitations. The examples of the paper establish a benchmark challenge for other reasoning techniques. <s> BIB005 </s> Verifying Linearisability: A Comparative Survey <s> Augmented States <s> Linearizability is one of the main correctness criteria for implementations of concurrent data structures. 
A data structure is linearizable if its operations appear to execute atomically. Verifying linearizability of concurrent unbounded linked data structures is a challenging problem because it requires correlating executions that manipulate (unbounded-size) memory states. We present a static analysis for verifying linearizability of concurrent unbounded linked data structures. The novel aspect of our approach is the ability to prove that two (unboundedsize) memory layouts of two programs are isomorphic in the presence of abstraction. A prototype implementation of the analysis verified the linearizability of several published concurrent data structures implemented by singly-linked lists. <s> BIB006 </s> Verifying Linearisability: A Comparative Survey <s> Augmented States <s> This paper presents a novel abstraction for heap-allocated data structures that keeps track of both their shape and their contents. By combining this abstraction with thread-local analysis and rely-guarantee reasoning, we can verify a collection of fine-grained blocking and non-blocking concurrent algorithms for an arbitrary (unbounded) number of threads. We prove that these algorithms are linearizable, namely equivalent (modulo termination) to their sequential counterparts. <s> BIB007 </s> Verifying Linearisability: A Comparative Survey <s> Augmented States <s> This paper presents a practical automatic verification procedure for proving linearizability (i.e., atomicity and functional correctness) of concurrent data structure implementations The procedure employs a novel instrumentation to verify logically pure executions, and is evaluated on a number of standard concurrent stack, queue and set algorithms. 
<s> BIB008 </s> Verifying Linearisability: A Comparative Survey <s> Augmented States <s> Locating linearization points (LPs) is an intuitive approach for proving linearizability, but it is difficult to apply the idea in Hoare-style logic for formal program verification, especially for verifying algorithms whose LPs cannot be statically located in the code. In this paper, we propose a program logic with a lightweight instrumentation mechanism which can verify algorithms with non-fixed LPs, including the most challenging ones that use the helping mechanism to achieve lock-freedom (as in HSY elimination-based stack), or have LPs depending on unpredictable future executions (as in the lazy set algorithm), or involve both features. We also develop a thread-local simulation as the meta-theory of our logic, and show it implies contextual refinement, which is equivalent to linearizability. Using our logic we have successfully verified various classic algorithms, some of which are used in the java.util.concurrent package. <s> BIB009
Instead of defining concrete and abstract objects as separate systems and using a representation relation to link their behaviours (as done in Section 3.3), one may embed the abstract system directly within the concrete system as an auxiliary extension and prove linearisability by reasoning about this combined system. For example, in a proof of the Treiber stack, one would introduce the abstract sequence S as an auxiliary variable to the program in Section 1. At each linearisation point of the Treiber stack, a corresponding operation is performed on S; for example, the successful CAS transition at H6 is augmented so that v is pushed onto S. This has the advantage of flattening the state space into a single layer, meaning proofs of linearisability follow from invariants on the combined state. Vafeiadis further simplifies proofs by using a framework that combines separation logic BIB004 (to reason about pointers) and rely-guarantee BIB002 (to support compositionality). It is worth noting, however, that the underlying theory of this method relies on refinement BIB009 . Namely, the augmentation of each concrete state must be an appropriate abstraction of the concrete object in question. To visualise this approach, again consider the example trace from Figure 4 , where embedding the abstract state as an auxiliary variable produces the augmented trace in Figure 8 . For algorithms with fixed linearisation points (which can be verified using forward simulation), reasoning about invariants over the flattened state space is simpler than simulation proofs. (This is also observed in the forward simulation proof of , where auxiliary variables that encode the abstract state are introduced at the concrete level.) However, invariant-based proofs only allow reasoning about a single state at a time, and hence are less flexible than refinement relations, which relate a concrete state to potentially many abstract states.
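A minimal sketch of this augmentation, reusing the earlier illustrative Treiber stack with a lock-emulated CAS: the auxiliary sequence S is updated atomically with each linearisation point, and the invariant that the concrete list represents S is asserted in every visible state. The labels H6, P2, and P7 again follow the text; everything else is an assumption of the sketch.

```python
import threading

class Node:
    def __init__(self, val, nxt):
        self.val, self.nxt = val, nxt

class AugmentedTreiberStack:
    """Treiber stack carrying an auxiliary abstract sequence S, updated
    exactly at the linearisation points of each operation."""
    def __init__(self):
        self.head, self.S = None, []
        self._cas_lock = threading.Lock()   # emulates an atomic CAS

    def _invariant(self):
        # The concrete list (top first) must equal S reversed.
        xs, n = [], self.head
        while n:
            xs.append(n.val)
            n = n.nxt
        return xs == list(reversed(self.S))

    def push(self, v):
        while True:
            h = self.head
            n = Node(v, h)
            with self._cas_lock:
                if self.head is h:              # H6: the successful CAS ...
                    self.head = n
                    self.S.append(v)            # ... also pushes v onto S
                    assert self._invariant()
                    return

    def pop(self):
        while True:
            h = self.head
            if h is None:
                assert self.S == []             # P2: empty-case linearisation
                return "empty"
            with self._cas_lock:
                if self.head is h:              # P7: nonempty linearisation ...
                    self.head = h.nxt
                    assert self.S.pop() == h.val  # ... pops S as well
                    assert self._invariant()
                    return h.val
```

The flattened state (head together with S) makes the linearisability argument a plain invariant over one system, rather than a relation between two.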
Vafeiadis addresses these shortcomings by using more sophisticated auxiliary statements that are able to linearise both the currently executing operation as well as other executing processes. In addition, prophecy variables BIB003 are used to reason about operations whose linearisation points depend on future behaviour. Recently, BIB009 have consolidated these augmentation ideas by allowing the auxiliary statements linself (which performs the same function as the augmentations of Vafeiadis by linearising the currently executing process BIB005 ) and lin(p) (which performs the linearisation of a process p different from self that may be executing a different operation). Liang and Feng (unlike Vafeiadis) allow augmentations that use try and commit pairs, where the try is used to guess potential linearisation points, and the commit is used to pick from the linearisation points that have been guessed thus far. Augmented state spaces also form the basis for shape analysis BIB001 , which is a static analysis technique for verifying properties of objects with dynamically allocated memory. One of the first shape-analysis-based linearisability proofs is that of BIB006 , who consider implementations using singly linked lists and fixed linearisation points. The following paraphrases [Amit et al. 2007, p. 480] , clarifying their nomenclature with the terminology used in this article. The proof method uses a correlating semantics, which simultaneously manipulates two memory states: a so-called candidate state (i.e., the concrete state) and the reference state (i.e., the abstract state). The candidate state is manipulated according to an interleaved execution, and whenever a process reaches a linearisation point in a given procedure, the correlating semantics invokes the same procedure with the same arguments on the reference state. The interleaved execution is not allowed to proceed until the execution over the reference state terminates.
The reference response (i.e., return value) is saved and compared to the response of the corresponding candidate operation when it terminates. Thus, linearisability of an interleaved execution is verified by constructing a (serial) witness execution for every interleaved execution. These methods are extended by BIB007 , where a distinction is made between shape abstraction (describing the structure of a concurrent object) and value abstraction (describing the values contained within the object). The method is used to verify several algorithms, including the complex RDCSS algorithm with future linearisation points. Although the behaviours of concurrent objects are complex, the algorithms that implement them are often short, consisting of only a few lines of code. This makes it feasible to perform a brute-force search for their linearisation points. To this end, BIB008 presents a fully automated method that infers the required abstraction mappings based on the given program and abstract specification of the objects. The method is, thus far, only able to handle so-called logically pure operations. An example of a logically impure operation is the remove operation of the optimistic set (Section 4.1), which uses a special "marked bit" to denote nodes that have been logically removed from the set.
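The brute-force idea can be demonstrated in a few lines. The sketch below is not the instrumentation of BIB008; it simply enumerates all sequential witness orders of a small, complete concurrent history (respecting real-time precedence) and replays each against a sequential specification, which is feasible precisely because the histories of interest are short.

```python
from itertools import permutations

def linearisable(history, run_sequential):
    """Brute-force linearisability check for a complete history.
    history: list of (op, arg, res, inv_time, ret_time) records.
    run_sequential: replays a list of (op, arg) pairs, returning results."""
    for order in permutations(history):
        # Real-time order: if a returned before b was invoked, a must
        # precede b in any witness sequential order.
        if any(a[4] < b[3] and order.index(a) > order.index(b)
               for a in order for b in order):
            continue
        if run_sequential([(op, arg) for op, arg, *_ in order]) == \
           [res for _, _, res, _, _ in order]:
            return True   # found a serial witness execution
    return False

def seq_stack(ops):
    """Sequential stack specification."""
    st, out = [], []
    for op, arg in ops:
        if op == "push":
            st.append(arg); out.append(None)
        else:
            out.append(st.pop() if st else "empty")
    return out

# Overlapping push(1) and pop() may linearise either way:
h = [("push", 1, None, 0, 2), ("pop", None, 1, 1, 3)]
assert linearisable(h, seq_stack)
# A pop returning a value that was never pushed is not linearisable:
bad = [("push", 1, None, 0, 1), ("pop", None, 2, 2, 3)]
assert not linearisable(bad, seq_stack)
```

Real tools replace the permutation search with inferred abstraction mappings, but the underlying obligation, the existence of a serial witness, is the same.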
Verifying Linearisability: A Comparative Survey <s> Interval-Based Methods <s> In order to keep the tasks of specification, programming and verification in manageable orders of magnitude, a system for formM development should support the structuring of the development process. This process starts with a horizontally structured (top-level) specification. While it is generMly agreed that a formal specification has a significant value in itself, it is by no means a guarantee that the development process will end up with an implemented software system, let alone a correct one. A system for formal development must therefore also support the implementation process using a hierarchy of increasingly concrete intermediate specifications. Refinement steps may contain pieces of code of some suitable programmung language. The notion of correctness (of refinement steps) must be complemented by a program logic powerful enough to express the necessary proof obligations and by theorem proving support to actually prove these assertions. In many aspects the techniques of "classical" theorem proving are not suitable for the deduction tasks that accompany the development process. The approach that has proven successful in this area is Tactical Theorem Proving, where a proof calculus is embedded into a (usually functional) recta-language. Proof search is then implemented by programs in this recta-language. Usually, a sequent calculus or Natural Deduction is used in such systems. The availability of an entire programming language, rather than a mere set of axioms and rules, facilitates the sound extension of the basic logic, and in fact the construction of a complete derived calculus. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> Interval-Based Methods <s> Interval Temporal Logic (ITL) is a formalism for reasoning about time periods. 
To date no one has proved completeness of a relatively simple ITL deductive system supporting infinite time and permitting infinite sequential iteration comparable to ω -regular expressions. We give a complete axiomatization for such a version of quantified ITL over finite domains and can show completeness by representing finite-state automata in ITL and then translating ITL formulas into them. The full paper (and another conference paper) presents the basic framework for finite time. Here and in the full paper the axiom system (and completeness) is extended to infinite time. <s> BIB002 </s> Verifying Linearisability: A Comparative Survey <s> Interval-Based Methods <s> List-based implementations of sets are a fundamental building block of many concurrent algorithms. A skiplist based on the lock-free list-based set algorithm of Michael will be included in the JavaTM Concurrency Package of JDK 1.6.0. However, Michael's lock-free algorithm has several drawbacks, most notably that it requires all list traversal operations, including membership tests, to perform cleanup operations of logically removed nodes, and that it uses the equivalent of an atomically markable reference, a pointer that can be atomically “marked,” which is expensive in some languages and unavailable in others. ::: ::: We present a novel “lazy” list-based implementation of a concurrent set object. It is based on an optimistic locking scheme for inserts and removes, eliminating the need to use the equivalent of an atomically markable reference. It also has a novel wait-free membership test operation (as opposed to Michael's lock-free one) that does not need to perform cleanup operations and is more efficient than that of all previous algorithms. ::: ::: Empirical testing shows that the new lazy-list algorithm consistently outperforms all known algorithms, including Michael's lock-free algorithm, throughout the concurrency range. 
At high load, with 90% membership tests, the lazy algorithm is more than twice as fast as Michael's. This is encouraging given that typical search structure usage patterns include around 90% membership tests. By replacing the lock-free membership test of Michael's algorithm with our new wait-free one, we achieve an algorithm that slightly outperforms our new lazy-list (though it may not be as efficient in other contexts as it uses Java's RTTI mechanism to create pointers that can be atomically marked). <s> BIB003 </s> Verifying Linearisability: A Comparative Survey <s> Interval-Based Methods <s> Linearizability is a global correctness criterion for concurrent systems. One technique to prove linearizability is applying a composition theorem which reduces the proof of a property of the overall system to sufficient rely-guarantee conditions for single processes. In this paper, we describe how the temporal logic framework implemented in the KIV interactive theorem prover can be used to model concurrent systems and to prove such a composition theorem. Finally, we show how this generic theorem can be instantiated to prove linearizability of two classic lock-free implementations: a Treiber-like stack and a slightly improved version of Michael and Scott’s queue. <s> BIB004 </s> Verifying Linearisability: A Comparative Survey <s> Interval-Based Methods <s> Linearisability has become the standard correctness criterion for concurrent data structures, ensuring that every history of invocations and responses of concurrent operations has a matching sequential history. Existing proofs of linearisability require one to identify so-called linearisation points within the operations under consideration, which are atomic statements whose execution causes the effect of an operation to be felt. However, identification of linearisation points is a non-trivial task, requiring a high degree of expertise. 
For sophisticated algorithms such as Heller et al's lazy set, it even is possible for an operation to be linearised by the concurrent execution of a statement outside the operation being verified. This paper proposes an alternative method for verifying linearisability that does not require identification of linearisation points. Instead, using an interval-based logic, we show that every behaviour of each concrete operation over any interval is a possible behaviour of a corresponding abstraction that executes with coarse-grained atomicity. This approach is applied to Heller et al's lazy set to show that verification of linearisability is possible without having to consider linearisation points within the program code. <s> BIB005
Interval-based methods aim to treat programs as executing over an interval of time, rather than as relations between pre- and post-states. Schellhorn et al. combine rely-guarantee reasoning with interval temporal logic BIB002 , which enables one to reason over the interval of time in which a program executes, as opposed to single state transitions. The proofs are carried out using the KIV theorem prover BIB001 , which is combined with symbolic execution to enable guarantee conditions to be checked. This involves inductively stepping through the program statements within KIV itself, simplifying verification. These methods have been applied to verify the Treiber stack and the Michael-Scott queue BIB004 . BIB005 verify behaviour refinement between a coarse-grained abstraction and a fine-grained implementation. Unlike all other methods, these proofs do not rely on identification of linearisation points in the concrete code. The method has been applied to the lazy set algorithm BIB003 , including the contains operation with external linearisation points.
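The interval-based intuition, that every behaviour of a concrete operation over its execution interval must be a possible behaviour of a coarse-grained abstraction executing atomically, can be caricatured as follows. This illustrative check (not the logic of BIB005 or its KIV encoding) records the abstract stack values observed during an operation's interval and asks whether a single atomic execution at some point in that interval explains the observed result.

```python
def matches_coarse_grained(interval, op, arg, res):
    """Illustrative interval check: the fine-grained operation's observed
    result must be producible by one atomic execution of the coarse-grained
    abstraction at some state in its interval.
    interval: successive abstract stack values seen during the operation."""
    for s in interval:
        if op == "pop":
            # An atomic pop at state s returns the top, or "empty".
            if (s and s[-1] == res) or (not s and res == "empty"):
                return True
        elif op == "push" and res is None:
            return True   # an atomic push of arg is enabled in every state
    return False

# A pop that ran concurrently with a push(7): the stack was empty when the
# pop started, but contained 7 by the time its CAS succeeded.
assert matches_coarse_grained([[], [7]], "pop", None, 7)
# Returning a value never on the stack during the interval is refused.
assert not matches_coarse_grained([[], [7]], "pop", None, 9)
```

Because the check quantifies over the whole interval, no particular statement in the fine-grained code needs to be singled out as the linearisation point.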
Verifying Linearisability: A Comparative Survey <s> Problem-Specific Techniques <s> The authors extend linearizability, a consistency criterion for concurrent systems, to the replicated context, where availability and performance are enhanced by using redundant objects. The mode of operation on sets of replicas and the consistency criterion of R-linearizability are defined. An implementation of R-linearizable replicated atoms (on which only read and write operations are defined) is described. It is realized in the virtually synchronous model, based on a group view mechanism. This framework provides reliable multicast primitives, enabling a fault-tolerant implementation. > <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> Problem-Specific Techniques <s> List-based implementations of sets are a fundamental building block of many concurrent algorithms. A skiplist based on the lock-free list-based set algorithm of Michael will be included in the JavaTM Concurrency Package of JDK 1.6.0. However, Michael's lock-free algorithm has several drawbacks, most notably that it requires all list traversal operations, including membership tests, to perform cleanup operations of logically removed nodes, and that it uses the equivalent of an atomically markable reference, a pointer that can be atomically “marked,” which is expensive in some languages and unavailable in others. ::: ::: We present a novel “lazy” list-based implementation of a concurrent set object. It is based on an optimistic locking scheme for inserts and removes, eliminating the need to use the equivalent of an atomically markable reference. It also has a novel wait-free membership test operation (as opposed to Michael's lock-free one) that does not need to perform cleanup operations and is more efficient than that of all previous algorithms. 
::: ::: Empirical testing shows that the new lazy-list algorithm consistently outperforms all known algorithms, including Michael's lock-free algorithm, throughout the concurrency range. At high load, with 90% membership tests, the lazy algorithm is more than twice as fast as Michael's. This is encouraging given that typical search structure usage patterns include around 90% membership tests. By replacing the lock-free membership test of Michael's algorithm with our new wait-free one, we achieve an algorithm that slightly outperforms our new lazy-list (though it may not be as efficient in other contexts as it uses Java's RTTI mechanism to create pointers that can be atomically marked). <s> BIB002 </s> Verifying Linearisability: A Comparative Survey <s> Problem-Specific Techniques <s> We present a proof of safety and linearizability of a highly-concurrent optimistic set algorithm. The key step in our proof is the Hindsight Lemma, which allows a thread to infer the existence of a global state in which its operation can be linearized based on limited local atomic observations about the shared state. The Hindsight Lemma allows us to avoid one of the most complex and non-intuitive steps in reasoning about highly concurrent algorithms: considering the linearization point of an operation to be in a different thread than the one executing it. The Hindsight Lemma assumes that the algorithm maintains certain simple invariants which are resilient to interference, and which can themselves be verified using purely thread-local proofs. As a consequence, the lemma allows us to unlock a perhaps-surprising intuition: a high degree of interference makes non-trivial highly-concurrent algorithms in some cases much easier to verify than less concurrent ones. 
<s> BIB003 </s> Verifying Linearisability: A Comparative Survey <s> Problem-Specific Techniques <s> Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on identifying the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. ::: ::: In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points. We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof. <s> BIB004 </s> Verifying Linearisability: A Comparative Survey <s> Problem-Specific Techniques <s> An execution containing operations performing queries or updating a concurrent object is linearizable w.r.t an abstract implementation (called specification) iff for each operation, one can associate a point in time, called linearization point, such that the execution of the operations in the order of their linearization points can be reproduced by the specification. Finding linearization points is particularly difficult when they do not belong to the operations's actions. This paper addresses this challenge by introducing a new technique for rewriting the implementation of the concurrent object and its specification such that the new implementation preserves all executions of the original one, and its linearizability (w.r.t. the new specification) implies the linearizability of the original implementation (w.r.t. the original specification). 
The rewriting introduces additional combined methods to obtain a library with a simpler linearizability proof, i.e., a library whose operations contain their linearization points. We have implemented this technique in a prototype, which has been successfully applied to examples beyond the reach of current techniques, e.g., Stack Elimination and Fetch&Add. <s> BIB005
Researchers have also developed problem-specific methods, sacrificing generality in favour of simpler linearisability proofs for a specific subset of concurrent objects. One such method for nonblocking algorithms is the Hindsight Lemma BIB003, which applies to linked-list implementations of concurrent sets (e.g., the lazy set) and characterises conditions under which a node is guaranteed to have been in or out of a set. The original paper BIB003 only considers a simple optimistic set; the extended technical report BIB003 presents a proof of the Heller et al. lazy set. Unfortunately, the locks within the add and remove operations are modelled using atomicity brackets, which has the unwanted side effect of disallowing concurrent reads of the locked nodes. Thus, the algorithm verified by O'Hearn et al. [2010b] differs operationally from the Heller et al. lazy set BIB002. Overall, the ideas behind problem-specific simplifications such as the Hindsight Lemma are interesting, but the logic used and the objects considered are highly specialised. Some objects, such as queues and stacks, can be uniquely identified by their aspects, which are properties that uniquely characterise the object in question. This is exploited by BIB004, who present an aspect-oriented proof of the Herlihy-Wing queue. Further details of this particular method are provided in Section 6.2. Automation has been achieved for algorithms with helping mechanisms and external linearisation points, such as the elimination stack BIB005. These techniques require the algorithms to satisfy so-called R-linearisability BIB001, a stronger condition than linearisability; hence, verification of algorithms whose linearisation points depend on future behaviour is excluded.
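The mechanisms the Hindsight Lemma reasons about (logical deletion marks, optimistic validation under per-node locks, and a wait-free membership traversal that takes no locks) are easiest to see in the Heller et al. lazy set BIB002 itself. The following is a minimal Python sketch of that algorithm; the original is given in Java, and the class name `LazySet` is illustrative:

```python
import threading

class Node:
    def __init__(self, key):
        self.key = key
        self.next = None
        self.marked = False           # logical deletion flag
        self.lock = threading.Lock()

class LazySet:
    """Sketch of Heller et al.'s lazy list-based set (sorted, with sentinels)."""
    def __init__(self):
        self.head = Node(float('-inf'))
        self.head.next = Node(float('inf'))

    def _validate(self, pred, curr):
        # Optimistic validation: neither node logically deleted,
        # and pred still points to curr.
        return (not pred.marked and not curr.marked
                and pred.next is curr)

    def add(self, key):
        while True:
            pred, curr = self.head, self.head.next
            while curr.key < key:
                pred, curr = curr, curr.next
            with pred.lock, curr.lock:
                if self._validate(pred, curr):
                    if curr.key == key:
                        return False          # already present
                    node = Node(key)
                    node.next = curr
                    pred.next = node          # linearisation point of a successful add
                    return True
            # validation failed due to interference: retry from the head

    def remove(self, key):
        while True:
            pred, curr = self.head, self.head.next
            while curr.key < key:
                pred, curr = curr, curr.next
            with pred.lock, curr.lock:
                if self._validate(pred, curr):
                    if curr.key != key:
                        return False
                    curr.marked = True        # logical removal (linearisation point)
                    pred.next = curr.next     # physical unlink
                    return True

    def contains(self, key):
        # Wait-free traversal: no locks, no retries, no cleanup.
        curr = self.head
        while curr.key < key:
            curr = curr.next
        return curr.key == key and not curr.marked
```

Note that `contains` never synchronises, which is why its linearisation point can be fixed by another thread's concurrent `remove`: exactly the situation the Hindsight Lemma is designed to reason about.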
Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> When proving that a parallel program has a given property it is often convenient to assume that a statement is indivisible, i.e. that the statement cannot be interleaved with the rest of the program. Here sufficient conditions are obtained to show that the assumption that a statement is indivisible can be relaxed and still preserve properties such as halting. Thus correctness proofs of a parallel system can often be greatly simplified. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> The Specification Statement.- 1 Introduction.- 2 Specification statements.- 3 The implementation ordering.- 4 Suitability of the definitions.- 5 Using specification statements.- 6 Miracles.- 7 Guarded commands are miracles.- 8 Positive applications of miracles.- 9 Conclusion.- 10 Acknowledgements.- Specification Statements and Refinement.- 1 Introduction.- 2 The refinement theorems.- 3 The refinement calculus.- 4 An example: square root.- 5 Derivation of laws.- 6 Conclusion.- 7 Acknowledgements.- Procedures, Parameters, and Abstraction: Separate Concerns.- 1 Introduction.- 2 Procedure call.- 3 Procedural abstraction.- 4 Parameters.- 5 Conclusion.- 6 Acknowledgements.- Data Refinement by Miracles.- 1 Introduction.- 2 An abstract program.- 3 A difficult data refinement.- 4 Miraculous programs.- 5 Eliminating miracles.- 6 Conclusion.- 7 Acknowledgements.- Auxiliary Variables in Data Refinement.- 1 Introduction.- 2 The direct technique.- 3 The auxiliary variable technique.- 4 The correspondence.- 5 Conclusion.- 6 Acknowledgements.- Data Refinement of Predicate Transformers.- 1 Introduction.- 2 Predicate transformers.- 3 Algorithmic refinement of predicate transformers.- 4 Data refinement of predicate transformers.- 5 The programming language.- 6 Distribution of data refinement.- 7 Data refinement of specifications.- 8 Data refinement in practice.- 9 
Conclusions.- 10 Acknowledgements.- Data Refinement by Calculation.- 1 Introduction.- 2 Refinement.- 3 Language extensions.- 4 Data refinement calculators.- 5 Example of refinement: the "mean" module.- 6 Specialized techniques.- 7 Conclusions.- 8 Acknowledgements.- 9 Appendix: refinement laws.- A Single Complete Rule for Data Refinement.- 1 Introduction.- 2 Data refinement.- 3 Predicate transformers.- 4 Completeness.- 5 Soundness.- 6 Partial programs.- 7 An example.- 8 Conclusion.- 9 Acknowledgements.- Types and Invariants in the Refinement Calculus.- 1 Introduction.- 2 Invariant semantics.- 3 The refinement calculus.- 4 A development method.- 5 Laws for local invariants.- 6 Eliminating local invariants.- 7 Type-checking.- 8 Recursion.- 9 Examples.- 10 A discussion of motives.- 11 Related work.- 12 Conclusions.- Acknowledgements.- A Additional refinement laws.- References.- Authors' Addresses. <s> BIB002 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> Drawing ideas from previous authors, we present a new non-blocking concurrent queue algorithm and a new two-lock queue algorithm in which one enqueue and one dequeue can proceed concurrently. Both algorithms are simple, fast, and practical; we were surprised not to find them in the literature. Experiments on a 12-node SGI Challenge multiprocessor indicate that the new non-blocking queue consistently outperforms the best known alternatives; it is the clear algorithm of choice for machines that provide a universal atomic primitive (e.g., compare_and_swap or load_linked/store_conditional). The two-lock concurrent queue outperforms a single lock when several processes are competing simultaneously for access; it appears to be the algorithm of choice for busy queues on machines with non-universal atomic primitives (e.g., test_and_set). 
Since much of the motivation for non-blocking algorithms is rooted in their immunity to large, unpredictable delays in process execution, we report experimental results both for systems with dedicated processors and for systems with several processes multiprogrammed on each processor. <s> BIB003 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> We present an efficient lock-free algorithm for parallel accessible hash tables with open addressing, which promises more robust performance and reliability than conventional lock-based implementations. ``Lock-free'' means that it is guaranteed that always at least one process completes its operation within a bounded number of steps. For a single processor architecture our solution is as efficient as sequential hash tables. On a multiprocessor architecture this is also the case when all processors have comparable speeds. The algorithm allows processors that have widely different speeds or come to a halt. It can easily be implemented using C-like languages and requires on average only constant time for insertion, deletion or accessing of elements. The algorithm allows the hash tables to grow and shrink when needed. ::: Lock-free algorithms are hard to design correctly, even when apparently straightforward. Ensuring the correctness of the design at the earliest possible stage is a major challenge in any responsible system development. In view of the complexity of the algorithm, we turned to the interactive theorem prover PVS for mechanical support. We employ standard deductive verification techniques to prove around 200 invariance properties of our algorithm, and describe how this is achieved with the theorem prover PVS. <s> BIB004 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> List-based implementations of sets are a fundamental building block of many concurrent algorithms. 
A skiplist based on the lock-free list-based set algorithm of Michael will be included in the JavaTM Concurrency Package of JDK 1.6.0. However, Michael's lock-free algorithm has several drawbacks, most notably that it requires all list traversal operations, including membership tests, to perform cleanup operations of logically removed nodes, and that it uses the equivalent of an atomically markable reference, a pointer that can be atomically “marked,” which is expensive in some languages and unavailable in others. ::: ::: We present a novel “lazy” list-based implementation of a concurrent set object. It is based on an optimistic locking scheme for inserts and removes, eliminating the need to use the equivalent of an atomically markable reference. It also has a novel wait-free membership test operation (as opposed to Michael's lock-free one) that does not need to perform cleanup operations and is more efficient than that of all previous algorithms. ::: ::: Empirical testing shows that the new lazy-list algorithm consistently outperforms all known algorithms, including Michael's lock-free algorithm, throughout the concurrency range. At high load, with 90% membership tests, the lazy algorithm is more than twice as fast as Michael's. This is encouraging given that typical search structure usage patterns include around 90% membership tests. By replacing the lock-free membership test of Michael's algorithm with our new wait-free one, we achieve an algorithm that slightly outperforms our new lazy-list (though it may not be as efficient in other contexts as it uses Java's RTTI mechanism to create pointers that can be atomically marked). <s> BIB005 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> Lock-free algorithms have been developed to avoid various problems associated with using locks to control access to shared data structures. 
Instead of preventing interference between processes using mutual exclusion, lock-free algorithms must ensure correct behaviour in the presence of interference. While this avoids the problems with locks, the resulting algorithms are typically more intricate than lock-based algorithms, and allow more complex interactions between processes. The result is that even when the basic idea is easy to understand, the code implementing lock-free algorithms is typically very subtle, hard to understand, and hard to get right. In this paper, we consider the well-known lock-free queue implementation due to Michael and Scott, and show how a slightly simplified version of this algorithm can be derived from an abstract specification via a series of verifiable refinement steps. Reconstructing a design history in this way allows us to examine the kinds of design decisions that underlie the algorithm as describe by Michael and Scott, and to explore the consequences of some alternative design choices. Our derivation is based on a refinement calculus with concurrent composition, combined with a reduction approach, based on that proposed by Lipton, Lamport, Cohen, and others, which we have previously used to derive a scalable stack algorithm. The derivation of Michael and Scott's queue algorithm introduces some additional challenges because it uses a ''helper'' mechanism which means that part of an enqueue operation can be performed by any process, also in a simulation proof the treatment of dequeue on an empty queue requires the use of backward simulation. <s> BIB006 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> We show how a sophisticated, lock-free concurrent stack implementation can be derived from an abstract specification in a series of verifiable steps. The algorithm is based on the scalable stack algorithm of Hendler et al. 
(Proceedings of the sixteenth annual ACM symposium on parallel algorithms, 27–30 June 2004, Barcelona, Spain, pp 206–215), which allows push and pop operations to be paired off and eliminated without affecting the central stack, thus reducing contention on the stack, and allowing multiple pairs of push and pop operations to be performed in parallel. Our algorithm uses a simpler data structure than Hendler, Shavit and Yerushalmi’s, and avoids an ABA problem. We first derive a simple lock-free stack algorithm using a linked-list implementation, and discuss issues related to memory management and the ABA problem. We then add an abstract model of the elimination process, from which we derive our elimination algorithm. This allows the basic algorithmic ideas to be separated from implementation details, and provides a basis for explaining and comparing different variants of the algorithm. We show that the elimination stack algorithm is linearisable by showing that any execution of the implementation can be transformed into an equivalent execution of an abstract model of a linearisable stack. Each step in the derivation is either a data refinement which preserves the level of atomicity, an operational refinement which may alter the level of atomicity, or a refactoring step which alters the structure of the system resulting from the preceding derivation. We verify our refinements using an extension of Lipton’s reduction method, allowing concurrent and non-concurrent aspects to be considered separately. <s> BIB007 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> Practical and efficient algorithms for concurrent data structures are difficult to construct and modify. Algorithms in the literature are often optimized for a specific setting, making it hard to separate the algorithmic insights from implementation details. 
The goal of this work is to systematically construct algorithms for a concurrent data structure starting from its sequential implementation. Towards that goal, we follow a construction process that combines manual steps corresponding to high-level insights with automatic exploration of implementation details. To assist us in this process, we built a new tool called Paraglider. The tool quickly explores large spaces of algorithms and uses bounded model checking to check linearizability of algorithms. Starting from a sequential implementation and assisted by the tool, we present the steps that we used to derive various highly-concurrent algorithms. Among these algorithms is a new fine-grained set data structure that provides a wait-free contains operation, and uses only the compare-and-swap (CAS) primitive for synchronization. <s> BIB008 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> We present a proof calculus and method for the static verification of assertions and procedure specifications in shared-memory concurrent programs. The key idea in our approach is to use atomicity as a proof tool and to simplify the verification of assertions by rewriting programs to consist of larger atomic actions. We propose a novel, iterative proof style in which alternating use of abstraction and reduction is exploited to compute larger atomic code blocks in a sound manner. This makes possible the verification of assertions in the transformed program by simple sequential reasoning within atomic blocks, or significantly simplified application of existing concurrent program verification techniques such as the Owicki-Gries or rely-guarantee methods. Our method facilitates a clean separation of concerns where at each phase of the proof, the user worries only about only either the sequential properties or the concurrency control mechanisms in the program. We implemented our method in a tool called QED. 
We demonstrate the simplicity and effectiveness of our approach on a number of benchmarks including ones with intricate concurrency protocols. <s> BIB009 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> On shared memory multiprocessors, synchronization often turns out to be a performance bottleneck and the source of poor fault-tolerance. By avoiding locks, the significant benefit of lock (or wait)-freedom for real-time systems is that the potentials for deadlock and priority inversion are avoided. The lock-free algorithms often require the use of special atomic processor primitives such as CAS (Compare And Swap) or LL /SC (Load Linked/Store Conditional). However, many machine architectures support either CAS or LL /SC , but not both. In this paper, we present a lock-free implementation of the ideal semantics of LL /SC using only pointer-size CAS , and show how to use refinement mapping to prove the correctness of the algorithm. <s> BIB010 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> A practical text suitable for an introductory or advanced course in formal methods, this book presents a mathematical approach to modelling and designing systems using an extension of the B formal method: Event-B. Based on the idea of refinement, the author's systematic approach allows the user to construct models gradually and to facilitate a systematic reasoning method by means of proofs. Readers will learn how to build models of programs and, more generally, discrete systems, but this is all done with practice in mind. The numerous examples provided arise from various sources of computer system developments, including sequential programs, concurrent programs and electronic circuits. The book also contains a large number of exercises and projects ranging in difficulty. 
Each of the examples included in the book has been proved using the Rodin Platform tool set, which is available free for download at www.event-b.org. <s> BIB011 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> The literature describes two high performance concurrent stack algorithms based on combining funnels and elimination trees. Unfortunately, the funnels are linearizable but blocking, and the elimination trees are non-blocking but not linearizable. Neither is used in practice since they perform well only at exceptionally high loads. The literature also describes a simple lock-free linearizable stack algorithm that works at low loads but does not scale as the load increases. The question of designing a stack algorithm that is non-blocking, linearizable, and scales well throughout the concurrency range, has thus remained open. This paper presents such a concurrent stack algorithm. It is based on the following simple observation: that a single elimination array used as a backoff scheme for a simple lock-free stack is lock-free, linearizable, and scalable. As our empirical results show, the resulting elimination-backoff stack performs as well as the simple stack at low loads, and increasingly outperforms all other methods (lock-based and non-blocking) as concurrency increases. We believe its simplicity and scalability make it a viable practical alternative to existing constructions for implementing concurrent stacks. <s> BIB012 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> This paper presents a practical automatic verification procedure for proving linearizability (i.e., atomicity and functional correctness) of concurrent data structure implementations The procedure employs a novel instrumentation to verify logically pure executions, and is evaluated on a number of standard concurrent stack, queue and set algorithms. 
<s> BIB013 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> We present a proof of safety and linearizability of a highly-concurrent optimistic set algorithm. The key step in our proof is the Hindsight Lemma, which allows a thread to infer the existence of a global state in which its operation can be linearized based on limited local atomic observations about the shared state. The Hindsight Lemma allows us to avoid one of the most complex and non-intuitive steps in reasoning about highly concurrent algorithms: considering the linearization point of an operation to be in a different thread than the one executing it. The Hindsight Lemma assumes that the algorithm maintains certain simple invariants which are resilient to interference, and which can themselves be verified using purely thread-local proofs. As a consequence, the lemma allows us to unlock a perhaps-surprising intuition: a high degree of interference makes non-trivial highly-concurrent algorithms in some cases much easier to verify than less concurrent ones. <s> BIB014 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> We present a proof system and supporting tool, QED, for the static verification of concurrent software. Our key idea is to simplify the verification of a program by rewriting it with larger atomic actions. We demonstrated the simplicity and effectiveness of our approach on benchmarks with intricate synchronization. <s> BIB015 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> Linearisability is the key correctness criterion for concurrent implementations of data structures shared by multiple processes. In this paper we present a proof of linearisability of the lazy implementation of a set due to Heller et al. The lazy set presents one of the most challenging issues in verifying linearisability: a linearisation point of an operation set by a process other than the one executing it. 
For this we develop a proof strategy based on refinement which uses thread local simulation conditions and the technique of potential linearisation points. The former allows us to prove linearisability for arbitrary numbers of processes by looking at only two processes at a time, the latter permits disposing with reasoning about the past. All proofs have been mechanically carried out using the interactive prover KIV. <s> BIB016 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> Stepwise refinement is a method for systematically transforming a high-level program into an efficiently executable one. A sequence of successively refined programs can also serve as a correctness proof, which makes different mechanisms in the program explicit. We present rules for refinement of multi-threaded shared-variable concurrent programs. We apply our rules to the problem of verifying linearizability of concurrent objects, that are accessed by an unbounded number of concurrent threads. Linearizability is an established correctness criterion for concurrent objects, which states that the effect of each method execution can be considered to occur atomically at some point in time between its invocation and response. We show how linearizability can be expressed in terms of our refinement relation, and present rules for establishing this refinement relation between programs by a sequence of local transformations of method bodies. Contributions include strengthenings of previous techniques for atomicity refinement, as well as an absorption rule, which is particularly suitable for reasoning about concurrent algorithms that implement atomic operations. We illustrate the application of the refinement rules by proving linearizability of Treiber’s concurrent stack algorithm and Michael and Scott’s concurrent queue algorithm. 
<s> BIB017 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> Locating linearization points (LPs) is an intuitive approach for proving linearizability, but it is difficult to apply the idea in Hoare-style logic for formal program verification, especially for verifying algorithms whose LPs cannot be statically located in the code. In this paper, we propose a program logic with a lightweight instrumentation mechanism which can verify algorithms with non-fixed LPs, including the most challenging ones that use the helping mechanism to achieve lock-freedom (as in HSY elimination-based stack), or have LPs depending on unpredictable future executions (as in the lazy set algorithm), or involve both features. We also develop a thread-local simulation as the meta-theory of our logic, and show it implies contextual refinement, which is equivalent to linearizability. Using our logic we have successfully verified various classic algorithms, some of which are used in the java.util.concurrent package. <s> BIB018 </s> Verifying Linearisability: A Comparative Survey <s> Construction-Based Proofs <s> Linearisability has become the standard correctness criterion for concurrent data structures, ensuring that every history of invocations and responses of concurrent operations has a matching sequential history. Existing proofs of linearisability require one to identify so-called linearisation points within the operations under consideration, which are atomic statements whose execution causes the effect of an operation to be felt. However, identification of linearisation points is a non-trivial task, requiring a high degree of expertise. For sophisticated algorithms such as Heller et al's lazy set, it even is possible for an operation to be linearised by the concurrent execution of a statement outside the operation being verified. 
This paper proposes an alternative method for verifying linearisability that does not require identification of linearisation points. Instead, using an interval-based logic, we show that every behaviour of each concrete operation over any interval is a possible behaviour of a corresponding abstraction that executes with coarse-grained atomicity. This approach is applied to Heller et al's lazy set to show that verification of linearisability is possible without having to consider linearisation points within the program code. <s> BIB019
Several researchers have also proposed the development of linearisable algorithms via incremental refinement, starting with an abstract specification. Due to the transitivity of refinement, and because the operations of the initial program are atomic (and trivially linearisable), linearisability of the final program is also guaranteed. An advantage of this approach is the ability to design an implementation algorithm, leaving open the possibility of developing variations of the desired algorithm. The first constructive approach to linearisability uses the Event-B framework BIB011 and its associated proof tool. However, the final algorithm obtained requires counters on the nodes (as opposed to pointers BIB003), so it is not clear whether such a scheme really is implementable. BIB006 presents a derivation of the Michael-Scott queue, using reduction BIB001 to justify each refinement step. This is extended by BIB007, who derive the more complicated stack of BIB012, which uses an additional "backoff array" in the presence of high contention for the shared central stack. Their derivation methods allow data refinement (without changing atomicity), operation refinement (where atomicity is modified, but state spaces remain the same), and refactoring (where the structure of the program is modified without changing its logical meaning) [Colvin 2007, 2009]. (Table comparing verification methods: two manual methods, one of which allows model checking and one in which auxiliary code can linearise other operations; BIB013: automatic, offering full automation via shape analysis, though the lazy set BIB005 has not yet been verified in the method; BIB014: uses the Hindsight Lemma to generate proof obligations, and hence applies only to list-based set implementations; BIB009: linearisability proofs are performed for coarse-grained abstractions; BIB016: manual data refinement proofs; BIB018: manual, with a separation logic encoding; BIB019: interval-based reasoning, in which linearisability is proved for coarse-grained abstractions.)
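The Michael-Scott paper BIB003 also contains a simpler two-lock queue, which sits at the coarser-grained end of the atomicity spectrum that such refinement-based derivations move along. A Python sketch of that two-lock algorithm (the original is C-like pseudocode; `TwoLockQueue` is an illustrative name):

```python
import threading

class _Node:
    def __init__(self, value=None):
        self.value = value
        self.next = None

class TwoLockQueue:
    """Sketch of Michael & Scott's two-lock queue: one enqueue and one
    dequeue can proceed concurrently because they lock different ends."""
    def __init__(self):
        dummy = _Node()                  # dummy node: head and tail are never None
        self.head = dummy
        self.tail = dummy
        self.head_lock = threading.Lock()
        self.tail_lock = threading.Lock()

    def enqueue(self, value):
        node = _Node(value)
        with self.tail_lock:
            self.tail.next = node        # link the new node at the tail...
            self.tail = node             # ...then swing the tail pointer

    def dequeue(self):
        with self.head_lock:
            new_head = self.head.next    # head is a dummy; its next holds the front value
            if new_head is None:
                return None              # queue is empty
            value = new_head.value
            self.head = new_head         # new_head becomes the new dummy
            return value
```

The dummy-node trick is what keeps the two locks independent: enqueuers touch only `tail`, dequeuers only `head`, so neither ever needs the other's lock.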
These proofs are not mechanised, but there is potential to perform mechanisation using proof tools such as QED BIB015. BIB004 BIB010 present a number of derivations of nonblocking algorithms and develop special-purpose reduction theorems for derivation. However, these derivations aim to preserve lock-freedom (a progress property), as opposed to linearisability. BIB008 and Vechev et al. [2009] present a tool-assisted derivation method based on bounded model checking. Starting with a sequential linked-list set, they derive several variations of set algorithms implemented using Double Compare-and-Swap (DCAS) and CAS instructions, as well as variations that use marking schemes. Although their methods allow relatively large state spaces to be searched, these state spaces are bounded in size, and hence only finite executions by a fixed number of processes are checked; that is, linearisability of the final algorithms cannot be guaranteed. More recently, BIB017 has presented a derivation of the Treiber stack and the Michael-Scott queue in a refinement calculus framework BIB002. Jonsson defines linearisability as follows: a program P is linearisable if and only if atomic{P} is refined by P [Jonsson 2012, Definition 3.1]. Reduction-style commutativity checks are used to justify splitting the atomicity at each stage. With this interpretation of linearisability, Jonsson is able to start by treating the entire concrete operation as a single atomic transition, then incrementally split its atomicity into finer-grained statements.
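Jonsson's starting point, treating a whole operation as one atomic action and then splitting it into a CAS retry loop, is visible in Treiber's stack itself. Below is a Python sketch; the hardware compare-and-swap is emulated here with a small internal lock purely so the retry structure is runnable, so this sketch is not a genuinely lock-free implementation:

```python
import threading

class _Node:
    def __init__(self, value, next_node):
        self.value = value
        self.next = next_node

class TreiberStack:
    """Sketch of Treiber's stack. The successful CAS on head is the
    linearisation point of both push and pop."""
    def __init__(self):
        self._head = None
        self._cas_lock = threading.Lock()  # emulates atomic CAS; not a stack-wide lock

    def _cas_head(self, expected, new):
        # Emulated compare-and-swap: replace head only if it is unchanged.
        with self._cas_lock:
            if self._head is expected:
                self._head = new
                return True
            return False

    def push(self, value):
        while True:                        # retry loop: interference restarts the attempt
            old = self._head
            if self._cas_head(old, _Node(value, old)):
                return

    def pop(self):
        while True:
            old = self._head
            if old is None:
                return None                # empty stack
            if self._cas_head(old, old.next):
                return old.value
```

Read through Jonsson's lens: the atomic version would perform the read of `head` and its update in one step; the refinement splits that step, and the commutativity argument shows the extra interleavings introduced by the retry loop are harmless.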
Verifying Linearisability: A Comparative Survey <s> An Optimistic Set <s> Reserve ( interval? : Interval; until! : Time; report! : Report ) A reservation is made for a period of time (interval?), and returns the expiry time of the new reservation (until!). A client can cancel a reservation by making a new reservation in which interval? is zero; this will then be removed by the next scavenge. Definition ∗ Reservesuccess ∆RS interval? : Interval until! : Time until! = now + interval? shutdown′ = shutdown resns′ = resns⊕ {clientnum 7→ until!} Reports † Reserve = (Reservesuccess ∧ Success) ⊕ TooManyUsers ⊕ NotAvailable ⊕ NotKnownUser The client cannot be a guest user. The reservation must expire before the shutdown time or be for a zero interval. There may be no space for new reservations. ∗ In the Definition section,⊕ is used for relational overriding. Any existing entry under clientnum in resns is removed and a new entry with value until! is added. † In the Reports section, ⊕ is applied to schemas for schema overriding. Mathematically, this can be defined as A ⊕ B = (A ∧ ¬ pre B) ∨ B, where pre B is the precondition of the B schema in which all after state and output components have been existentially quantified. In practice this means that the error conditions are ‘checked’ in reverse order. 78 Formal Specification and Documentation using Z 4.5.5 Service charges The basic parameters are supplemented by two hidden parameters, an operation identifier op? and the cost of executing the operation cost!. The latter can conveniently be defined in terms of natural numbers. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> An Optimistic Set <s> List-based implementations of sets are a fundamental building block of many concurrent algorithms. A skiplist based on the lock-free list-based set algorithm of Michael will be included in the JavaTM Concurrency Package of JDK 1.6.0. 
However, Michael's lock-free algorithm has several drawbacks, most notably that it requires all list traversal operations, including membership tests, to perform cleanup operations of logically removed nodes, and that it uses the equivalent of an atomically markable reference, a pointer that can be atomically “marked,” which is expensive in some languages and unavailable in others. We present a novel “lazy” list-based implementation of a concurrent set object. It is based on an optimistic locking scheme for inserts and removes, eliminating the need to use the equivalent of an atomically markable reference. It also has a novel wait-free membership test operation (as opposed to Michael's lock-free one) that does not need to perform cleanup operations and is more efficient than that of all previous algorithms. Empirical testing shows that the new lazy-list algorithm consistently outperforms all known algorithms, including Michael's lock-free algorithm, throughout the concurrency range. At high load, with 90% membership tests, the lazy algorithm is more than twice as fast as Michael's. This is encouraging given that typical search structure usage patterns include around 90% membership tests. By replacing the lock-free membership test of Michael's algorithm with our new wait-free one, we achieve an algorithm that slightly outperforms our new lazy-list (though it may not be as efficient in other contexts as it uses Java's RTTI mechanism to create pointers that can be atomically marked). <s> BIB002 </s> Verifying Linearisability: A Comparative Survey <s> An Optimistic Set <s> In the quest for tractable methods for reasoning about concurrent algorithms both rely/guarantee logic and separation logic have made great advances. They both seek to tame, or control, the complexity of concurrent interactions, but neither is the ultimate approach.
Rely-guarantee copes naturally with interference, but its specifications are complex because they describe the entire state. Conversely, separation logic has difficulty dealing with interference, but its specifications are simpler because they describe only the relevant state that the program accesses. We propose a combined system which marries the two approaches. We can describe interference naturally (using a relation as in rely/guarantee), and where there is no interference, we can reason locally (as in separation logic). We demonstrate the advantages of the combined approach by verifying a lock-coupling list algorithm, which actually disposes/frees removed nodes. <s> BIB003 </s> Verifying Linearisability: A Comparative Survey <s> An Optimistic Set <s> Linearisability is the key correctness criterion for concurrent implementations of data structures shared by multiple processes. In this paper we present a proof of linearisability of the lazy implementation of a set due to Heller et al. The lazy set presents one of the most challenging issues in verifying linearisability: a linearisation point of an operation set by a process other than the one executing it. For this we develop a proof strategy based on refinement which uses thread local simulation conditions and the technique of potential linearisation points. The former allows us to prove linearisability for arbitrary numbers of processes by looking at only two processes at a time, the latter permits disposing with reasoning about the past. All proofs have been mechanically carried out using the interactive prover KIV. <s> BIB004 </s> Verifying Linearisability: A Comparative Survey <s> An Optimistic Set <s> Concurrent objects are inherently complex to verify. In the late 80s and early 90s, Herlihy and Wing proposed linearizability as a correctness condition for concurrent objects, which, once proven, allows us to reason about concurrent objects using pre- and postconditions only.
A concurrent object is linearizable if all of its operations appear to take effect instantaneously some time between their invocation and return. In this article we define simulation-based proof conditions for linearizability and apply them to two concurrent implementations, a lock-free stack and a set with lock-coupling. Similar to other approaches, we employ a theorem prover (here, KIV) to mechanize our proofs. Contrary to other approaches, we also use the prover to mechanically check that our proof obligations actually guarantee linearizability. This check employs the original ideas of Herlihy and Wing of verifying linearizability via possibilities. <s> BIB005
In this section, we present a simplified version of Heller et al.'s concurrent set algorithm BIB002 (see Figure 9) operating on a shared linked list that is sorted in strictly ascending order of values. Locks are used to control concurrent access to list nodes. The algorithm consists of operations add and remove that use auxiliary operation locate to optimistically determine the position of the node to be inserted/deleted from the linked list. Each node of the list consists of fields val, next, mark, and lock, where val stores the value of the node, next is a pointer to the next node in the list, mark denotes the marked bit, and lock stores the identifier of the process that currently holds the lock to the node (if any). The lock field of each node only prevents modification to the node; it is possible for processes executing locate and contains to read values of locked nodes when they traverse the list. Two dummy nodes with values −∞ and ∞ are used at the start (Head) and end (Tail) of the list, and all values v inserted into the set are assumed to satisfy −∞ < v < ∞. Operation locate(x) is used to obtain pointers to two nodes pred (the predecessor node) and curr (the current node). A call to locate(x) traverses the list ignoring locks, acquires locks once a node with value greater than or equal to x is reached, then validates the locked nodes. If the validation fails, the locks are released and the search for x is restarted. When locate(x) returns, both pred and curr are locked by the calling process, the value of pred is always less than x, and the value of curr may either be greater than x (if x is not in the list) or equal to x (if x is in the list). Operation add(x) calls locate(x), which returns the predecessor and current nodes as n1 and n3; then, if x is not already in the list (i.e., the value of the current node n3 is strictly greater than x), a new node n2 with value field x is inserted into the list between n1 and n3 and true is returned. If x is already in the list, add(x) does nothing and returns false.
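To make the locate/add path concrete, the following is a minimal single-threaded Python sketch of the structure described above. All names (Node, locate, add, the A-labels in comments) are illustrative rather than the paper's Figure 9 code, and lock handling is simplified: locate here releases its locks before returning, which is safe only in this sequential illustration, whereas the real algorithm returns while still holding both locks.

```python
import threading

class Node:
    def __init__(self, val):
        self.val = val
        self.next = None
        self.mark = False              # logical-deletion bit
        self.lock = threading.Lock()   # per-node lock

# Sentinel values for the dummy Head and Tail nodes.
NEG_INF, POS_INF = float('-inf'), float('inf')

def new_list():
    head, tail = Node(NEG_INF), Node(POS_INF)
    head.next = tail
    return head

def validate(pred, curr):
    # Locked nodes are usable only if neither is marked and they are adjacent.
    return (not pred.mark) and (not curr.mark) and pred.next is curr

def locate(head, x):
    while True:                         # restart if validation fails
        pred, curr = head, head.next
        while curr.val < x:             # traverse ignoring locks
            pred, curr = curr, curr.next
        with pred.lock, curr.lock:      # acquire locks, then validate
            if validate(pred, curr):
                return pred, curr       # real algorithm returns holding locks

def add(head, x):
    pred, curr = locate(head, x)        # A1: pred/curr play the roles of n1/n3
    if curr.val != x:                   # A2: x not yet present
        node = Node(x)                  # A3: new unmarked node n2
        node.next = curr
        pred.next = node                # A4: insertion becomes visible
        return True
    return False                        # x already present
```

Note how the marked bit and the adjacency check in validate are exactly what makes the optimistic, lock-free traversal safe to re-check after locking.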
Operation remove(x) also starts by calling locate(x), then if x is in the list the current node n2 is removed and true is returned to indicate that x was found and removed. If x is not in the list, the remove operation does nothing and returns false. Note that operation remove(x) distinguishes between a logical removal, which sets the marked field of n2 (the node corresponding to x), and a physical removal, which updates the next field of n1 so that n2 is no longer reachable. As a concrete example, consider the linked list in Figure 10(a), which represents the set {3, 18, 77}, and an execution of add(42) by process p without interference. Execution starts by calling locate(42) and once this returns, n1 p and n3 p are set as shown in Figure 10(b). Having found and locked the correct location for the insertion, process p tests to see that the value is not already in the set (line A2), then creates a new unmarked node n2 p with value 42 and next pointer n3 p (see Figure 10(c)). Then by executing A4, the executing process sets the next pointer of n1 p to n2 p, linearising a successful add operation (see Figure 10(d)). Thus, provided no remove(42) operations are executed, any other add(42) operation that is started after A4 has been executed will return false. After the linearisation, process p releases the locks on n1 p and n3 p and returns true to indicate the operation was successful. Now consider the execution of remove(18) by process p on the set {3, 18, 77} depicted by the linked list in Figure 11(a), where the process executes without interference. Like add, operation remove(18) first calls locate(18), which returns the state depicted in Figure 11(b). At R2, a check is made that the element to be removed (given by node n2 p) is actually in the set. Then, the node n2 p is removed logically by setting its marked value to true (line R3), which is the linearisation point of remove (see Figure 11(c)).
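The two-phase removal (logical marking at the linearisation point, then physical unlinking) can be sketched as follows. This is again an illustrative single-threaded Python rendering, not the paper's code; the R-labels in the comments correspond to the description above.

```python
import threading

class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next, self.mark = val, nxt, False
        self.lock = threading.Lock()

def make_set(vals):
    """Sorted list with -inf/+inf sentinel nodes."""
    head = Node(float('-inf'))
    curr = head
    for v in sorted(vals):
        curr.next = Node(v)
        curr = curr.next
    curr.next = Node(float('inf'))
    return head

def locate(head, x):
    # Simplified sequential locate: no locking or validation needed here.
    pred, curr = head, head.next
    while curr.val < x:
        pred, curr = curr, curr.next
    return pred, curr

def remove(head, x):
    n1, n2 = locate(head, x)      # R1
    with n1.lock, n2.lock:
        if n2.val != x:           # R2: x not in the set
            return False
        n2.mark = True            # R3: logical removal -- the linearisation point
        n3 = n2.next              # R4
        n1.next = n3              # R5: physical removal; n2 now unreachable
        return True
```

After R3 the node is still reachable but marked, which is exactly the window in which no other process may re-insert the value.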
After execution of the linearisation point, operation remove sets n3 p to be the next pointer of the removed node (line R4), and then node n2 p is physically removed by setting the next pointer of n1 p to n3 p (see Figure 11(d)). Then, the held locks are released and true is returned to indicate that the remove operation has succeeded. Note that although 18 has been logically removed from the set in Figure 11(c), no other process is able to insert 18 into the set until the marked node has also been physically removed (as depicted in Figure 11(d)), and the lock on n1 p has been released. Verifying add and remove Operations. Verifying correctness of add and remove, which have fixed linearisation points, is relatively straightforward because the globally visible effect of both operations may be determined without having to refer to the future states of the linked list. The refinement-based methods (Section 3.3) verify correctness using forward simulation and the state augmentation methods (Section 3.4) modify the abstract state directly. We present outlines of the proofs using the simulation-based methods (Section 4.2), the refinement-based method of BIB004 (Section 4.3), and the auxiliary variable method (Section 4.4). To unify the presentation, we translate the PVS formulae and Vafeiadis' RGSep notation BIB003 into Z BIB001, which is the notation used by Derrick et al. Inevitably, this causes some of the benefits of a proof method to be lost; we discuss the effect of the translation and the benefits provided by the original framework where necessary. Full details on modelling concurrent algorithms in Z are given by BIB005. To reason about linked lists, memory must be explicitly modelled, and hence, the concrete state CState is defined with Label and Node assumed to be the types of a program counter label and a node, respectively. Each atomic program statement is represented by a Z schema.
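The forward-simulation idea mentioned above can be illustrated on a single concrete step. The Python sketch below is not any of the surveyed frameworks: the heap encoding (addresses mapped to (val, next, mark) triples) and the abstraction function are invented for illustration. It checks one simulation square: applying the abstraction function after the concrete linearising step n1.next := n2 yields exactly the abstract effect of add(42).

```python
# Concrete state: a heap mapping addresses to (val, next, mark) triples;
# `head` is the fixed address of the Head sentinel.
def abs_set(heap, head):
    """Abstraction function: values of unmarked nodes reachable from head,
    excluding the two sentinels (Tail is the node whose next is None)."""
    s, addr = set(), heap[head][1]
    while addr is not None:
        val, nxt, mark = heap[addr]
        if nxt is not None and not mark:
            s.add(val)
        addr = nxt
    return s

def step_A4(heap, n1, n2):
    """Concrete linearising step of add: n1.next := n2 (pure version)."""
    val, _, mark = heap[n1]
    heap = dict(heap)
    heap[n1] = (val, n2, mark)
    return heap

# Heap for Head(0) -> 3(1) -> 77(3) -> Tail(4), with the new node 42
# already allocated at address 2 and pointing at 77.
heap = {0: (float('-inf'), 1, False),
        1: (3, 3, False),
        3: (77, 4, False),
        4: (float('inf'), None, False),
        2: (42, 3, False)}

before = abs_set(heap, 0)
after = abs_set(step_A4(heap, 1, 2), 0)
# One instance of the simulation condition: the concrete step implements
# the abstract operation "add 42 to the set".
assert after == before | {42}
```

Because the linearisation point is fixed, this check needs only the current heap, with no reference to future states, which is precisely why add and remove are the easy cases.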
The schemas for the statements in Figure 9 labelled A5 and A7 executed by process p are modelled by Add5 p and Add7 p, respectively. Notation ΔCState imports both unprimed and primed versions of the variables of CState into the specification, enabling one to identify specifications that modify CState; unprimed and primed variables are evaluated in the current and next states, respectively. Using the Object-Z convention, we assume that v′ = v for every variable v unless v′ = k is explicitly defined for some value k.
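The Z schemas themselves are not reproduced in this chunk, but their shape can be mimicked: an operation schema is a predicate over an unprimed and a primed state binding, with every unmentioned variable implicitly unchanged (the Object-Z convention above). The following Python sketch is purely hypothetical, including the guessed content of the A5 statement; it only illustrates the framing convention.

```python
def op_schema(changed):
    """Decorator mimicking a Z/Object-Z operation schema: the wrapped
    predicate constrains only the variables in `changed`; every other
    variable must satisfy v' = v (the implicit frame)."""
    def wrap(pred):
        def schema(pre, post):
            frame_ok = all(post[v] == pre[v]
                           for v in pre if v not in changed)
            return frame_ok and pred(pre, post)
        return schema
    return wrap

# Hypothetical rendering of a statement schema such as Add5_p:
# advance the program counter and release the lock held on n1.
@op_schema(changed={'pc', 'lock_n1'})
def Add5_p(pre, post):
    return (pre['pc'] == 'A5'
            and post['pc'] == 'A6'
            and pre['lock_n1'] == 'p'
            and post['lock_n1'] is None)

pre = {'pc': 'A5', 'lock_n1': 'p', 'res': True}
post = {'pc': 'A6', 'lock_n1': None, 'res': True}
assert Add5_p(pre, post)
# Changing `res`, which the schema does not mention, violates the frame.
assert not Add5_p(pre, {'pc': 'A6', 'lock_n1': None, 'res': False})
```

The decorator plays the role that ΔCState plus the v′ = v convention plays in the Z text: it separates what a statement changes from what it leaves alone.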
Verifying Linearisability: A Comparative Survey <s> Method 3: Augmented States <s> In the quest for tractable methods for reasoning about concurrent algorithms both rely/guarantee logic and separation logic have made great advances. They both seek to tame, or control, the complexity of concurrent interactions, but neither is the ultimate approach. Rely-guarantee copes naturally with interference, but its specifications are complex because they describe the entire state. Conversely, separation logic has difficulty dealing with interference, but its specifications are simpler because they describe only the relevant state that the program accesses. We propose a combined system which marries the two approaches. We can describe interference naturally (using a relation as in rely/guarantee), and where there is no interference, we can reason locally (as in separation logic). We demonstrate the advantages of the combined approach by verifying a lock-coupling list algorithm, which actually disposes/frees removed nodes. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> Method 3: Augmented States <s> Locating linearization points (LPs) is an intuitive approach for proving linearizability, but it is difficult to apply the idea in Hoare-style logic for formal program verification, especially for verifying algorithms whose LPs cannot be statically located in the code. In this paper, we propose a program logic with a lightweight instrumentation mechanism which can verify algorithms with non-fixed LPs, including the most challenging ones that use the helping mechanism to achieve lock-freedom (as in HSY elimination-based stack), or have LPs depending on unpredictable future executions (as in the lazy set algorithm), or involve both features. We also develop a thread-local simulation as the meta-theory of our logic, and show it implies contextual refinement, which is equivalent to linearizability.
Using our logic we have successfully verified various classic algorithms, some of which are used in the java.util.concurrent package. <s> BIB002
The method requires the following steps. (1) Introduce auxiliary variables to the existing program, at least one of which is an abstraction of the data type in question, then define the abstract operations on these auxiliary variables that are required to be implemented by the concrete program. Vafeiadis' proofs are performed using the RGSep framework BIB001. In this article, for uniformity, we translate the example expressed in RGSep into Z. For the add operation, the state space is extended with fresh variables AbsRes (representing the abstract result) and S (representing the abstract set) to obtain an augmented state AugState. In addition, the fixed linearisation points A5 and A7 are augmented as follows, where the brackets < > delimit atomicity. ... A5: <n1.next := n2; AbsRes := (x ∉ S); S := (S ∪ {x})> ... A7: <res := false; AbsRes := (x ∉ S); S := (S ∪ {x})> ... Note that at A7, the auxiliary code sets AbsRes to false (since x ∈ S holds at A7), and therefore the abstract set S remains unchanged. The remove operation is similar, therefore its details are elided. Assume that lock(n) returns the ID of the process that currently holds the lock on node n and that lock(n) = ∅ holds if no process has locked n. The method of Vafeiadis also requires substantial changes to cope with verification of the contains operation. In particular, auxiliary statements that are able to linearise the currently executing contains operations must be introduced to the remove operation. The augmented state introduces a further auxiliary variable OSet ⊆ P × V × B, where (p, v, r) ∈ OSet if and only if process p is executing a contains operation with input v that has set its return value to r. This requires modification of environment actions that modify the shared state space. Operations Lock p, Unlock p, Add p, and Remove p are as given in Section 4.4.
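The auxiliary-variable augmentation at A5/A7 can be mimicked in a few lines of Python. This sketch compresses the concrete list into a sorted value list and is purely illustrative (aug_add and the state dictionary are invented names); its point is the proof obligation that the concrete result must agree with the abstract result recorded at the linearisation point.

```python
def aug_add(state, x):
    """add(x) over an abstract list state['list'] (sorted values), with
    auxiliary variables S (abstract set) and AbsRes (abstract result)
    updated atomically at the linearisation points (cf. A5 / A7)."""
    lst, S = state['list'], state['S']
    if x not in lst:
        # A5: <insert x; AbsRes := (x not in S); S := S ∪ {x}>
        state['AbsRes'] = x not in S
        state['S'] = S | {x}
        state['list'] = sorted(lst + [x])
        res = True
    else:
        # A7: <res := false; AbsRes := (x not in S); S := S ∪ {x}>
        # Since x ∈ S here, AbsRes is false and S ∪ {x} = S.
        state['AbsRes'] = x not in S
        state['S'] = S | {x}
        res = False
    # Proof obligation at return: the concrete result agrees with the
    # abstract result fixed at the linearisation point.
    assert res == state['AbsRes']
    return res
```

Executing both branches shows why the same auxiliary assignment works at A5 and A7: the union with {x} is a no-op exactly when AbsRes is false.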
The Mark p action, which is an environment action for process p that marks a node, must also modify the abstract set S (as in Section 4.4) and the auxiliary OSet. In addition to setting the marked value to true and removing v from the abstract set, the executing process p also sets the return value of all processes in C ⊆ OSet that are currently executing a contains(v) to false, which linearises each of the processes in C. In addition, two environment steps that add and remove triples of type P × V × B to/from the auxiliary variable OSet are introduced. These represent environment processes that invoke and complete a contains operation. The auxiliary code added to the add and remove operations is as before, but a remove(x) operation must additionally linearise processes in OSet that are executing contains(x). Thus, statement R3 is augmented as follows: ... R3: <n2.mark := true; AbsRes(this) := (x ∈ S); for each q ∈ OSet do if q.2 = n2.val then AbsRes(q) := false> ... The augmented version of the contains operation is given next. Details of the annotation for the proof outline are elided in the following, but the interested reader may consult BIB002.
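The externally-set linearisation point in the augmented R3 can be sketched as one atomic function over a state dictionary. The encoding below (remove_LP, OSet as a set of (process, value, result) triples with None for "not yet linearised") is an illustrative assumption, not the RGSep or Z formulation.

```python
def remove_LP(state, this, x):
    """Sketch of augmented statement R3: record this process's abstract
    result, logically remove x from the abstract set, and linearise every
    pending contains(x) in OSet by fixing its result to False."""
    # AbsRes(this) := (x ∈ S), evaluated before the removal takes effect.
    state['AbsRes'][this] = x in state['S']
    state['S'] = state['S'] - {x}
    # For each q ∈ OSet with q.2 = x, set AbsRes(q) := false:
    # the remover linearises other processes' contains(x) calls.
    state['OSet'] = {(q, v, (False if v == x else r))
                     for (q, v, r) in state['OSet']}
```

The key point mirrored here is that a pending contains(v) for a different value v is left untouched; only the calls that observe the removed value are linearised by the environment.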
Verifying Linearisability: A Comparative Survey <s> The rely condition for process p is <s> Development methods for (sequential) programs that run in isolation have been studied elsewhere. Programs that run in parallel can interfere with each other, either via shared storage or by sending messages. Extensions to earlier development methods are proposed for the rigorous development of interfering programs. In particular, extensions to the specification method based on postconditions that are predicates of two states and the development methods of operation decomposition and data refinement are proposed. 41 references. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> The rely condition for process p is <s> A language for parallel programming, with a primitive construct for synchronization and mutual exclusion, is presented. Hoare's deductive system for proving partial correctness of sequential programs is extended to include the parallelism described by the language. The proof method lends insight into how one should understand and present parallel programs. Examples are given using several of the standard problems in the literature. Methods for proving termination and the absence of deadlock are also given. <s> BIB002
which describes the potential global modifications that the environment of process p can make. With this encoding, one can clearly see that the rely condition is an abstraction of the statements of add and remove that modify the global state. The method requires annotation of code using separation logic-style assertions. In addition, building on the framework of BIB001, these assertions must be stable with respect to the rely conditions. The proof outlines for the lazy set are elided in the original presentation; however, they may be reconstructed from the other list examples in the thesis. We further adapt the proof outlines using Z-style notation. The invariants are formalised using the following predicates, where ls(x, A, y) converts the linked list from node x to node y into a sequence A (where we assume y is reachable from x), predicate sorted(A) holds if and only if A is sorted in ascending order, and s(A) returns the set of elements corresponding to A. Note that due to a typographical error, the case of the add operation that returns false is missing in the original; however, it can be reconstructed from the remove operation (see Figure 12). Of course, the annotations in Figure 12 are not available in Z, but can be encoded as invariants on the overall specification by explicitly introducing a program counter variable. For example, given that pc(p) denotes the program counter for process p, whose value is a program label, the assertion at A7 can be encoded as a predicate. Such proof obligations must be resilient to interference from other processes BIB002, and hence, one must verify that the following holds for each p, q ∈ P such that p ≠ q, and Env q ∈ {Lock q, Unlock q, Mark q, Add q, Remove q}.
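The stability requirement on annotations can be made concrete by brute force over a tiny state space: an assertion is stable under a rely relation if no environment step permitted by the rely can falsify it. The two-variable state space and the particular rely below are toy assumptions chosen only to show the check.

```python
def stable(pred, rely, states):
    """pred is stable under rely iff every rely step from a state
    satisfying pred leads to a state satisfying pred."""
    return all(pred(s2)
               for s1 in states for s2 in states
               if pred(s1) and rely(s1, s2))

# Toy state: (x_is_in_abstract_set, x_node_is_marked).
states = [(in_s, marked) for in_s in (True, False) for marked in (True, False)]

# Toy rely: the environment may stutter, or mark x's node
# (simultaneously removing x from the abstract set).
def rely(s1, s2):
    return s2 == s1 or (s1 == (True, False) and s2 == (False, True))

# "x is in the set" is NOT stable: an environment Mark step breaks it.
assert not stable(lambda s: s[0], rely, states)
# "x in the set implies its node is unmarked" IS stable under this rely.
assert stable(lambda s: (not s[0]) or (not s[1]), rely, states)
```

This is the finite analogue of the proof obligation at the end of the paragraph above: each assertion in the outline must be preserved by every Env q step of every other process q.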
Verifying Linearisability: A Comparative Survey <s> Discussion <s> Despite years of research, the design of efficient nonblocking algorithms remains difficult. A key reason is that current shared-memory multiprocessor architectures support only single-location synchronisation primitives such as compare-and-swap (CAS) and load-linked/store-conditional (LL/SC). Recently researchers have investigated the utility of double-compare-and-swap (DCAS) -- a generalisation of CAS that supports atomic access to two memory locations -- in overcoming these problems. We summarise recent research in this direction and present a detailed case study concerning a previously published nonblocking DCAS-based double-ended queue implementation. Our summary and case study clearly show that DCAS does not provide a silver bullet for nonblocking synchronisation. That is, it does not make the design and verification of even mundane nonblocking data structures with desirable properties easy. Therefore, our position is that while slightly more powerful synchronisation primitives can have a profound effect on ease of algorithm design and verification, DCAS does not provide sufficient additional power over CAS to justify supporting it in hardware. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> Discussion <s> This paper presents a practical automatic verification procedure for proving linearizability (i.e., atomicity and functional correctness) of concurrent data structure implementations. The procedure employs a novel instrumentation to verify logically pure executions, and is evaluated on a number of standard concurrent stack, queue and set algorithms.
<s> BIB002 </s> Verifying Linearisability: A Comparative Survey <s> Discussion <s> An execution containing operations performing queries or updating a concurrent object is linearizable w.r.t an abstract implementation (called specification) iff for each operation, one can associate a point in time, called linearization point, such that the execution of the operations in the order of their linearization points can be reproduced by the specification. Finding linearization points is particularly difficult when they do not belong to the operations's actions. This paper addresses this challenge by introducing a new technique for rewriting the implementation of the concurrent object and its specification such that the new implementation preserves all executions of the original one, and its linearizability (w.r.t. the new specification) implies the linearizability of the original implementation (w.r.t. the original specification). The rewriting introduces additional combined methods to obtain a library with a simpler linearizability proof, i.e., a library whose operations contain their linearization points. We have implemented this technique in a prototype, which has been successfully applied to examples beyond the reach of current techniques, e.g., Stack Elimination and Fetch&Add. <s> BIB003 </s> Verifying Linearisability: A Comparative Survey <s> Discussion <s> Linearisability has become the standard correctness criterion for concurrent data structures, ensuring that every history of invocations and responses of concurrent operations has a matching sequential history. Existing proofs of linearisability require one to identify so-called linearisation points within the operations under consideration, which are atomic statements whose execution causes the effect of an operation to be felt. However, identification of linearisation points is a non-trivial task, requiring a high degree of expertise. 
For sophisticated algorithms such as Heller et al's lazy set, it even is possible for an operation to be linearised by the concurrent execution of a statement outside the operation being verified. This paper proposes an alternative method for verifying linearisability that does not require identification of linearisation points. Instead, using an interval-based logic, we show that every behaviour of each concrete operation over any interval is a possible behaviour of a corresponding abstraction that executes with coarse-grained atomicity. This approach is applied to Heller et al's lazy set to show that verification of linearisability is possible without having to consider linearisation points within the program code. <s> BIB004 </s> Verifying Linearisability: A Comparative Survey <s> Discussion <s> Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on identifying the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. ::: ::: In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points. We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof. <s> BIB005
With the advances in linearisability verification, correctness of the optimistic set is straightforward, and there is even the possibility of automating the verification process (e.g., by extending the methods of BIB002 and BIB003). We have presented a detailed account of three methods that manually identify the linearisation points, as well as abstraction relations and invariants. These methods are based on differing formal foundations: method 1 uses I/O Automata, method 2 uses Z, and method 3 uses RGSep. To simplify comparison between these approaches, we have translated each of these to Z. An advantage of RGSep (method 3) that is lost in the translation to Z is the ability to syntactically distinguish between predicates that may be affected by the environment. However, as already discussed, the majority of predicates in each assertion are nonlocal, and hence, the loss of this feature does not overly affect the complexity of the proof. Methods 1 and 2 are mechanised in theorem provers PVS and KIV, respectively. Tool support for extensions to method 3 has been developed, and there is a possibility for mechanising proofs using method 3 directly, but this has thus far not been done. Each of the methods supports process-local verification. Method 1 proves invariants that describe the behaviours of the other processes, method 2 explicitly encodes interference freedom conditions in the refinement relation, and method 3 additionally supports compositionality via rely-guarantee reasoning. The underlying challenges in verifying linearisability are manifested in each of the proof methods in essentially the same way, namely the identification of the correct abstraction relations and invariants, the correct identification of linearisation points, and the corresponding abstract changes that occur at each linearisation point. These also remain the difficult aspects of a proof to automate.
The lazy set represents a class of algorithms that can only be verified by allowing an operation to set the linearisation point of another, and its proof is therefore more involved. The methods we have considered tackle the problem using seemingly different techniques. However, translating each proof to a uniform framework, in this case Z, one can see that the underlying ideas behind the methods are similar, and experience in verification using one of these methods can aid in the proof in another. Identifying the linearisation points and understanding the effects of linearisation on the object at hand remains the difficult task. Here, further complications arise because external operations potentially set the linearisation point of the current operation. BIB004 present a method for verifying linearisability using an interval-based framework, which aims to capture the fact that operations like contains must only observe the value being checked as being in the set at some point within its interval of execution. The logic is able to prove properties of the form rely p ∧ beh p (contains(x), true) ⇒ ⋄(x ∈ absSet). Here, beh p (contains(x), true) defines an interval-based semantics of the behaviour of contains(x) executed by process p that returns true, rely p is an interval predicate that defines the behaviour of the environment of p, and ⋄(x ∈ absSet) is an interval predicate that holds if x ∈ absSet is true at some point in the given interval. Such proofs allow one to avoid backward reasoning because the entire interval of execution is taken into account. The Herlihy-Wing queue represents a class of algorithms that can only be proved linearisable by considering the future behaviours of the currently executing operation calls, further complicated by the potential for these operations to modify the data structure at hand. Reasoning must therefore appeal to backward simulation or prophecy variables. One approach uses a monolithic backward simulation relation that captures all possible future behaviours at the abstract level.
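A toy model of the interval-predicate idea (not the actual logic of BIB004) treats an interval as a finite sequence of states and lifts state predicates with "at some point" and "throughout" operators:

```python
def sometime(state_pred):
    """Interval-predicate lifting: holds of an interval (a finite sequence
    of states) iff state_pred holds at some point within it."""
    return lambda interval: any(state_pred(s) for s in interval)

def always(state_pred):
    """Holds of an interval iff state_pred holds throughout it."""
    return lambda interval: all(state_pred(s) for s in interval)

# Abstract-set states observed during a contains(42) call: 42 is added
# and removed by the environment while contains is running.
interval = [{3, 18, 77}, {3, 18, 42, 77}, {3, 18, 77}]

in_set = lambda s: 42 in s
# 42 was in the set at some point of the interval, though not throughout --
# exactly the guarantee needed to justify contains(42) returning true.
assert sometime(in_set)(interval)
assert not always(in_set)(interval)
```

Because the whole interval is quantified over at once, there is no need to pick the single instant at which contains "took effect", which is how this style of proof sidesteps backward reasoning about linearisation points.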
The method has been shown to be complete for verifying linearisability; however, developing and verifying such a simulation relation is a complex task. The aspect-oriented proof method decomposes a linearisability proof for purely blocking algorithms into simpler aspects that are (in theory) easier to verify BIB005. However, it is currently not clear whether every data structure can be decomposed into such aspects. These are not the only methods capable of handling future linearisation points; two other methods, both based on backward simulation, could be applied to verify the Herlihy-Wing queue. We have not presented a detailed comparison here as they have not verified the Herlihy-Wing queue (i.e., we do not attempt a proof using their methods ourselves). Groves et al.'s backward simulations against canonical automata can cope with future linearisation points BIB001. The simulation-based methods (Section 6.1) have continued to be improved, with the core theory incorporated into an interval-based rely-guarantee framework. Here, linearisability is re-encoded using possibilities, which describe the orders of completions of pending operation calls. The use of symbolic execution simplifies mechanisation of their approach within KIV. Their methods have been applied to verify correctness of an array-based multiset with insert, delete, and lookup operations. An interesting aspect of this algorithm is that it is possible for a lookup of an element x to return false even if the element x is in the array in all concrete states throughout the execution of the lookup operation. Their methods have been linked to existing completeness results.
Verifying Linearisability: A Comparative Survey <s> The contains Operation <s> Refinement mappings are used to prove that a lower-level specification correctly implements a higher-level one. We consider specifications consisting of a state machine (which may be infinite-state) that specifies safety requirements, and an arbitrary supplementary property that specifies liveness requirements. A refinement mapping from a lower-level specification S 1 to a higher-level one S 2 is a mapping from S 1 's state space to S 2 's state space. It maps steps of S 1 's state machine to steps of S 2 's state machine and maps behaviors allowed by S 1 to behaviors allowed by S 2 . We show that, under reasonable assumptions about the specification, if S 1 implements S 2 , then by adding auxiliary variables to S 1 we can guarantee the existence of a refinement mapping. This provides a completeness result for a practical, hierarchical specification method. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> The contains Operation <s> The goal of this book is to provide a comprehensive and systematic introduction to the important and highly applicable method of data refinement and the simulation methods used for proving its correctness. The authors concentrate in the first part on the general principles needed to prove data refinement correct. They begin with an explanation of the fundamental notions, showing that data refinement proofs reduce to proving simulation. The topics of Hoare Logic and the Refinement Calculus are introduced and a general theory of simulations is developed and related to them. Accessibility and comprehension are emphasized in order to guide newcomers to the area. The book's second part contains a detailed survey of important methods in this field, such as VDM, and the methods due to Abadi & Lamport, Hehner, Lynch and Reynolds, Back's refinement calculus and Z.
All these methods are carefully analysed, and shown to be either incomplete, with counterexamples to their application, or to be always applicable whenever data refinement holds. This is shown by proving, for the first time, that all these methods can be described and analyzed in terms of two simple notions: forward and backward simulation. The book is self-contained, going from advanced undergraduate level and taking the reader to the state of the art in methods for proving simulation. <s> BIB002 </s> Verifying Linearisability: A Comparative Survey <s> The contains Operation <s> Linearisability is the key correctness criterion for concurrent implementations of data structures shared by multiple processes. In this paper we present a proof of linearisability of the lazy implementation of a set due to Heller et al. The lazy set presents one of the most challenging issues in verifying linearisability: a linearisation point of an operation set by a process other than the one executing it. For this we develop a proof strategy based on refinement which uses thread local simulation conditions and the technique of potential linearisation points. The former allows us to prove linearisability for arbitrary numbers of processes by looking at only two processes at a time, the latter permits disposing with reasoning about the past. All proofs have been mechanically carried out using the interactive prover KIV. <s> BIB003 </s> Verifying Linearisability: A Comparative Survey <s> The contains Operation <s> Verifying the implementation of concurrent objects essentially proves the fine-grained implementation of object methods refines the corresponding abstract atomic operations. To simplify the specifications and proofs, we usually need auxiliary history and prophecy variables to record historical events and to predict future events, respectively.
Although the meaning of history variables is obvious, the semantics of prophecy variables and the corresponding auxiliary code is tricky and has never been clearly spelled out operationally. In this paper, we propose a new language construct, future blocks, that allows structural use of prophecy variables to refer to events in the future. The semantics of the construct is simple and easy to understand, without using any form of oracle or backward reasoning. Our language also separates auxiliary states from physical program states. With careful syntactic constraints, it ensures the use of history and prophecy variables would not affect the behaviors of the original program, which justifies the verification method based on the use of auxiliary variables. <s> BIB004 </s> Verifying Linearisability: A Comparative Survey <s> The contains Operation <s> Expression evaluation in programming languages is normally assumed to be deterministic; however, if an expression involves variables that are being modified by the environment of the process during its evaluation, the result of the evaluation can be non-deterministic. Two common scenarios in which this occurs are concurrent programs within which processes share variables and real-time programs that interact to monitor and/or control their environment. In these contexts, although any particular evaluation of an expression gives a single result, there is a range of possible values that could be returned depending on the relative timing between modification of a variable by the environment and its access within the expression evaluation. To compare the semantics of non-deterministic expression evaluation, one can use the set of possible values the expression evaluation could return. This paper formalizes three approaches to non-deterministic expression evaluation, highlights their commonalities and differences, shows the relationships between the approaches and explores conditions under which they coincide.
Modal operators representing that a predicate holds for all possible evaluations and for some possible evaluation are associated with each of the evaluation approaches, and the properties and relationships between these operators are investigated. Furthermore, a link is made to a new notation used in reasoning about interference. <s> BIB005 </s> Verifying Linearisability: A Comparative Survey <s> The contains Operation <s> Linearisability has become the standard correctness criterion for concurrent data structures, ensuring that every history of invocations and responses of concurrent operations has a matching sequential history. Existing proofs of linearisability require one to identify so-called linearisation points within the operations under consideration, which are atomic statements whose execution causes the effect of an operation to be felt. However, identification of linearisation points is a non-trivial task, requiring a high degree of expertise. For sophisticated algorithms such as Heller et al.'s lazy set, it is even possible for an operation to be linearised by the concurrent execution of a statement outside the operation being verified. This paper proposes an alternative method for verifying linearisability that does not require identification of linearisation points. Instead, using an interval-based logic, we show that every behaviour of each concrete operation over any interval is a possible behaviour of a corresponding abstraction that executes with coarse-grained atomicity. This approach is applied to Heller et al.'s lazy set to show that verification of linearisability is possible without having to consider linearisation points within the program code. <s> BIB006
A process executing contains(x) traverses the list (ignoring locks) from Head, stopping as soon as a node with value greater than or equal to x is found. The value true is returned if the node is unmarked and its value is equal to x; otherwise false is returned. Unlike locate, the contains operation does not acquire locks, and performs at most one traversal of the linked list. When verifying linearisability of the contains operation, atomicity constraints of an implementation often dictate that the expression in C4 be split. Because the order in which the variables within a nonatomic expression are accessed is not known, there are two possible evaluation orders: Figure 14 and Figure 15, both of which use a temporary variable r1. To verify linearisability of the original operation in Figure 13, both orders of evaluation must be verified. However, some existing proofs (e.g., BIB003 ) consider only the variation in Figure 14, while others consider only the variation in Figure 15. It is also possible to consider both possibilities at the same time using logics that enable reasoning about the nondeterminism in expression evaluation under concurrency BIB005 , which is the approach taken by BIB006 . Unlike the add and remove operations, none of the statements of contains qualifies as a valid linearisation point. To see this, we consider the two most suitable candidates, that is, C4a and C4b, and present counterexamples to show that neither is valid. The essence of the issue is that a verifier must decide whether the contains operation will return true or false (i.e., its future behaviour) by considering the state of the shared object when C4a or C4b is executed, and this is impossible. Suppose C4a is chosen as the linearisation point of the contains operation. Now consider the state of the shared linked list in Figure 16(a), where process p is executing contains(50) and has just exited its loop because curr p .val ≥ 50, but has not yet executed statement C4a. Suppose another process q executes add(50) to completion.
This results in the linked list in Figure 16(b), which corresponds to an abstract state {3, 18, 50}. Execution of C4a by process p from this state will set r1 p to false, and hence the contains(50) will return false, even though the element 50 is in the set (corresponding to the shared linked list) when C4a is executed. Similarly, suppose C4b is chosen to be the linearisation point of the contains operation. Assume there are no other concurrent operations and that process p is executing contains(77) on the linked list in Figure 11(a), and execution has reached (but not yet executed) statement C4b. This results in the state of the linked list in Figure 16(c). Suppose another process q executes a remove(77) operation to completion. This results in Figure 16(d), corresponding to the abstract set {3, 18}. Now, when process p executes C4b, it will set res p to true, and hence, return true even though 77 is not in the abstract set corresponding to the shared linked list when C4b is executed. Therefore, neither C4a nor C4b is an appropriate linearisation point for contains. Proving linearisability, it turns out, must consider the execution of other operations; that is, the linearisation point cannot be determined statically by examining the statements within the contains operation alone. Here, contains may be linearised by the execution of an add or a remove operation. As Colvin et al. point out: The key to proving that Heller et al.'s lazy set is linearisable is to show that, for any failed contains(x) operation, x is absent from the set at some point during its execution. More generally, within any interval in which contains(x) executes and returns true, there is some point in the interval such that the abstract set corresponding to the shared linked list contains x. Similarly, if contains(x) returns false, there is some point in the interval of execution such that the corresponding abstract set does not contain x.
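These evaluation-order subtleties can be made concrete with a small sketch. The following Python model uses illustrative names only (it is not the code of Heller et al., and locks and concurrency control are elided); it builds the sorted linked list and shows the two possible splits of the C4 test, corresponding to the variations of Figures 14 and 15:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    val: float
    marked: bool = False             # logical-deletion flag used by the lazy set
    next: Optional["Node"] = None

def make_list(values):
    """Build a sorted list with -inf/+inf sentinels playing Head and Tail."""
    head = Node(float("-inf"))
    curr = head
    for v in sorted(values):
        curr.next = Node(v)
        curr = curr.next
    curr.next = Node(float("inf"))
    return head

def traverse(head, x):
    """The contains loop: stop at the first node whose value is >= x."""
    curr = head
    while curr.val < x:
        curr = curr.next
    return curr

def contains_marked_first(head, x):
    """One split of C4: read the marked flag (C4a), then the value (C4b)."""
    curr = traverse(head, x)
    r1 = not curr.marked             # C4a
    return r1 and curr.val == x      # C4b

def contains_value_first(head, x):
    """The other split of C4: read the value (C4a), then the flag (C4b)."""
    curr = traverse(head, x)
    r1 = curr.val == x               # C4a
    return r1 and not curr.marked    # C4b
```

In a sequential setting the two variants agree (e.g., both return True for 18 in the list {3, 18, 77} and False for 50); the counterexamples above arise only when a concurrent add or remove runs between the two reads.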
The statement that removes x from the set is also responsible for linearising any contains(x) operations that may return false. From a refinement perspective, the abstract specification resolves its nondeterminism earlier than the concrete implementation, resulting in a future concrete transition that cannot be matched with an abstract transition when the forward simulation rule (2) is used. Instead, proofs must be performed using backward simulation BIB002 , which for a nonstuttering transition relates COp p to AOp p as follows: for any process p, if COp p can transition from τ to τ′ and τ′ is related by rep to some abstract state σ′, then there must exist an abstract state σ such that rep holds between τ and σ and AOp p can transition from σ to σ′. Such proofs involve reasoning from the end of a computation to the start, and hence, are more complicated than forward simulation. Equivalent to this is an encoding using prophecy variables BIB001 BIB004 .
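For finite-state abstractions, this backward simulation condition can be checked by brute force. The sketch below is a hypothetical encoding (transition relations as sets of pre/post state pairs, rep as a set of concrete/abstract pairs), not the KIV formalisation used in the cited work:

```python
def check_backward_simulation(conc_trans, abs_trans, rep):
    """Check the non-stuttering backward simulation obligation:
    whenever tau -C-> tau2 and rep relates tau2 to sigma2, there must be
    some sigma with rep(tau, sigma) and sigma -A-> sigma2."""
    for (tau, tau2) in conc_trans:
        for (c, sigma2) in rep:
            if c != tau2:
                continue
            # abstract states related to the concrete pre-state tau
            candidates = {s for (c2, s) in rep if c2 == tau}
            if not any((sigma, sigma2) in abs_trans for sigma in candidates):
                return False
    return True
```

For example, with conc_trans = {('c0', 'c1')}, abs_trans = {('a0', 'a1')} and rep = {('c0', 'a0'), ('c1', 'a1')} the check succeeds, while dropping ('c0', 'a0') from rep makes it fail: every related post-state must be reachable from some related pre-state.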
Verifying Linearisability: A Comparative Survey <s> CASE STUDY 3: THE HERLIHY-WING QUEUE <s> A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> CASE STUDY 3: THE HERLIHY-WING QUEUE <s> The goal of this book is to provide a comprehensive and systematic introduction to the important and highly applicable method of data refinement and the simulation methods used for proving its correctness. The authors concentrate in the first part on the general principles needed to prove data refinement correct. They begin with an explanation of the fundamental notions, showing that data refinement proofs reduce to proving simulation. The topics of Hoare Logic and the Refinement Calculus are introduced and a general theory of simulations is developed and related to them. Accessibility and comprehension are emphasized in order to guide newcomers to the area. The book's second part contains a detailed survey of important methods in this field, such as VDM, and the methods due to Abadi & Lamport, Hehner, Lynch and Reynolds, Back's refinement calculus and Z. 
All these methods are carefully analysed, and shown to be either incomplete, with counterexamples to their application, or to be always applicable whenever data refinement holds. This is shown by proving, for the first time, that all these methods can be described and analyzed in terms of two simple notions: forward and backward simulation. The book is self-contained, going from advanced undergraduate level and taking the reader to the state of the art in methods for proving simulation. <s> BIB002 </s> Verifying Linearisability: A Comparative Survey <s> CASE STUDY 3: THE HERLIHY-WING QUEUE <s> Linearisability is the standard correctness criterion for concurrent data structures. In this paper, we present a sound and complete proof technique for linearisability based on backward simulations. We exemplify this technique by a linearisability proof of the queue algorithm presented in Herlihy and Wing's landmark paper. Except for the manual proof by them, none of the many other current approaches to checking linearisability has successfully treated this intricate example. Our approach is grounded on complete mechanisation: the proof obligations for the queue are verified using the interactive prover KIV, and so is the general soundness and completeness result for our proof technique. <s> BIB013 </s> Verifying Linearisability: A Comparative Survey <s> CASE STUDY 3: THE HERLIHY-WING QUEUE <s> Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on identifying the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points.
We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof. <s> BIB004
We now discuss the third type of algorithm, where none of the atomic program statements qualify as linearisation points. Instead, execution of an atomic statement that linearises an operation depends on future executions, and in addition, the potential linearisation points may modify the representation of the data structure at hand. One such algorithm is the array-based queue by BIB001 , which we present in Figure 18 . The abstract object corresponding to a concrete state cannot be determined by examining the shared data structure (in this case a shared array) alone-one must additionally take into consideration the currently executing operations and their potential future executions. As these operations may potentially modify the shared data structure in the future, each concrete state ends up corresponding to a set of abstract states. In Figure 18 , each line corresponds to a single atomic statement, including for example D1, which consists of several assignments. These operations operate on an infinite array, AR (initially null at each index), and use a single shared global counter, back (initially 0) that points to the end of the queue. An enqueue operation (enq) atomically increments back (line E1) and stores the old value of back locally in a variable k. Thus, executing E1 allows the executing process to reserve the index of back before the increment as the location at which the enqueue will be performed. The enqueued value is stored at E2. A dequeue operation (deq) stores back locally in lback, then traverses AR from the front (i.e., from index 0) using k. As it traverses AR, it swaps the value of AR at k with null (D3). If a nonnull element is encountered (D4), then this value is returned as the head of the queue. If the traversal reaches lback (i.e., the local copy of back read at line D1) and a nonnull element has not been found, then the operation restarts. Note that deq is partial BIB001 in that it does not terminate if AR is null at every index. 
In particular, a dequeue only terminates if it returns a value from the queue. To see why verifying linearisability of the algorithm is difficult, we first show that neither E1 nor E2 qualifies as a valid linearisation point for enq. It is straightforward to derive a similar counterexample for E3. Suppose E1 is picked as the linearisation point and consider the following complete execution, where p, q, r ∈ P. Assume p and q enqueue v 1 and v 2 , respectively. Although E1 p is executed before E1 q , the dequeue operation returns v 2 before v 1 , contradicting FIFO ordering, and hence, E1 cannot be a linearisation point. Now suppose E2 is picked as the linearisation point and consider the following complete execution. Here, E2 p is executed before E2 q , but deq returns v 2 before v 1 has been dequeued. The histories corresponding to both executions are, however, linearisable because the operation calls enq p , enq q and deq r overlap, allowing their effects to occur in any order. In particular, both (11) and (12) correspond to linearisable histories. Aside from the proof sketch in Herlihy-Wing's original paper BIB001 , there are two known formal proofs of linearisability: BIB003 (which uses backward simulation) and BIB004 (which decomposes the problem into several aspects). Henzinger et al.'s main ordering property uses prophecy variables, and hence, must perform reasoning similar to backward simulation. Backward simulation and prophecy variables are known to be equivalent formulations that allow the future nondeterminism to be taken into account BIB002 . Both allow one to capture the fact that in order to decide whether the enqueue operation has taken effect, one must consider the state of all currently executing operations. Schellhorn et al. BIB003 have shown that backward simulation is sufficient for proving linearisability; that is, backward simulation with the addition of auxiliary history variables is a complete method for proving linearisability.
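The claim that these histories are linearisable can be confirmed mechanically for small examples. The following brute-force sketch (a naive illustration, not one of the cited tools) searches for a linearisation of timestamped operation calls that respects real-time order and the sequential FIFO specification:

```python
from collections import deque
from itertools import permutations

def fifo_legal(seq):
    """Sequential queue spec: seq is a list of ('enq', v) / ('deq', v)."""
    q = deque()
    for kind, v in seq:
        if kind == 'enq':
            q.append(v)
        elif not q or q.popleft() != v:
            return False
    return True

def linearisable(ops):
    """ops: list of (invocation_time, response_time, operation) triples.
    Try every ordering that respects real-time precedence."""
    n = range(len(ops))
    for perm in permutations(n):
        pos = {k: i for i, k in enumerate(perm)}
        # if i responds before j is invoked, i must precede j
        respects_time = all(not (ops[i][1] < ops[j][0] and pos[i] > pos[j])
                            for i in n for j in n)
        if respects_time and fifo_legal([ops[k][2] for k in perm]):
            return True
    return False
```

With overlapping calls enq p (v1), enq q (v2) and a dequeue returning v2, e.g. [(0, 4, ('enq', 'v1')), (1, 5, ('enq', 'v2')), (2, 6, ('deq', 'v2'))], the search succeeds; if the same three calls run strictly one after another, it fails, matching the FIFO counterexamples above.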
These methods, however, do not show how such a simulation relation may be constructed, and hence, creativity is required on the part of the verifier to develop the correct simulation relation. As already discussed, each concrete state corresponds to multiple abstract queues depending on the states of the executing operations. Schellhorn et al.'s approach is to encode, within the simulation relation, all possible ways in which the currently executing enq operations can complete, as well as all possible ways in which these could be dequeued. To this end, they construct a so-called observation tree. In effect, this constructs the set of all possible queues that could relate to the current concrete queue based on the state of AR and the pending concurrent operations. The proof methods build on previous work on potential linearisation points (Section 5.3); the difference here is that linearising external operations modifies the data structure in question.
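For reference, the queue of Figure 18 can be modelled in a few lines. This is a simplified sketch (the infinite array is grown on demand, and the atomicity of statements E1 and D3 is simulated with a lock; it is not the original pseudocode):

```python
import threading

class HerlihyWingQueue:
    def __init__(self):
        self.ar = []                  # AR: conceptually infinite, None = null
        self.back = 0                 # points to the end of the queue
        self._atomic = threading.Lock()

    def enq(self, v):
        with self._atomic:            # E1: k := back; back := back + 1
            k = self.back
            self.back += 1
            while len(self.ar) <= k:  # grow the "infinite" array on demand
                self.ar.append(None)
        self.ar[k] = v                # E2: AR[k] := v

    def deq(self):
        while True:                   # restart if no element was found
            with self._atomic:        # D1: lback := back
                lback = self.back
            for k in range(lback):    # D2: traverse AR from the front
                with self._atomic:    # D3: atomically swap AR[k] with null
                    x = self.ar[k]
                    self.ar[k] = None
                if x is not None:     # D4: non-null element found
                    return x
```

Note that deq diverges on an empty queue, matching the partiality of the operation noted above; the gap between E1 and E2, where an index has been reserved but not yet filled, is precisely what makes the linearisation point elusive.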
Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> Abstract Refinement mappings are used to prove that a lower-level specification correctly implements a higher-level one. We consider specifications consisting of a state machine (which may be infinite-state) that specifies safety requirements, and an arbitrary supplementary property that specifies liveness requirements. A refinement mapping from a lower-level specification S1 to a higher-level one S2 is a mapping from S1's state space to S2's state space. It maps steps of S1's state machine to steps of S2's state machine and maps behaviors allowed by S1 to behaviors allowed by S2. We show that, under reasonable assumptions about the specification, if S1 implements S2, then by adding auxiliary variables to S1 we can guarantee the existence of a refinement mapping. This provides a completeness result for a practical, hierarchical specification method. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.
<s> BIB002 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> The emerging generation of database systems and general purpose operating systems share many characteristics: object orientation, a stress on distribution, and the utilization of concurrency to increase performance. A consequence is that both types of systems are confronted with the problem of maintaining the consistency of multi-component distributed applications in the face of concurrency and failures. Moreover, large applications can be expected to combine database and general purpose components. This paper reviews four basic approaches to the distributed consistency problem as it arises in such hybrid applications:• Transactional serializability, a widely used database execution model, which has been adapted to distributed and object-oriented settings by several research efforts.• Traditional operating systems synchronization constructs, such as monitors, used within individual system components, and with no system-wide mechanism for inter-object synchronization.• Linearizability, an execution model for object-oriented systems with internal concurrency proposed by Herlihy and Wing [HW90] (similarly restricted to synchronization within individual objects).• Virtual synchrony, a non-transactional execution model used to characterize consistency and correctness in groups of cooperating processes (or groups of objects, in object-oriented systems) [BJ87].We suggest that no single method can cover the spectrum of issues that arise in general purpose distributed systems, and that a composite approach must therefore be adopted. The alternative proposed here uses virtual synchrony and linearizability at a high level, while including transactional mechanisms and monitors for synchronization in embedded subsystems. Such a hybrid solution requires some changes to both the virtual synchrony and transactional model, which we outline. 
The full-length version of the paper gives details on this, and also explores the problem in the context of a series of examples.The organization of the presentation is as follows. We begin by reviewing the database data and execution models and presenting the transactional approach to concurrency control and failure atomicity. We then turn to distributed systems, focusing on aspects related to synchronization and fault-tolerance and introducing virtually synchronous process groups. The last part of the paper focuses on an object oriented view of distributed systems, and suggests that the linearizability model of Herlihy and Wing might be used to link the virtual synchrony approach with transactions and "internal" synchronization mechanisms such as monitors, arriving at a flexible, general approach to concurrency control in systems built of typed objects. We identify some technical problems raised by this merging of models and propose solutions. <s> BIB003 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> Whereas serializability captures database consistency requirements and transaction correctness properties via a single notion, recent research has attempted to come up with correctness criteria that view these two types of requirements indepen- dently. The search for more flexible correctness criteria is partly motivated by the introduction of new transaction models that extend the traditional atomic transaction model. These extensions came about because the atomic transac- tion model in conjunction with serializability is found to be very constraining when applied in advanced applications, such as, design databases, that function in distributed, cooperative, and heterogeneous environments. In this paper, we develop a taxonomy of various correctness criteria that focus on database consistency requirements and transaction correctness properties from the viewpoint of what the different dimensions of these two are. 
This taxonomy allows us to categorize correctness criteria that have been proposed in the literature. To help in this categorization, we have applied a uniform specification technique, based on ACTA, to express the various criteria. Such a categorization helps shed light on the similarities and differences between different criteria and to place them in perspective. <s> BIB004 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> The paper is a tutorial on fault-tolerance by replication in distributed systems. We start by defining linearizability as the correctness criterion for replicated services (or objects), and present the two main classes of replication techniques: primary-backup replication and active replication. We introduce group communication as the infrastructure providing the adequate multicast primitives to implement either primary-backup replication, or active replication. Finally, we discuss the implementation of the two most fundamental group multicast primitives: total order multicast and view synchronous multicast. <s> BIB005 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> Drawing ideas from previous authors, we present a new non-blocking concurrent queue algorithm and a new two-lock queue algorithm in which one enqueue and one dequeue can proceed concurrently. Both algorithms are simple, fast, and practical; we were surprised not to find them in the literature. Experiments on a 12-node SGI Challenge multiprocessor indicate that the new non-blocking queue consistently outperforms the best known alternatives; it is the clear algorithm of choice for machines that provide a universal atomic primitive (e.g., compare_and_swap or load_linked/store_conditional). The two-lock concurrent queue outperforms a single lock when several processes are competing simultaneously for access; it appears to be the algorithm of choice for busy queues on machines with non-universal atomic primitives (e.g., test_and_set).
Since much of the motivation for non-blocking algorithms is rooted in their immunity to large, unpredictable delays in process execution, we report experimental results both for systems with dedicated processors and for systems with several processes multiprogrammed on each processor. <s> BIB006 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> The goal of this book is to provide a comprehensive and systematic introduction to the important and highly applicable method of data refinement and the simulation methods used for proving its correctness. The authors concentrate in the first part on the general principles needed to prove data refinement correct. They begin with an explanation of the fundamental notions, showing that data refinement proofs reduce to proving simulation. The topics of Hoare Logic and the Refinement Calculus are introduced and a general theory of simulations is developed and related to them. Accessibility and comprehension are emphasized in order to guide newcomers to the area. The book's second part contains a detailed survey of important methods in this field, such as VDM, and the methods due to Abadi & Lamport, Hehner, Lynch and Reynolds, Back's refinement calculus and Z. All these methods are carefully analysed, and shown to be either imcomplete, with counterexamples to their application, or to be always applicable whenever data refinement holds. This is shown by proving, for the first time, that all these methods can be described and analyzed in terms of two simple notions: forward and backward simulation. The book is self-contained, going from advanced undergraduate level and taking the reader to the state of the art in methods for proving simulation. <s> BIB007 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> Despite years of research, the design of efficient nonblocking algorithms remains difficult. 
A key reason is that current shared-memory multiprocessor architectures support only single-location synchronisation primitives such as compare-and-swap (CAS) and load-linked/store-conditional (LL/SC). Recently researchers have investigated the utility of double-compare-and-swap (DCAS) -- a generalisation of CAS that supports atomic access to two memory locations -- in overcoming these problems. We summarise recent research in this direction and present a detailed case study concerning a previously published nonblocking DCAS-based double-ended queue implementation. Our summary and case study clearly show that DCAS does not provide a silver bullet for nonblocking synchronisation. That is, it does not make the design and verification of even mundane nonblocking data structures with desirable properties easy. Therefore, our position is that while slightly more powerful synchronisation primitives can have a profound effect on ease of algorithm design and verification, DCAS does not provide sufficient additional power over CAS to justify supporting it in hardware. <s> BIB008 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> This paper shows for the first time that elimination, a scaling technique formerly applied only to counters and LIFO structures, can be applied to FIFO data structures, specifically, to linearizable FIFO queues. We show how to transform existing nonscalable FIFO queue implementations into scalable implementations using the elimination technique, while preserving lock-freedom and linearizability. We apply our transformation to the FIFO queue algorithm of Michael and Scott, which is included in the Java™ Concurrency Package. Empirical evaluation on a state-of-the-art CMT multiprocessor chip shows that by using elimination as a backoff technique for the Michael and Scott queue algorithm, we can achieve comparable performance at low loads, and improved scalability as load increases.
<s> BIB009 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> Concurrent data structures are usually designed to satisfy correctness conditions such as sequential consistency or linearizability. In this paper, we consider the following fundamental question: What guarantees are provided by these conditions for client programs? We formally show that these conditions can be characterized in terms of observational refinement. Our study also provides a new understanding of sequential consistency and linearizability in terms of abstraction of dependency between computation steps of client programs. <s> BIB010 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> This paper presents a practical automatic verification procedure for proving linearizability (i.e., atomicity and functional correctness) of concurrent data structure implementations The procedure employs a novel instrumentation to verify logically pure executions, and is evaluated on a number of standard concurrent stack, queue and set algorithms. <s> BIB011 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> Concurrent objects are inherently complex to verify. In the late 80s and early 90s, Herlihy and Wing proposed linearizability as a correctness condition for concurrent objects, which, once proven, allows us to reason about concurrent objects using pre- and postconditions only. A concurrent object is linearizable if all of its operations appear to take effect instantaneously some time between their invocation and return. In this article we define simulation-based proof conditions for linearizability and apply them to two concurrent implementations, a lock-free stack and a set with lock-coupling. Similar to other approaches, we employ a theorem prover (here, KIV) to mechanize our proofs. Contrary to other approaches, we also use the prover to mechanically check that our proof obligations actually guarantee linearizability. 
This check employs the original ideas of Herlihy and Wing of verifying linearizability via possibilities. <s> BIB012 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> Linearisability is the standard correctness criterion for concurrent data structures. In this paper, we present a sound and complete proof technique for linearisability based on backward simulations. We exemplify this technique by a linearisability proof of the queue algorithm presented in Herlihy and Wing's landmark paper. Except for the manual proof by them, none of the many other current approaches to checking linearisability has successfully treated this intricate example. Our approach is grounded on complete mechanisation: the proof obligations for the queue are verified using the interactive prover KIV, and so is the general soundness and completeness result for our proof technique. <s> BIB013 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> Distributed computing is at the heart of many applications. It arises as soon as one has to solve a problem in terms of entities -- such as processes, peers, processors, nodes, or agents -- that individually have only a partial knowledge of the many input parameters associated with the problem. In particular each entity cooperating towards the common goal cannot have an instantaneous knowledge of the current state of the other entities. Whereas parallel computing is mainly concerned with 'efficiency', and real-time computing is mainly concerned with 'on-time computing', distributed computing is mainly concerned with 'mastering uncertainty' created by issues such as the multiplicity of control flows, asynchronous communication, unstable behaviors, mobility, and dynamicity. While some distributed algorithms consist of a few lines only, their behavior can be difficult to understand and their properties hard to state and prove. 
The aim of this book is to present in a comprehensive way the basic notions, concepts, and algorithms of distributed computing when the distributed entities cooperate by sending and receiving messages on top of an asynchronous network. The book is composed of seventeen chapters structured into six parts: distributed graph algorithms, in particular what makes them different from sequential or parallel algorithms; logical time and global states, the core of the book; mutual exclusion and resource allocation; high-level communication abstractions; distributed detection of properties; and distributed shared memory. The author establishes clear objectives per chapter and the content is supported throughout with illustrative examples, summaries, exercises, and annotated bibliographies. This book constitutes an introduction to distributed computing and is suitable for advanced undergraduate students or graduate students in computer science and computer engineering, graduate students in mathematics interested in distributed computing, and practitioners and engineers involved in the design and implementation of distributed applications. The reader should have a basic knowledge of algorithms and operating systems. <s> BIB014 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on identifying the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. ::: ::: In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points. 
We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof. <s> BIB015 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> To date, all concurrent search structures that can support predecessor queries have had depth logarithmic in m, the number of elements. This paper introduces the SkipTrie, a new concurrent search structure supporting predecessor queries in amortized expected O(log log u + c) steps, insertions and deletions in O(c log log u), and using O(m) space, where u is the size of the key space and c is the contention during the recent past. The SkipTrie is a probabilistically-balanced version of a y-fast trie consisting of a very shallow skiplist from which randomly chosen elements are inserted into a hash-table based x-fast trie. By inserting keys into the x-fast-trie probabilistically, we eliminate the need for rebalancing, and can provide a lock-free linearizable implementation. To the best of our knowledge, our proof of the amortized expected performance of the SkipTrie is the first such proof for a tree-based data structure. <s> BIB016 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> An execution containing operations performing queries or updating a concurrent object is linearizable w.r.t an abstract implementation (called specification) iff for each operation, one can associate a point in time, called linearization point, such that the execution of the operations in the order of their linearization points can be reproduced by the specification. Finding linearization points is particularly difficult when they do not belong to the operations's actions. 
This paper addresses this challenge by introducing a new technique for rewriting the implementation of the concurrent object and its specification such that the new implementation preserves all executions of the original one, and its linearizability (w.r.t. the new specification) implies the linearizability of the original implementation (w.r.t. the original specification). The rewriting introduces additional combined methods to obtain a library with a simpler linearizability proof, i.e., a library whose operations contain their linearization points. We have implemented this technique in a prototype, which has been successfully applied to examples beyond the reach of current techniques, e.g., Stack Elimination and Fetch&Add. <s> BIB017 </s> Verifying Linearisability: A Comparative Survey <s> CONCLUSIONS <s> In this paper we present a novel algorithm for concurrent lock-free internal binary search trees (BST) and implement a Set abstract data type (ADT) based on that. We show that in the presented lock-free BST algorithm the amortized step complexity of each set operation - Add, Remove and Contains - is O(H(n) + c), where H(n) is the height of the BST with n number of nodes and c is the contention during the execution. Our algorithm adapts to contention measures according to read-write load. If the situation is read-heavy, the operations avoid helping the concurrent Remove operations during traversal, and adapt to interval contention. However, for the write-heavy situations we let an operation help a concurrent Remove, even though it is not obstructed. In that case, an operation adapts to point contention. It uses single-word compare-and-swap (CAS) operations. We show that our algorithm has improved disjoint-access-parallelism compared to similar existing algorithms. We prove that the presented algorithm is linearizable. 
To the best of our knowledge, this is the first algorithm for any concurrent tree data-structure in which the modify operations are performed with an additive term of contention measure. <s> BIB018
There has been remarkable progress since Herlihy and Wing's original paper on linearisability BIB002 , and with the increasing necessity for concurrency, this trend is set to continue. The basic idea behind linearisability is simple, yet it provides a robust consistency condition applicable to a large number of algorithms, and in some cases precisely captures the meaning of atomicity BIB014 . For concurrent objects, linearisability has been shown to coincide with contextual observational refinement BIB010 , ensuring that the behaviours of client objects are preserved. Linearisability is compositional in the sense that a set of objects is linearisable if each object in the set is linearisable BIB002 , making it an appealing property. Besides shared-variable concurrent objects, linearisability has also been applied to distributed systems BIB003 , databases BIB004 , and fault-tolerant systems BIB005 . This article has considered verification of linearisability, and the associated proof methods that have been developed for it in the context of concurrent objects. The necessity of such proofs is underlined by the subtleties in the behaviours of the algorithms that implement concurrent objects, and by the fact that errors have been found in algorithms that were previously believed to be correct BIB008 . Current proof techniques continue to struggle with scalability, and as a result, only a handful of fine-grained algorithms have been formally verified to be linearisable. The longest fully verified algorithm (in terms of lines of code) is the Snark algorithm BIB008 . However, the number of lines of code is not an indicator of complexity: even simple algorithms like Herlihy and Wing's queue BIB002 pose immense challenges BIB013 BIB015 because future behaviour must be considered. Our survey has aimed to answer the questions posed in Section 1. We now return to these to discuss concluding remarks. Locality of the proof method.
Each of the methods we have considered enables localised reasoning, only requiring the behaviour of a single process to be considered. However, interference must be accounted for in the invariants and refinement relations generated, complicating each verification step. Namely, one must show that each such property holds locally and is preserved by each step of an arbitrarily chosen process, and that it continues to hold in the presence of interference from other processes. Compositionality of the proof method. Some methods have incorporated Jones-style rely-guarantee reasoning into their respective frameworks (e.g., RGSep and RGITL), allowing potential interference from the environment to be captured abstractly by a rely condition. An additional step of reasoning is required to show that the rely condition is indeed an abstraction of the potential interference, but once this is done, a reduction in the proof load is achieved via a reduction in the number of cases that must be considered. Contribution of the underlying framework. None of the existing frameworks provides a silver bullet for linearisability verification. Identification of the linearisation points and of appropriate representation relations remain the difficult aspects of a proof. If the verifier believes an algorithm to have fixed linearisation points, then it would be fruitful to attempt an initial verification using a tool such as the one provided by BIB011 . For more complex algorithms, using a setup such as the one provided by would allow invariants to be model checked prior to verification. On the other hand, BIB012 have developed a systematic method for constructing representation relations, invariants, and interference-freedom conditions, as well as proof obligations that enable process-local verification.
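The Jones-style rely-guarantee obligations described above can be made concrete on a toy shared counter updated by compare-and-swap. The sketch below is purely illustrative: the state space, invariant, rely relation, and step functions are all invented for the example, and no particular tool or framework's notation is assumed. Over a finite abstraction it checks the three obligations mentioned above: the rely abstracts the actual interference, the invariant is preserved by every local step of an arbitrarily chosen process, and the invariant is stable under every step the rely permits.

```python
# Toy rely-guarantee check for a shared counter incremented by CAS.
# All names (STATES, inv, rely, local_steps, env_steps) are invented
# for this sketch; no verification tool is assumed.

STATES = range(0, 6)              # finite abstraction of the counter value

def inv(s):
    """Invariant: the counter is never negative."""
    return s >= 0

def local_steps(s):
    """Atomic actions of one arbitrarily chosen process."""
    yield s                       # a read leaves the state unchanged
    if s + 1 in STATES:
        yield s + 1               # the successful CAS increments the counter

def env_steps(s):
    """Concrete interference: another process performing the same CAS."""
    if s + 1 in STATES:
        yield s + 1

def rely(s, s2):
    """Rely condition: the environment may only increase the counter."""
    return s2 >= s

# Obligation 1: the rely abstracts the actual interference.
ob1 = all(rely(s, s2) for s in STATES for s2 in env_steps(s))
# Obligation 2: the invariant is preserved by every local step.
ob2 = all(inv(s2) for s in STATES if inv(s) for s2 in local_steps(s))
# Obligation 3: the invariant is stable under every rely-permitted step.
ob3 = all(inv(s2) for s in STATES if inv(s) for s2 in STATES if rely(s, s2))

assert ob1 and ob2 and ob3
```

Once obligation 1 is discharged, obligations 2 and 3 never mention the other processes' code again, which is exactly the reduction in case analysis that rely-guarantee reasoning buys.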
Techniques specific to certain implementations (e.g., the Hindsight Lemma, aspect-oriented verification) enable some decomposition possibilities, but have not been generalised to cope with arbitrary implementations. Algorithms verified. A survey of these has been given in Section 3.2. There exist several other algorithms in the literature whose linearisability has been conjectured, but not yet formally verified. For the frameworks we have studied, the number of algorithms verified is not, however, a measure of their capabilities; rather, the measure is whether a framework can handle complex algorithms with future linearisation points, such as the Herlihy-Wing queue. The verifications thus far have only considered linear (flat) data structures. Recently, more challenging structures such as SkipTries BIB016 and binary search trees BIB018 have been developed. Their linearisability has been informally argued, but not mechanically verified. It is not easy to know exactly how the proof complexity increases for such data structures; however, the complex nature of the underlying algorithms and of their abstract representations suggests that the proofs will also be more complex. Mechanisation. Many of the methods described in this article have additional tool support for mechanical validation of the proof obligations, reducing the potential for human error. In some cases, automation has been achieved, reducing human effort, but this is currently only successful for algorithms with fixed linearisation points and a limited number of algorithms with external linearisation points. Completeness. Completeness of a proof method is clearly a desirable quality, especially for proofs of linearisability, which require considerable effort. Backward simulation alone is known to be complete for verifying linearisability against an abstract sequential specification BIB013 .
Furthermore, a combination of forward and backward simulations is known to be complete for data refinement BIB007 , and combining auxiliary and prophecy variables is known to be complete for reasoning about past and future behaviour BIB001 . Completeness of a method does not guarantee simpler proofs, as evidenced by the maximal backward simulation constructed by BIB013 to prove linearisability. The completeness results BIB013 show that, using the global theory, any linearisable algorithm can be proved correct; that is, for every linearisable object, a backward simulation between the abstract and concrete specifications can be found. This result does not, however, directly give one a way of constructing this backward simulation. This is common to all completeness results: they state the existence of a proof within a particular framework, but not a way of finding it. That such proofs can indeed be found for individual instances is exemplified by the highly nontrivial case study of the paper. Future Directions. Despite the numerous advances in verification methodologies, scalable formal correctness proofs of concurrent algorithms remain an open problem. This in turn affects verification of specific properties such as linearisability. The rate at which new algorithms are developed far outpaces the rate at which these algorithms are formally verified. However, as concurrent implementations become increasingly prevalent within programming libraries (e.g., java.util.concurrent), the need for formal verification remains important. So what will future algorithms look like? To reduce sequential bottlenecks, concurrent objects will no doubt continue to become more sophisticated, with more subtle (architecture-specific) optimisations becoming prevalent. Proving linearisability of such algorithms will almost certainly require consideration of some aspect of future behaviour.
It is therefore imperative that verification techniques able to handle this complex class of algorithms continue to be improved. The frameworks themselves must continue to integrate the various methods for proof decomposition (e.g., Section 3.1). For example, have developed a framework that combines interval temporal logic, rely-guarantee and simulation proofs. Further simplifications could be achieved by extending the framework with aspects of separation logic. In some cases, decomposition of a proof into stages (for example, using reduction or interval-based abstraction) has been useful: the decomposition not only reduces the number of statements that must be considered, but can also transform a proof that requires consideration of external linearisation points into one with only fixed linearisation points. Until a scalable generic solution is found, it is worthwhile pursuing problem-specific approaches (e.g., BIB015 and BIB017 ). Another avenue of work is proof modularisation. To explain this, consider the elimination queue BIB009 , which embeds an elimination mechanism (implemented as an array) on top of the queue by BIB006 (with some modifications). Although linearisability of Michael and Scott's queue is well studied, current techniques require the entire elimination queue data structure to be verified from scratch. Development of modular proof techniques would enable linearisability proofs to be lifted from low-level data structures to more complex (optimised) versions. New results such as parameterised linearisability suggest that modular concurrent objects and associated proof techniques will continue to evolve. Next, we discuss some additional aspects surrounding correctness of concurrent objects.
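To make the notion of a fixed linearisation point concrete, the following sketch shows a Treiber-style lock-free stack, a simpler relative of the CAS-based queues discussed above. This is an illustrative sketch, not code from any of the surveyed papers: Python offers no hardware compare-and-swap, so a lock stands in for the atomicity of the single CAS instruction, while the retry loops follow the usual lock-free pattern.

```python
import threading

class Node:
    def __init__(self, val, nxt):
        self.val, self.nxt = val, nxt

class TreiberStack:
    """Treiber-style lock-free stack. The lock below models only the
    atomicity of a single hardware CAS instruction, which Python lacks."""

    def __init__(self):
        self.top = None
        self._cas_lock = threading.Lock()

    def _cas_top(self, expect, new):
        # models: atomically { if top is expect then top := new; succeed }
        with self._cas_lock:
            if self.top is expect:
                self.top = new
                return True
            return False

    def push(self, val):
        while True:
            old = self.top
            if self._cas_top(old, Node(val, old)):   # linearisation point
                return

    def pop(self):
        while True:
            old = self.top
            if old is None:
                return None        # linearisation point: the read of empty top
            if self._cas_top(old, old.nxt):          # linearisation point
                return old.val
```

Both operations linearise at a statically known instruction (the successful CAS, or the read of an empty top), which is precisely the class of algorithms that current automated tools handle best; the Herlihy-Wing queue has no such fixed points, which is why it remains a benchmark for the field.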
Verifying Linearisability: A Comparative Survey <s> Model <s> The memory consistency model of a system affects performance, programmability, and portability. We aim to describe memory consistency models in a way that most computer professionals would understand. This is important if the performance-enhancing features being incorporated by system designers are to be correctly and widely used by programmers. Our focus is consistency models proposed for hardware-based shared memory systems. Most of these models emphasize the system optimizations they support, and we retain this system-centric emphasis. We also describe an alternative, programmer-centric view of relaxed consistency models that describes them in terms of program behavior, not system optimizations. <s> BIB001 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> A multiprocess program executing on a modern multiprocessor must issue explicit commands to synchronize memory accesses. A method is proposed for deriving the necessary commands from a correctness proof of the underlying algorithm in a formalism based on temporal relations among operation executions. <s> BIB002 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> A non-blocking program is one that uses non-blocking primitives, such as load-linked/store-conditional and compare-and-swap, for synchronisation instead of locks so that no process is ever blocked. According to their progress properties, non-blocking programs may be classified as wait-free, lock-free or obstruction-free. However, a precise description of these properties does not exist and it is not unusual to find a definition that is ambiguous or even incorrect. We present a formal definition of the progress properties so that any confusion is removed. The formalisation also allows one to prove the widely believed presumption that wait-freedom is a special case of lock-freedom, which in turn is a special case of obstruction-freedom. 
<s> BIB003 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> Non-blocking concurrent algorithms offer significant performance advantages, but are very difficult to construct and verify. In this paper, we describe our experience in using SPIN to check linearizability of non-blocking concurrent data-structure algorithms that manipulate dynamically allocated memory. In particular, this is the first work that describes a method for checking linearizability with non-fixed linearization points. <s> BIB004 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> Linearizability is an important correctness criterion for implementations of concurrent objects. Automatic checking of linearizability is challenging because it requires checking that 1) all executions of concurrent operations be serializable, and 2) the serialized executions be correct with respect to the sequential semantics. This paper describes a new method to automatically check linearizability based on refinement relations from abstract specifications to concrete implementations. Our method avoids the often difficult task of determining linearization points in implementations, but can also take advantage of linearization points if they are given. The method exploits model checking of finite state systems specified as concurrent processes with shared variables. Partial order reduction is used to effectively reduce the search space. The approach is built into a toolset that supports a rich set of concurrent operators. The tool has been used to automatically check a variety of implementations of concurrent objects, including the first algorithms for the mailbox problem and scalable NonZero indicators. Our system was able to find all known and injected bugs in these implementations. 
<s> BIB005 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> Concurrent programs are known to be complicated because synchronisation is required amongst the processes in order to ensure safety (nothing bad ever happens) and progress (something good eventually happens). Due to possible interference from other processes, a straightforward rearrangement of statements within a process can lead to dramatic changes in the behaviour of a program, even if the behaviour of the process executing in isolation is unaltered. Verifying concurrent programs using informal arguments are usually unconvincing, which makes formal methods a necessity. However, formal proofs can be challenging due to the complexity of concurrent programs. Furthermore, safety and progress properties are proved using fundamentally different techniques. Within the literature, safety has been given considerably more attention than progress. One method of formally verifying a concurrent program is to develop the program, then perform a post-hoc verification using one of the many available frameworks. However, this approach tends to be optimistic because the developed program seldom satisfies its requirements. When a proof becomes difficult, it can be unclear whether the proof technique or the program itself is at fault. Furthermore, following any modifications to program code, a verification may need to be repeated from the beginning. An alternative approach is to develop a program using a verify-while-develop paradigm. Here, one starts with a simple program together with the safety and progress requirements that need to be established. Each derivation step consists of a verification, followed by introduction of new program code motivated using the proofs themselves. Because a program is developed side-by-side with its proof, the completed program satisfies the original requirements. 
Our point of departure for this thesis is the Feijen and van Gasteren method for deriving concurrent programs, which uses the logic of Owicki and Gries. Although Feijen and van Gasteren derive several concurrent programs, because the Owicki-Gries logic does not include a logic of progress, their derivations only consider safety properties formally. Progress is considered post-hoc to the derivation using informal arguments. Furthermore, rules on how programs may be modified have not been presented, i.e., a program may be arbitrarily modified and hence unspecified behaviours may be introduced. In this thesis, we develop a framework for developing concurrent programs in the verify-while-develop paradigm. Our framework incorporates linear temporal logic, LTL, and hence both safety and progress properties may be given full consideration. We examine foundational aspects of progress by formalising minimal progress, weak fairness and strong fairness, which allow scheduler assumptions to be described. We formally define progress terms such as individual progress, individual deadlock, liveness, etc (which are properties of blocking programs) and wait-, lock-, and obstruction-freedom (which are properties of non-blocking programs). Then, we explore the inter-relationships between the various terms under the different fairness assumptions. Because LTL is known to be difficult to work with directly, we incorporate the logic of Owicki-Gries (for proving safety) and the leads-to relation from UNITY (for proving progress) within our framework. Following the nomenclature of Feijen and van Gasteren, our techniques are kept calculational, which aids derivation. We prove soundness of our framework by proving theorems that relate our techniques to the LTL definitions. Furthermore, we introduce several methods for proving progress using a well-founded relation, which keeps proofs of progress scalable. 
During program derivation, in order to ensure unspecified behaviour is not introduced, it is also important to verify a refinement, i.e., show that every behaviour of the final (more complex) program is a possible behaviour of the abstract representation. To facilitate this, we introduce the concept of an enforced property, which is a property that the program code does not satisfy, but is required of the final program. Enforced properties may be any LTL formula, and hence may represent both safety and progress requirements. We formalise stepwise refinement of programs with enforced properties, so that code is introduced in a manner that satisfies the enforced properties, yet refinement of the original program is guaranteed. We present derivations of several concurrent programs from the literature. <s> BIB006 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> A concurrent data-structure implementation is considered non-blocking if it meets one of three following liveness criteria: wait-freedom, lock-freedom, or obstruction-freedom. Developers of non-blocking algorithms aim to meet these criteria. However, to date their proofs for non-trivial algorithms have been only manual pencil-and-paper semi-formal proofs. This paper proposes the first fully automatic tool that allows developers to ensure that their algorithms are indeed non-blocking. Our tool uses rely-guarantee reasoning while overcoming the technical challenge of sound reasoning in the presence of interdependent liveness properties. <s> BIB007 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> Lock-free implementations of data structures try to better utilize the capacity of modern multi-core computers, by increasing the potential to run in parallel. The resulting high degree of possible interference makes verification of these algorithms challenging. In this paper we describe a technique to verify lock-freedom, their main liveness property. 
The result complements our earlier work on proving linearizability, the standard safety property of lock-free algorithms. Our approach mechanizes both, the derivation of proof obligations as well as their verification for individual algorithms. It is based on an encoding of rely-guarantee reasoning using the temporal logic framework of the interactive theorem prover KIV. By means of a slightly improved version of Michael and Scott's lock-free queue algorithm we demonstrate how the most complex parts of the proofs can be reduced to relatively simple steps of symbolic execution. <s> BIB008 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> The advent of multicore processors as the standard computing platform will force major changes in software design. <s> BIB009 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> Linearizability, the key correctness condition that most optimized concurrent object implementations comply with, imposes tight synchronization between the object concurrent operations. This tight synchronization usually comes with a performance and scalability price. Yet, these implementations are often employed in an environment where a more relaxed linearizability condition suffices, where strict linearizability is not a must. ::: ::: Here we provide a quantitative definition of limited non-determinism, a notion we call Quasi Linearizability. Roughly speaking an implementation of an object is quasi linearizable if each run of the implementation is at a bounded "distance" away from some linear run of the object. However, as we show the limited distance has to be relative to some operations but not all. ::: ::: Following the definition we provide examples of quasi concurrent implementations that out perform state of the art standard implementations due to the relaxed requirement. Finally we show that the Bitonic Counting Network non-deterministic behavior can be quantified using our Quasi Linearizable notion. 
<s> BIB010 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> Linearizability is an important correctness criterion for implementations of concurrent objects. Automatic checking of linearizability is challenging because it requires checking that: (1) All executions of concurrent operations are serializable, and (2) the serialized executions are correct with respect to the sequential semantics. In this work, we describe a method to automatically check linearizability based on refinement relations from abstract specifications to concrete implementations. The method does not require that linearization points in the implementations be given, which is often difficult or impossible. However, the method takes advantage of linearization points if they are given. The method is based on refinement checking of finite-state systems specified as concurrent processes with shared variables. To tackle state space explosion, we develop and apply symmetry reduction, dynamic partial order reduction, and a combination of both for refinement checking. We have built the method into the PAT model checker, and used PAT to automatically check a variety of implementations of concurrent objects, including the first algorithm for scalable nonzero indicators. Our system is able to find all known and injected bugs in these implementations. <s> BIB011 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> Implementations of concurrent objects should guarantee linearizability and a progress property such as wait-freedom, lock-freedom, obstruction-freedom, starvation-freedom, or deadlock-freedom. Conventional informal or semi-formal definitions of these progress properties describe conditions under which a method call is guaranteed to complete, but it is unclear how these definitions can be utilized to formally verify system software in a layered and modular way. 
::: ::: In this paper, we propose a unified framework based on contextual refinements to show exactly how progress properties affect the behaviors of client programs. We give formal operational definitions of all common progress properties and prove that for linearizable objects, each progress property is equivalent to a specific type of contextual refinement that preserves termination. The equivalence ensures that verification of such a contextual refinement for a concurrent object guarantees both linearizability and the corresponding progress property. Contextual refinement also enables us to verify safety and liveness properties of client programs at a high abstraction level by soundly replacing concrete method implementations with abstract atomic operations. <s> BIB012 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> There is a trade-off between performance and correctness in implementing concurrent data structures. Better performance may be achieved at the expense of relaxing correctness, by redefining the semantics of data structures. We address such a redefinition of data structure semantics and present a systematic and formal framework for obtaining new data structures by quantitatively relaxing existing ones. We view a data structure as a sequential specification S containing all "legal" sequences over an alphabet of method calls. Relaxing the data structure corresponds to defining a distance from any sequence over the alphabet to the sequential specification: the k-relaxed sequential specification contains all sequences over the alphabet within distance k from the original specification. In contrast to other existing work, our relaxations are semantic (distance in terms of data structure states). As an instantiation of our framework, we present two simple yet generic relaxation schemes, called out-of-order and stuttering relaxation, along with several ways of computing distances. 
We show that the out-of-order relaxation, when further instantiated to stacks, queues, and priority queues, amounts to tolerating bounded out-of-order behavior, which cannot be captured by a purely syntactic relaxation (distance in terms of sequence manipulation, e.g. edit distance). We give concurrent implementations of relaxed data structures and demonstrate that bounded relaxations provide the means for trading correctness for performance in a controlled way. The relaxations are monotonic which further highlights the trade-off: increasing k increases the number of permitted sequences, which as we demonstrate can lead to better performance. Finally, since a relaxed stack or queue also implements a pool, we actually have new concurrent pool implementations that outperform the state-of-the-art ones. <s> BIB013 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> Linearizability has become the standard correctness criterion for fine-grained non-atomic concurrent algorithms, however, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we study the correctness of concurrent algorithms on a weak memory model: the TSO (Total Store Order) memory model, which is commonly implemented by multicore architectures. Here, linearizability is often too strict, and hence, we prove a weaker criterion, quiescent consistency instead. Like linearizability, quiescent consistency is compositional making it an ideal correctness criterion in a component-based context. We demonstrate how to model a typical concurrent algorithm, seqlock, and prove it quiescent consistent using a simulation-based approach. Previous approaches to proving correctness on TSO architectures have been based on linearizabilty which makes it necessary to modify the algorithm’s high-level requirements. Our approach is the first, to our knowledge, for proving correctness without the need for such a modification. 
<s> BIB014 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> Linearizability is the de facto correctness criterion for concurrent data structures. Unfortunately, linearizability imposes a performance penalty which scales linearly in the number of contending threads. Quiescent consistency is an alternative criterion which guarantees that a concurrent data structure behaves correctly when accessed sequentially. Yet quiescent consistency says very little about executions that have any contention. We define quantitative quiescent consistency (QQC), a relaxation of linearizability where the degree of relaxation is proportional to the degree of contention. When quiescent, no relaxation is allowed, and therefore QQC refines quiescent consistency, unlike other proposed relaxations of linearizability. We show that high performance counters and stacks designed to satisfy quiescent consistency continue to satisfy QQC. The precise assumptions under which QQC holds provide fresh insight on these structures. To demonstrate the robustness of QQC, we provide three natural characterizations and prove compositionality. <s> BIB015 </s> Verifying Linearisability: A Comparative Survey <s> Model <s> Correctness of concurrent objects is defined in terms of conditions that determine allowable relationships between histories of a concurrent object and those of the corresponding sequential object. Numerous correctness conditions have been proposed over the years, and more have been proposed recently as the algorithms implementing concurrent objects have been adapted to cope with multicore processors with relaxed memory architectures. We present a formal framework for defining correctness conditions for multicore architectures, covering both standard conditions for totally ordered memory and newer conditions for relaxed memory, which allows them to be expressed in a uniform manner, simplifying comparison.
Our framework distinguishes between order and commitment properties, which in turn enables a hierarchy of correctness conditions to be established. We consider the Total Store Order (TSO) memory model in detail, formalise known conditions for TSO using our framework, and develop sequentially consistent variations of these. We present a work-stealing deque for TSO memory that is not linearizable, but is correct with respect to these new conditions. Using our framework, we identify a new non-blocking compositional condition, fence consistency, which lies between known conditions for TSO, and aims to capture the intention of a programmer-specified fence. <s> BIB016
Model Checking. An important strand of research is model checking, which, due to the finite nature of the state space searched, is often not adequate for ensuring linearisability in general. This article has focused on verification methods, and hence, a detailed comparison of model checking methods has been elided. Like , we believe model checking can play a complementary role in verification, for example, allowing invariants to be model checked prior to verification to provide assurances that they can be proved correct. Methods for model checking linearisability may be found in BIB004 , BIB005 , and BIB011 ; a comparison of these techniques is beyond the scope of this survey. Progress Properties. In many applications, one must often consider the progress properties that an algorithm guarantees. Here, like safety, several different types of progress conditions have been identified, such as starvation freedom, wait freedom, lock freedom, and obstruction freedom (see Shavit [2008, 2011] , BIB003 BIB006 , BIB008 , BIB012 , and BIB007 ). Progress properties are not the main focus of this article, and hence, discussion of methods for verifying them has been elided. Nevertheless, they remain an important property to consider when developing algorithms. Parameter Passing. A deficiency in linearisability theory is that it assumes data independence between libraries and clients, and hence only admits pass-by-value parameter-passing mechanisms. Real-world systems, however, also allow data sharing between libraries and clients, for example, via pass-by-reference mechanisms. Here, ownership transfer between shared resources may occur. To this end, Yang [2012, 2013] have extended linearisability theory to cope with parameter sharing between concurrent objects and their clients. have further extended these results and defined parameterised linearisability, which allows linearisable objects to be taken as parameters to form more complex linearisable objects. Relaxing Linearisability.
The increasing popularity of multicore/multiprocessor architectures has led to a growing need for highly optimised algorithms. Here, researchers are questioning whether linearisability itself causes sequential bottlenecks, which in turn affect performance. By Amdahl's Law, if only 10% of a program's code remains sequential, then one can achieve at best a fivefold speedup on a 10-core machine, meaning at least half of the machine's capability is wasted BIB009 . As a result, BIB009 predicts future systems will trend towards more relaxed notions of correctness. To this end, several conditions weaker than linearisability have been defined to allow greater flexibility in an implementation, for example, quasilinearisability BIB010 , k-linearisability BIB013 , and eventual consistency . Part of the problem is that linearisability insists on sequential consistency BIB002 , that is, that the order of events within a process is maintained. However, modern processors use local caches for efficiency, and hence, are not sequentially consistent. Instead, they only implement weak memory models that allow memory instructions to be reordered in a restricted manner BIB001 . BIB009 advocates quiescent consistency, which only requires the real-time order of operation calls to be maintained when the calls are separated by a period of quiescence (i.e., a period without any pending operation invocations). Unlike linearisability, quiescent consistency does not imply sequential consistency, and hence, can be applied to weak memory models BIB014 . As quiescent consistency is a weak condition, more recent work has considered quantitative relaxations to bridge the gap between linearisability and quiescent consistency BIB015 . BIB016 have recently developed a framework for formally studying these correctness conditions, including those developed for weak memory.
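The Amdahl's Law figure quoted above is easy to check: with a sequential fraction s and N cores, the achievable speedup is 1/(s + (1-s)/N). A minimal sketch (the function name is ours, not from the surveyed work):

```python
# Amdahl's Law: best possible speedup on n cores when a fraction s
# of the program must remain sequential.
def amdahl_speedup(s, n):
    return 1.0 / (s + (1.0 - s) / n)

# 10% sequential code on a 10-core machine gives at best ~5.3x,
# i.e. roughly half of the machine's 10x potential is wasted.
print(round(amdahl_speedup(0.10, 10), 2))  # → 5.26
```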
Weakening correctness conditions, however, does not mean that the algorithms become easier to verify; furthermore, methods for verifying linearisability can be ported to weaker conditions (e.g., see ). Therefore, techniques for simplifying linearisability proofs will not be in vain if weaker conditions become the accepted standard in the future.
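To make the underlying verification obligation concrete: linearisability of a small complete history can be decided by brute force, searching for a permutation of the operations that respects real-time precedence and is legal for the sequential specification. The sketch below is our own illustration (with a counter as the sequential object); the techniques surveyed above exist precisely because this exhaustive search does not scale.

```python
from itertools import permutations

# An operation is (start, end, name, arg, result); start/end give its
# real-time interval in a complete concurrent history.

def precedes(a, b):
    # a precedes b in real time if a's response occurs before b's invocation.
    return a[1] < b[0]

def legal_counter(seq):
    # Sequential spec of a counter: 'inc' returns nothing,
    # 'read' must return the number of increments so far.
    count = 0
    for (_, _, name, _arg, result) in seq:
        if name == 'inc':
            count += 1
        elif name == 'read' and result != count:
            return False
    return True

def linearisable(history):
    for seq in permutations(history):
        # Keep only orderings consistent with real-time precedence.
        if all(not precedes(b, a)
               for i, a in enumerate(seq) for b in seq[i + 1:]):
            if legal_counter(seq):
                return True
    return False

# A read returning 1 that overlaps an inc is linearisable
# (the inc may take effect first) ...
h1 = [(0, 4, 'inc', None, None), (1, 3, 'read', None, 1)]
# ... but a read that strictly follows the inc yet returns 0 is not.
h2 = [(0, 2, 'inc', None, None), (3, 5, 'read', None, 0)]
print(linearisable(h1), linearisable(h2))  # → True False
```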
Anomaly Detection and Mitigation at Internet Scale: A Survey <s> Introduction <s> Botnets become widespread in wired and wireless networks, whereas the relevant research is still in the initial stage. In this paper, a survey of botnets is provided. We first discuss fundamental concepts of botnets, including formation and exploitation, lifecycle, and two major kinds of topologies. Several related attacks, detection, tracing, and countermeasures, are then introduced, followed by recent research work and possible future challenges. <s> BIB001 </s> Anomaly Detection and Mitigation at Internet Scale: A Survey <s> Introduction <s> With large scale botnets emerging as one of the major current threats, the automatic detection of botnet traffic is of high importance for service providers and large campus network monitoring. Faced with high speed network connections, detecting botnets must be efficient and accurate. This paper proposes a novel approach for this task, where NetFlow related data is correlated and a host dependency model is leveraged for advanced data mining purposes. We extend the popular linkage analysis algorithm PageRank [27] with an additional clustering process in order to efficiently detect stealthy botnets using peer-to-peer communication infrastructures and not exhibiting large volumes of traffic. The key conceptual component in our approach is to analyze communication behavioral patterns and to infer potential botnet activities. <s> BIB002 </s> Anomaly Detection and Mitigation at Internet Scale: A Survey <s> Introduction <s> Botnets continue to be a significant problem on the Internet. Accordingly, a great deal of research has focused on methods for detecting and mitigating the effects of botnets. Two of the primary factors preventing the development of effective large-scale, wide-area botnet detection systems are seemingly contradictory. 
On the one hand, technical and administrative restrictions result in a general unavailability of raw network data that would facilitate botnet detection on a large scale. On the other hand, were this data available, real-time processing at that scale would be a formidable challenge. In contrast to raw network data, NetFlow data is widely available. However, NetFlow data imposes several challenges for performing accurate botnet detection. In this paper, we present Disclosure, a large-scale, wide-area botnet detection system that incorporates a combination of novel techniques to overcome the challenges imposed by the use of NetFlow data. In particular, we identify several groups of features that allow Disclosure to reliably distinguish C&C channels from benign traffic using NetFlow records (i.e., flow sizes, client access patterns, and temporal behavior). To reduce Disclosure's false positive rate, we incorporate a number of external reputation scores into our system's detection procedure. Finally, we provide an extensive evaluation of Disclosure over two large, real-world networks. Our evaluation demonstrates that Disclosure is able to perform real-time detection of botnet C&C channels over datasets on the order of billions of flows per day. <s> BIB003 </s> Anomaly Detection and Mitigation at Internet Scale: A Survey <s> Introduction <s> Distributed and coordinated attacks in computer networks are causing considerable economic losses worldwide in recent years. This is mainly due to the transition of attackers’ operational patterns towards a more sophisticated and more global behavior. This fact is leading current intrusion detection systems to be more likely to generate false alarms. In this context, this paper describes the design of a collaborative intrusion detection network (CIDN) that is capable of building and sharing collective knowledge about isolated alarms in order to efficiently and accurately detect distributed attacks. 
It has been also strengthened with a reputation mechanism aimed to improve the detection coverage by dropping false or bogus alarms that arise from malicious or misbehaving nodes. This model will enable a CIDN to detect malicious behaviors according to the trustworthiness of the alarm issuers, calculated from previous interactions with the system. Experimental results will finally demonstrate how entities are gradually isolated as their behavior worsens throughout the time. <s> BIB004
Network attacks pose a significant problem to the Internet landscape, causing substantial financial losses. distinguish methods for attack detection according to their detection methodology, their locality, and the dataset they use. The detection methodology is classed as either signature-based or anomaly-based BIB001 . Obvious disadvantages of a signature-based approach are the need for up-to-date signatures and the restriction to detecting only previously known attacks. The anomaly-based technique, on the other hand, searches for suspicious behavior, so it is also possible to detect new attacks. The locality is divided into host-based and network-based approaches . To enforce the host-based method, access to the devices of end-users is needed. This poses some problems, e.g. due to the bring-your-own-device concept or due to end-users who do not make use of host-based techniques or do not keep them up to date. Moreover, new platforms such as mobile phones keep emerging for which no host-based approach may be available yet. A network-based approach, on the contrary, provides both a global view and global administration, which makes event correlation easier. Finally, detection may be performed on different datasets. As of today, common datasets for network-based methods are raw packet data, NetFlow data, and system log files. A study performed by shows that Internet Service Provider (ISP) networks are considered to be key points for botnet mitigation, which is one important aspect of attack detection and mitigation. In order to leverage this key position of ISPs in the detection and mitigation of cyber-criminal activities, we assume that a network-based anomaly detection system for detecting anomalous events has to be placed at an ISP node. This offers the possibility of correlating isolated anomalous events to gain better knowledge of them and to detect distributed attacks, as shown in BIB004 .
Recently, the network security research community has been discussing the advantages of network-based anomaly detection based on NetFlow data . NetFlow is more feasible at Internet scale than, e.g., raw packet data, because it is created during packet forwarding and preserves users' privacy. BIB002 and BIB003 propose NetFlow-based detection mechanisms for detecting botnets in large-scale networks. To sum up, an important defense strategy against the underground economy is to implement flow-based anomaly detection algorithms at ISP nodes and to exchange status information with third parties. But will such an approach be adopted by ISPs? Do ISPs share and exchange status information with other providers on the basis of a standardized format? To gain insight into the real-world processes, structures, and capabilities of IT companies and the computer networks they run, we set up a questionnaire of 56 questions, which was answered by 135 respondents from ISPs and other network operators. The paper is organized as follows. In Section 2 we describe the setup of our survey. The result set is analyzed and evaluated in Section 3. In Section 4 the paper is concluded and future research problems are discussed.
Anomaly Detection and Mitigation at Internet Scale: A Survey <s> Data and Tools <s> With large scale botnets emerging as one of the major current threats, the automatic detection of botnet traffic is of high importance for service providers and large campus network monitoring. Faced with high speed network connections, detecting botnets must be efficient and accurate. This paper proposes a novel approach for this task, where NetFlow related data is correlated and a host dependency model is leveraged for advanced data mining purposes. We extend the popular linkage analysis algorithm PageRank [27] with an additional clustering process in order to efficiently detect stealthy botnets using peer-to-peer communication infrastructures and not exhibiting large volumes of traffic. The key conceptual component in our approach is to analyze communication behavioral patterns and to infer potential botnet activities. <s> BIB001 </s> Anomaly Detection and Mitigation at Internet Scale: A Survey <s> Data and Tools <s> Botnets continue to be a significant problem on the Internet. Accordingly, a great deal of research has focused on methods for detecting and mitigating the effects of botnets. Two of the primary factors preventing the development of effective large-scale, wide-area botnet detection systems are seemingly contradictory. On the one hand, technical and administrative restrictions result in a general unavailability of raw network data that would facilitate botnet detection on a large scale. On the other hand, were this data available, real-time processing at that scale would be a formidable challenge. In contrast to raw network data, NetFlow data is widely available. However, NetFlow data imposes several challenges for performing accurate botnet detection. In this paper, we present Disclosure, a large-scale, wide-area botnet detection system that incorporates a combination of novel techniques to overcome the challenges imposed by the use of NetFlow data. 
In particular, we identify several groups of features that allow Disclosure to reliably distinguish C&C channels from benign traffic using NetFlow records (i.e., flow sizes, client access patterns, and temporal behavior). To reduce Disclosure's false positive rate, we incorporate a number of external reputation scores into our system's detection procedure. Finally, we provide an extensive evaluation of Disclosure over two large, real-world networks. Our evaluation demonstrates that Disclosure is able to perform real-time detection of botnet C&C channels over datasets on the order of billions of flows per day. <s> BIB002
As stated in the introduction, the locality at an ISP node offers great possibilities for detecting and correlating anomalous events in real time. In this section we provide the results of our survey with respect to the data acquired and the tools used to detect attacks. In Section 1 we discussed different kinds of data sources for anomaly detection. We are interested in whether flow data is available for anomaly detection. Once again, we consider this important for assessing the feasibility of current scientific anomaly detection approaches, especially the promising algorithms based on network flow data BIB001 BIB002 . Flow data contains statistical network information about a unidirectional data stream between two network devices in a certain time frame (e.g. source/destination IP address, source/destination port, etc.). There are different network flow formats. The common ones are NetFlow, developed by Cisco, its successor IPFIX (Internet Protocol Flow Information Export, ), and sFlow . Figure 3a shows the responses to the question of which kinds of data the companies currently use for attack detection. The number of responses is 31. A majority of 61% actually use SNMP data; SNMP is a protocol for exchanging management information between network devices. SNMP is, just like NetFlow, a passive measurement technology; however, NetFlow provides the advantage of containing more detailed information. As also shown in Figure 3a , SNMP data is closely followed by NetFlow data and other server logs, at 58% in each case. Additional flow formats like sFlow and IPFIX are used by 29% and 32% of the attendees, respectively. On the other hand, only a small minority of 10% make use of raw packet data for anomaly detection. The next questions address the technical ability to collect the three common flow data formats. The outcome is illustrated in Figure 3b . Concerning NetFlow (version 5 or version 9), 33 of the 47 participants answering this question, and hence 70%, have this capability.
The ability to collect sFlow is reported by 24 of 43 responding participants, i.e. 56%. However, only 4 of 36 replying attendees (corresponding to 11%) are able to collect IPFIX data with their company's current infrastructure. But IPFIX is much newer than NetFlow, which perhaps explains this fact. Finally, we aim at comparing flow-based algorithms to the well-known deep packet inspection. We first asked for the technical ability to perform deep packet inspection, i.e. to collect raw packet data. Although 73% of the 49 responding participants have the ability to do that, only 50% of them think that this is a feasible approach. Their main argument against collecting raw data is the huge amount of network traffic to process. Furthermore, 56% of them think that raw packet data endangers the customers' privacy and requires too many human resources. Further mentioned disadvantages of deep packet inspection are the financial investment (44%) and prohibition by legal or regulatory requirements (44%). In our view, flow data is privacy-friendly. To support this claim, we asked the participants whether collecting and processing NetFlow data is superior to collecting and processing raw packet data in protecting the customers' privacy. 63% of the 41 respondents agree and 37% disagree with this statement. In summary, flow-based data sources, such as NetFlow, are common, available, and privacy-friendly data sources at network nodes. They thus lend themselves to techniques for detecting anomalous events in networks. These results support our assumption that there is a demand for network-based anomaly detection systems based on NetFlow data.
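The flow abstraction discussed above can be sketched in a few lines: packets sharing a unidirectional 5-tuple are collapsed into one record holding only counters, which is why flow data is both compact and payload-free. The field selection below is a simplification for illustration (with made-up example addresses), not a faithful NetFlow export format:

```python
from collections import defaultdict

# Collapse packets into NetFlow-style flow records: one record per
# unidirectional 5-tuple, carrying only statistics (no payload, hence
# the privacy argument made above).
packets = [
    # (src_ip, dst_ip, src_port, dst_port, proto, size_in_bytes)
    ('10.0.0.1', '192.0.2.7', 51514, 80, 'TCP', 1500),
    ('10.0.0.1', '192.0.2.7', 51514, 80, 'TCP', 900),
    ('10.0.0.2', '192.0.2.7', 40000, 443, 'TCP', 200),
]

flows = defaultdict(lambda: {'packets': 0, 'bytes': 0})
for *key, size in packets:
    flows[tuple(key)]['packets'] += 1
    flows[tuple(key)]['bytes'] += size

for key, stats in flows.items():
    print(key, stats)
```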
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> This paper presents a machine learning approach to optimizing a quadrupedal trot gait for forward speed. Given a parameterized walk designed for a specific robot, we propose using a form of policy gradient reinforcement learning to automatically search the set of possible parameters with the goal of finding the fastest possible walk. We implement and test our approach on a commercially available quadrupedal robot platform, namely the Sony Aibo robot. After about three hours of learning, all on the physical robots and with no human intervention other than to change the batteries, the robots achieved a gait faster than any previously known gait known for the Aibo, significantly outperforming a variety of existing hand-coded and learned solutions. <s> BIB001 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Domestic service robots have long been a staple of science fiction and commercial visions of the future. Until recently, we have only been able to speculate about what the experience of using such a device might be. Current domestic service robots, introduced as consumer products, allow us to make this vision a reality.This paper presents ethnographic research on the actual use of these products, to provide a grounded understanding of how design can influence human-robot interaction in the home. We used an ecological approach to broadly explore the use of this technology in this context, and to determine how an autonomous, mobile robot might "fit" into such a space. We offer initial implications for the design of these products: first, the way the technology is introduced is critical; second, the use of the technology becomes social; and third, that ideally, homes and domestic service robots must adapt to each other. 
<s> BIB002 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> In this paper, a new approach is developed for solving the problem of mobile robot path planning in an unknown dynamic environment based on Q-learning. Q-learning algorithms have been used widely for solving real world problems, especially in robotics since it has been proved to give reliable and efficient solutions due to its simple and well developed theory. However, most of the researchers who tried to use Q-learning for solving the mobile robot navigation problem dealt with static environments; they avoided using it for dynamic environments because it is a more complex problem that has infinite number of states. This great number of states makes the training for the intelligent agent very difficult. In this paper, the Q-learning algorithm was applied for solving the mobile robot navigation in dynamic environment problem by limiting the number of states based on a new definition for the states space. This has the effect of reducing the size of the Q-table and hence, increasing the speed of the navigation algorithm. The conducted experimental simulation scenarios indicate the strength of the new proposed approach for mobile robot navigation in dynamic environment. The results show that the new approach has a high Hit rate and that the robot succeeded to reach its target in a collision free path in most cases which is the most desirable feature in any navigation algorithm. <s> BIB003 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Photographs taken through a window are often compromised by dirt or rain present on the window surface. Common cases of this include pictures taken from inside a vehicle, or outdoor security cameras mounted inside a protective enclosure. 
At capture time, defocus can be used to remove the artifacts, but this relies on achieving a shallow depth-of-field and placement of the camera close to the window. Instead, we present a post-capture image processing solution that can remove localized rain and dirt artifacts from a single image. We collect a dataset of clean/corrupted image pairs which are then used to train a specialized form of convolutional neural network. This learns how to map corrupted image patches to clean ones, implicitly capturing the characteristic appearance of dirt and water droplets in natural images. Our models demonstrate effective removal of dirt and rain in outdoor test conditions. <s> BIB004 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Reinforcement learning offers to robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors. Conversely, the challenges of robotic problems provide both inspiration, impact, and validation for developments in reinforcement learning. The relationship between disciplines has sufficient promise to be likened to that between physics and mathematics. In this article, we attempt to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots. We highlight both key challenges in robot reinforcement learning as well as notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our paper lies on the choice between model-based and model-free as well as between value-function-based and policy-search methods. By analyzing a simple problem in some detail we demonstrate how reinforcement learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research. 
<s> BIB005 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. <s> BIB006 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation. 
<s> BIB007 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. <s> BIB008 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. 
It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors. <s> BIB009 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> In this paper, we address the problem of estimating and removing non-uniform motion blur from a single blurry image. We propose a deep learning approach to predicting the probabilistic distribution of motion blur at the patch level using a convolutional neural network (CNN). We further extend the candidate set of motion kernels predicted by the CNN using carefully designed image rotations. A Markov random field model is then used to infer a dense non-uniform motion blur field enforcing motion smoothness. Finally, motion blur is removed by a non-uniform deblurring model using patch-level image prior. Experimental evaluations show that our approach can effectively estimate and remove complex non-uniform motion blur that is not handled well by previous approaches. 
<s> BIB010 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image. <s> BIB011 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. 
We achieve state-of-the-art performance on benchmarks for all three tasks. <s> BIB012 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a partially observed guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods. <s> BIB013 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> The ability to act in multiple environments and transfer previous knowledge to new situations can be considered a critical aspect of any intelligent agent. Towards this goal, we define a novel method of multitask and transfer learning that enables an autonomous agent to learn how to behave in multiple tasks simultaneously, and then generalize its knowledge to new domains. 
This method, termed "Actor-Mimic", exploits the use of deep reinforcement learning and model compression techniques to train a single policy network that learns how to act in a set of distinct tasks by using the guidance of several expert teachers. We then show that the representations learnt by the deep policy network are capable of generalizing to new tasks with no prior expert guidance, speeding up learning in novel environments. Although our method can in general be applied to a wide range of problems, we use Atari games as a testing environment to demonstrate these methods. <s> BIB014 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> HighlightsWe propose a conceptual model for customer orientation in intelligent logistics.We provide a set of structured requirements for a successful customer-oriented intelligent logistics system.We illustrate the model and structured requirements in the context using example from our current research. The role of logistics in effective supply chain management is increasingly critical, and researchers and practitioners have recently focused their attention in designing more intelligent systems to address today's challenges. In this paper, we focus on one such challenge concerning improving the role of the customer in logistics operations. In particular, we identify specific developments in the systems governing core logistics operations, which will enhance the customer experience. This paper proposes a conceptual model for customer orientation in intelligent logistics and describes a number of specific developments the authors are involved in. <s> BIB015 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. 
The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. <s> BIB016 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Single image haze removal is a challenging ill-posed problem. Existing methods use various constraints/priors to get plausible dehazing solutions. The key to achieve haze removal is to estimate a medium transmission map for an input hazy image. In this paper, we propose a trainable end-to-end system called DehazeNet, for medium transmission estimation. DehazeNet takes a hazy image as input, and outputs its medium transmission map that is subsequently used to recover a haze-free image via atmospheric scattering model. DehazeNet adopts convolutional neural network-based deep architecture, whose layers are specially designed to embody the established assumptions/priors in image dehazing. Specifically, the layers of Maxout units are used for feature extraction, which can generate almost all haze-relevant features. We also propose a novel nonlinear activation function in DehazeNet, called bilateral rectified linear unit, which is able to improve the quality of recovered haze-free image. We establish connections between the components of the proposed DehazeNet and those used in existing methods. 
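The bilateral rectified linear unit just mentioned can be sketched as a two-sided clipping nonlinearity; a minimal illustration (the band edges `t_min` and `t_max` are illustrative defaults, not values from the paper):

```python
def brelu(x, t_min=0.0, t_max=1.0):
    # Bilateral ReLU: identity inside [t_min, t_max], clipped outside,
    # which keeps the estimated transmission values in a bounded range.
    return min(max(x, t_min), t_max)
```

Unlike a plain ReLU, activations are saturated on both sides of the band, matching the bounded nature of a transmission map.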
Experiments on benchmark images show that DehazeNet achieves superior performance over existing methods, yet remains efficient and easy to use. <s> BIB017 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> This report summarizes the tutorial presented by the author at NIPS 2016 on generative adversarial networks (GANs). The tutorial describes: (1) Why generative modeling is a topic worth studying, (2) how generative models work, and how GANs compare to other generative models, (3) the details of how GANs work, (4) research frontiers in GANs, and (5) state-of-the-art image models that combine GANs with other methods. Finally, the tutorial contains three exercises for readers to complete, and the solutions to these exercises. <s> BIB018 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions. <s> BIB019 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems.
These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either. <s> BIB020 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation. 
<s> BIB021 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training, allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input. <s> BIB022 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample.
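The pooling-index upsampling described above can be sketched in one dimension (SegNet pools in 2-D; the 1-D lists and window size here are simplifying assumptions):

```python
def max_pool_with_indices(x, k=2):
    # 1-D max pool that also records the argmax position in each window,
    # as the encoder side does during max-pooling.
    pooled, indices = [], []
    for i in range(0, len(x), k):
        window = x[i:i + k]
        j = max(range(k), key=window.__getitem__)
        pooled.append(window[j])
        indices.append(j)
    return pooled, indices

def unpool(pooled, indices, k=2):
    # Decoder-side non-linear upsampling: place each pooled value back at
    # its recorded position, zeros elsewhere -- a sparse map that is then
    # densified with trainable convolutions.
    out = [0.0] * (len(pooled) * k)
    for i, (v, j) in enumerate(zip(pooled, indices)):
        out[i * k + j] = v
    return out
```

Because the positions are stored rather than learned, the decoder needs no parameters for the upsampling step itself.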
The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well-known DeepLab-LargeFOV [3], DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet. <s> BIB023 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Label estimation is an important component in an unsupervised person re-identification (re-ID) system. This paper focuses on cross-camera label estimation, which can be subsequently used in feature learning to learn robust re-ID models. Specifically, we propose to construct a graph for samples in each camera, and then a graph matching scheme is introduced for cross-camera labeling association. While labels directly output from existing graph matching methods may be noisy and inaccurate due to significant cross-camera variations, this paper proposes a dynamic graph matching (DGM) method. DGM iteratively updates the image graph and the label estimation process by learning a better feature space with intermediate estimated labels.
DGM is advantageous in two aspects: 1) the accuracy of estimated labels is improved significantly with the iterations; 2) DGM is robust to noisy initial training data. Extensive experiments conducted on three benchmarks including the large-scale MARS dataset show that DGM yields competitive performance to fully supervised baselines, and outperforms competing unsupervised learning methods. <s> BIB024 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Reinforcement learning is an appealing approach for allowing robots to learn new tasks. Relevant literature reveals a plethora of methods, but at the same time makes clear the lack of implementations for dealing with real life challenges. Current expectations raise the demand for adaptable robots. We argue that, by employing model-based reinforcement learning, the—now limited—adaptability characteristics of robotic systems can be expanded. Also, model-based reinforcement learning exhibits advantages that makes it more applicable to real life use-cases compared to model-free methods. Thus, in this survey, model-based methods that have been applied in robotics are covered. We categorize them based on the derivation of an optimal policy, the definition of the returns function, the type of the transition model and the learned task. Finally, we discuss the applicability of model-based reinforcement learning approaches in new applications, taking into consideration the state of the art in both algorithms and hardware. <s> BIB025 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> People can learn a wide range of tasks from their own experience, but can also learn from observing other creatures. This can accelerate acquisition of new skills even when the observed agent differs substantially from the learning agent in terms of morphology. 
In this paper, we examine how reinforcement learning algorithms can transfer knowledge between morphologically different agents (e.g., different robots). We introduce a problem formulation where two agents are tasked with learning multiple skills by sharing information. Our method uses the skills that were learned by both agents to train invariant feature spaces that can then be used to transfer other skills from one agent to another. The process of learning these invariant feature spaces can be viewed as a kind of "analogy making," or implicit learning of partial correspondences between two distinct domains. We evaluate our transfer learning algorithm in two simulated robotic manipulation skills, and illustrate that we can transfer knowledge between simulated robotic arms with different numbers of links, as well as simulated arms with different actuation mechanisms, where one robot is torque-driven while the other is tendon-driven. <s> BIB026 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> We present PRM-RL, a hierarchical method for long-range navigation task completion that combines sampling-based path planning with reinforcement learning (RL) agents. The RL agents learn short-range, point-to-point navigation policies that capture robot dynamics and task constraints without knowledge of the large-scale topology, while the sampling-based planners provide an approximate map of the space of possible configurations of the robot from which collision-free trajectories feasible for the RL agents can be identified. The same RL agents are used to control the robot under the direction of the planning, enabling long-range navigation. We use Probabilistic Roadmaps (PRMs) for the sampling-based planner. The RL agents are constructed using feature-based and deep neural net policies in continuous state and action spaces.
We evaluate PRM-RL on two navigation tasks with non-trivial robot dynamics: end-to-end differential drive indoor navigation in office environments, and aerial cargo delivery in urban environments with load displacement constraints. These evaluations included both simulated environments and on-robot tests. Our results show improvement in navigation task completion over both RL agents on their own and traditional sampling-based planners. In the indoor navigation task, PRM-RL successfully completes up to 215 meters long trajectories under noisy sensor conditions, and the aerial cargo delivery completes flights over 1000 meters without violating the task constraints in an environment 63 million times larger than used in training. <s> BIB027 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach. 
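The cycle consistency constraint described above reduces to an L1 penalty on round trips through the two mappings; a minimal sketch in which `G` and `F` are stand-in callables rather than learned networks:

```python
def cycle_loss(x, y, G, F):
    # L1 penalty pushing F(G(x)) back to x and G(F(y)) back to y -- the
    # constraint that makes unpaired image translation well-posed.
    l1 = lambda a, b: sum(abs(ai - bi) for ai, bi in zip(a, b)) / len(a)
    return l1([F(G(xi)) for xi in x], x) + l1([G(F(yi)) for yi in y], y)
```

With exact inverses the penalty vanishes; in training it is minimised jointly with the two adversarial losses.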
<s> BIB028 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method. 
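The perceptual loss just described combines a feature-space content term with a weighted adversarial term; a sketch on plain lists (the VGG feature extraction is elided, and the 1e-3 adversarial weight is an assumption standing in for the paper's setting):

```python
import math

def perceptual_loss(feat_sr, feat_hr, d_sr, w_adv=1e-3):
    # Content term: MSE between feature representations of the
    # super-resolved and ground-truth images (VGG features in the paper;
    # plain lists here). Adversarial term: non-saturating generator loss
    # from the discriminator's score on the super-resolved image.
    content = sum((a - b) ** 2 for a, b in zip(feat_sr, feat_hr)) / len(feat_sr)
    adv = -math.log(d_sr + 1e-8)
    return content + w_adv * adv
```

Measuring the content term in feature space rather than pixel space is what lets the generator favour plausible texture over blurry MSE-optimal averages.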
<s> BIB029 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> The aim of this paper is to give an overview of domain adaptation and transfer learning with a specific view on visual applications. After a general motivation, we first position domain adaptation in the larger transfer learning problem. Second, we try to address and analyze briefly the state-of-the-art methods for different types of scenarios, first describing the historical shallow methods, addressing both the homogeneous and the heterogeneous domain adaptation methods. Third, we discuss the effect of the success of deep convolutional architectures which led to new type of domain adaptation methods that integrate the adaptation within the deep architecture. Fourth, we overview the methods that go beyond image categorization, such as object detection or image segmentation, video analyses or learning visual attributes. Finally, we conclude the paper with a section where we relate domain adaptation to other machine learning solutions. <s> BIB030 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. 
Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional Generative Adversarial Network (ID-CGAN), which incorporates quantitative, visual and discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance. <s> BIB031 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Detecting small objects is notoriously challenging due to their low resolution and noisy representation. Existing object detection pipelines usually detect small objects through learning representations of all the objects at multiple scales. However, the performance gain of such ad hoc architectures is usually limited to pay off the computational cost. In this work, we address the small object detection problem by developing a single architecture that internally lifts representations of small objects to super-resolved ones, achieving similar characteristics as large objects and thus more discriminative for detection.
For this purpose, we propose a new Perceptual Generative Adversarial Network (Perceptual GAN) model that improves small object detection through narrowing representation difference of small objects from the large ones. Specifically, its generator learns to transfer perceived poor representations of the small objects to super-resolved ones that are similar enough to real large objects to fool a competing discriminator. Meanwhile its discriminator competes with the generator to identify the generated representation and imposes an additional perceptual requirement - generated representations of small objects must be beneficial for detection purpose - on the generator. Extensive evaluations on the challenging Tsinghua-Tencent 100K [45] and the Caltech [9] benchmark well demonstrate the superiority of Perceptual GAN in detecting small objects, including traffic signs and pedestrians, over well-established state-of-the-arts. <s> BIB032 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Obstacle avoidance is a fundamental requirement for autonomous robots which operate in, and interact with, the real world. When perception is limited to monocular vision avoiding collision becomes significantly more challenging due to the lack of 3D information. Conventional path planners for obstacle avoidance require tuning a number of parameters and do not have the ability to directly benefit from large datasets and continuous use. In this paper, a dueling architecture based deep double-Q network (D3QN) is proposed for obstacle avoidance, using only monocular RGB vision. Based on the dueling and double-Q mechanisms, D3QN can efficiently learn how to avoid obstacles in a simulator even with very noisy depth information predicted from RGB image. 
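The dueling mechanism mentioned above splits the action-value function into a state value and mean-centred action advantages; a minimal sketch of the aggregation step:

```python
def dueling_q(value, advantages):
    # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a): subtracting the mean
    # advantage makes the value/advantage decomposition identifiable.
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]
```

The separate value stream lets the network learn which states matter without having to evaluate every action, which is part of why dueling variants learn faster on tasks like obstacle avoidance.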
Extensive experiments show that D3QN enables twofold acceleration on learning compared with a normal deep Q network and the models trained solely in virtual environments can be directly transferred to real robots, generalizing well to various new environments with previously unseen dynamic objects. <s> BIB033 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> We formulate tracking as an online decision-making process, where a tracking agent must follow an object despite ambiguous image frames and a limited computational budget. Crucially, the agent must decide where to look in the upcoming frames, when to reinitialize because it believes the target has been lost, and when to update its appearance model for the tracked object. Such decisions are typically made heuristically. Instead, we propose to learn an optimal decision-making policy by formulating tracking as a partially observable decision-making process (POMDP). We learn policies with deep reinforcement learning algorithms that need supervision (a reward signal) only when the track has gone awry. We demonstrate that sparse rewards allow us to quickly train on massive datasets, several orders of magnitude more than past work. Interestingly, by treating the data source of Internet videos as unlimited streams, we both learn and evaluate our trackers in a single, unified computational stream. <s> BIB034 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> This work introduces a method to tune a sequence-based generative model for molecular de novo design that through augmented episodic likelihood can learn to generate structures with certain specified desirable properties. We demonstrate how this model can execute a range of tasks such as generating analogues to a query structure and generating compounds predicted to be active against a biological target. 
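The augmented episodic likelihood mentioned above can be read as pulling the agent's sequence log-likelihood towards a reward-shifted prior likelihood; a hypothetical sketch (the squared-gap form and the sigma value are illustrative assumptions, not the paper's exact formulation):

```python
def augmented_likelihood_loss(logp_prior, logp_agent, score, sigma=20.0):
    # Shift the prior log-likelihood by a scaled property score, then
    # penalise the squared gap to the agent's log-likelihood, so the
    # tuned model stays close to the prior while favouring high scores.
    augmented = logp_prior + sigma * score
    return (augmented - logp_agent) ** 2
```

Anchoring to the prior is what keeps generated molecules chemically sensible while the score term steers them towards the desired property.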
As a proof of principle, the model is first trained to generate molecules that do not contain sulphur. As a second example, the model is trained to generate analogues to the drug Celecoxib, a technique that could be used for scaffold hopping or library expansion starting from a single molecule. Finally, when tuning the model towards generating compounds predicted to be active against the dopamine receptor type 2, the model generates structures of which more than 95% are predicted to be active, including experimentally confirmed actives that have not been included in either the generative model or the activity prediction model. <s> BIB035 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
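The "small number of gradient steps" training just described can be sketched with a first-order approximation on scalar toy tasks (the quadratic task gradients below are illustrative assumptions):

```python
def maml_step(theta, task_grads, alpha=0.1, beta=0.1):
    # One meta-update: adapt to each task with a single inner gradient
    # step, then move the shared initialization using the post-adaptation
    # gradients (the first-order MAML approximation).
    meta_grad = 0.0
    for grad_fn in task_grads:
        adapted = theta - alpha * grad_fn(theta)  # inner adaptation step
        meta_grad += grad_fn(adapted)             # outer (meta) gradient
    return theta - beta * meta_grad / len(task_grads)
```

The parameters are thus optimised not for any single task but for how well they perform after one fine-tuning step on each task.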
<s> BIB036 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new target goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to the task of target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows it to generalize better. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and across scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment. The supplementary video can be accessed at the following link: this https URL <s> BIB037 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Imitation learning has been commonly applied to solve different tasks in isolation. This usually requires either careful feature engineering, or a significant number of samples. This is far from what we desire: ideally, robots should be able to learn from very few demonstrations of any given task, and instantly generalize to new situations of the same task, without requiring task-specific engineering.
In this paper, we propose a meta-learning framework for achieving such capability, which we call one-shot imitation learning. Specifically, we consider the setting where there is a very large set of tasks, and each task has many instantiations. For example, a task could be to stack all blocks on a table into a single tower, another task could be to place all blocks on a table into two-block towers, etc. In each case, different instances of the task would consist of different sets of blocks with different initial states. At training time, our algorithm is presented with pairs of demonstrations for a subset of all tasks. A neural net is trained that takes as input one demonstration and the current state (which initially is the initial state of the other demonstration of the pair), and outputs an action with the goal that the resulting sequence of states and actions matches as closely as possible with the second demonstration. At test time, a demonstration of a single instance of a new task is presented, and the neural net is expected to perform well on new instances of this new task. The use of soft attention allows the model to generalize to conditions and tasks unseen in the training data. We anticipate that by training this model on a much greater variety of tasks and settings, we will obtain a general system that can turn any demonstrations into robust policies that can accomplish an overwhelming variety of tasks. Videos available at this https URL . <s> BIB038 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels.
Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep $Q$-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field. <s> BIB039 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Most existing person re-identification (re-id) methods require supervised model learning from a separate large set of pairwise labelled training data for every single camera pair. This significantly limits their scalability and usability in real-world large scale deployments with the need for performing re-id across many camera views. To address this scalability problem, we develop a novel deep learning method for transferring the labelled information of an existing dataset to a new unseen (unlabelled) target domain for person re-id without any supervised learning in the target domain. Specifically, we introduce an Transferable Joint Attribute-Identity Deep Learning (TJ-AIDL) for simultaneously learning an attribute-semantic and identity-discriminative feature representation space transferrable to any new (unseen) target domain for re-id tasks without the need for collecting new labelled training data from the target domain (i.e. unsupervised learning in the target domain). 
Extensive comparative evaluations validate the superiority of this new TJ-AIDL model for unsupervised person re-id over a wide range of state-of-the-art methods on four challenging benchmarks including VIPeR, PRID, Market-1501, and DukeMTMC-ReID. <s> BIB040 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> As a new classification platform, deep learning has recently received increasing attention from researchers and has been successfully applied to many domains. In some domains, like bioinformatics and robotics, it is very difficult to construct a large-scale well-annotated dataset due to the expense of data acquisition and costly annotation, which limits its development. Transfer learning relaxes the hypothesis that the training data must be independent and identically distributed (i.i.d.) with the test data, which motivates us to use transfer learning to solve the problem of insufficient training data. This survey focuses on reviewing the current researches of transfer learning by using deep neural network and its applications. We defined deep transfer learning, category and review the recent research works based on the techniques used in deep transfer learning. <s> BIB041 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Monocular depth estimation using learning-based approaches has become promising in recent years. However, most monocular depth estimators either need to rely on large quantities of ground truth depth data, which is extremely expensive and difficult to obtain, or predict disparity as an intermediary step using a secondary supervisory signal leading to blurring and other artefacts. Training a depth estimation model using pixel-perfect synthetic data can resolve most of these issues but introduces the problem of domain bias. This is the inability to apply a model trained on synthetic data to real-world scenarios. 
With advances in image style transfer and its connections with domain adaptation (Maximum Mean Discrepancy), we take advantage of style transfer and adversarial training to predict pixel perfect depth from a single real-world color image based on training over a large corpus of synthetic environment data. Experimental results indicate the efficacy of our approach compared to contemporary state-of-the-art techniques. <s> BIB042 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> We present DeblurGAN, an end-to-end learned method for motion deblurring. The learning is based on a conditional GAN and the content loss. DeblurGAN achieves state-of-the art performance both in the structural similarity measure and visual appearance. The quality of the deblurring model is also evaluated in a novel way on a real-world problem - object detection on (de-)blurred images. The method is 5 times faster than the closest competitor - Deep-Deblur [25]. We also introduce a novel method for generating synthetic motion blurred images from sharp ones, allowing realistic dataset augmentation. The model, code and the dataset are available at https://github.com/KupynOrest/DeblurGAN <s> BIB043 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details at different subregions of the image by paying attentions to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. 
The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14% on the CUB dataset and 170.25% on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. It for the first time shows that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image. <s> BIB044 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Raindrops adhered to a glass window or camera lens can severely hamper the visibility of a background scene and degrade an image considerably. In this paper, we address the problem by visually removing raindrops, and thus transforming a raindrop degraded image into a clean one. The problem is intractable, since first the regions occluded by raindrops are not given. Second, the information about the background scene of the occluded regions is completely lost for most part. To resolve the problem, we apply an attentive generative network using adversarial training. Our main idea is to inject visual attention into both the generative and discriminative networks. During the training, our visual attention learns about raindrop regions and their surroundings. Hence, by injecting this information, the generative network will pay more attention to the raindrop regions and the surrounding structures, and the discriminative network will be able to assess the local consistency of the restored regions. This injection of visual attention to both generative and discriminative networks is the main contribution of this paper. Our experiments show the effectiveness of our approach, which outperforms the state of the art methods quantitatively and qualitatively. 
<s> BIB045 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Objects often occlude each other in scenes; Inferring their appearance beyond their visible parts plays an important role in scene understanding, depth estimation, object interaction and manipulation. In this paper, we study the challenging problem of completing the appearance of occluded objects. Doing so requires knowing which pixels to paint (segmenting the invisible parts of objects) and what color to paint them (generating the invisible parts). Our proposed novel solution, SeGAN, jointly optimizes for both segmentation and generation of the invisible parts of objects. Our experimental results show that: (a) SeGAN can learn to generate the appearance of the occluded parts of objects; (b) SeGAN outperforms state-of-the-art segmentation baselines for the invisible parts of objects; (c) trained on synthetic photo realistic images, SeGAN can reliably segment natural images; (d) by reasoning about occluder-occludee relations, our method can infer depth layering. <s> BIB046 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> In recent years, deep neural nets have triumphed over many computer vision problems, including semantic segmentation, which is a critical task in emerging autonomous driving and medical image diagnostics applications. In general, training deep neural nets requires a humongous amount of labeled data, which is laborious and costly to collect and annotate. Recent advances in computer graphics shed light on utilizing photo-realistic synthetic data with computer generated annotations to train neural nets. Nevertheless, the domain mismatch between real images and synthetic ones is the major challenge against harnessing the generated data and labels. 
In this paper, we propose a principled way to conduct structured domain adaption for semantic segmentation, i.e., integrating GAN into the FCN framework to mitigate the gap between source and target domains. Specifically, we learn a conditional generator to transform features of synthetic images to real-image like features, and a discriminator to distinguish them. For each training batch, the conditional generator and the discriminator compete against each other so that the generator learns to produce real-image like features to fool the discriminator; afterwards, the FCN parameters are updated to accommodate the changes of GAN. In experiments, without using labels of real image data, our method significantly outperforms the baselines as well as state-of-the-art methods by 12% ~ 20% mean IoU on the Cityscapes dataset. <s> BIB047 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Visual Domain Adaptation is a problem of immense importance in computer vision. Previous approaches showcase the inability of even deep neural networks to learn informative representations across domain shift. This problem is more severe for tasks where acquiring hand labeled data is extremely hard and tedious. In this work, we focus on adapting the representations learned by segmentation networks across synthetic and real domains. Contrary to previous approaches that use a simple adversarial objective or superpixel information to aid the process, we propose an approach based on Generative Adversarial Networks (GANs) that brings the embeddings closer in the learned feature space. To showcase the generality and scalability of our approach, we show that we can achieve state of the art results on two challenging scenarios of synthetic to real domain adaptation. Additional exploratory experiments show that our approach: (1) generalizes to unseen domains and (2) results in improved alignment of source and target distributions. 
<s> BIB048 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In our attempt, we present a "learning via translation" framework. In the baseline, we translate the labeled images from source to target domain in an unsupervised manner. We then train re-ID models with the translated images by supervised methods. Yet, being an essential part of this framework, unsupervised image-image translation suffers from the information loss of source-domain labels during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be maintained after translation. Second, given the fact that two domains have entirely different persons, a translated image should be dissimilar to any of the target IDs. To this end, we propose to preserve two types of unsupervised similarities, 1) self-similarity of an image before and after translation, and 2) domain-dissimilarity of a translated source image and a target image. Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN) which consists of an Siamese network and a CycleGAN. Through domain adaptation experiment, we show that images generated by SPGAN are more suitable for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets. <s> BIB049 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Mobile robot navigation in complex and dynamic environments is a challenging but important problem. Reinforcement learning approaches fail to solve these tasks efficiently due to reward sparsities, temporal complexities and high-dimensionality of sensorimotor spaces which are inherent in such problems. 
We present a novel approach to train action policies to acquire navigation skills for wheel-legged robots using deep reinforcement learning. The policy maps height-map image observations to motor commands to navigate to a target position while avoiding obstacles. We propose to acquire the multifaceted navigation skill by learning and exploiting a number of manageable navigation behaviors. We also introduce a domain randomization technique to improve the versatility of the training samples. We demonstrate experimentally a significant improvement in terms of data-efficiency, success rate, robustness against irrelevant sensory data, and also the quality of the maneuver skills. <s> BIB050 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Enabling robots to autonomously navigate complex environments is essential for real-world deployment. Prior methods approach this problem by having the robot maintain an internal map of the world, and then use a localization and planning method to navigate through the internal map. However, these approaches often include a variety of assumptions, are computationally intensive, and do not learn from failures. In contrast, learning-based methods improve as the robot acts in the environment, but are difficult to deploy in the real-world due to their high sample complexity. To address the need to learn complex policies with few samples, we propose a generalized computation graph that subsumes value-based model-free methods and model-based methods, with specific instantiations interpolating between model-free and model-based. We then instantiate this graph to form a navigation model that learns from raw images and is sample efficient. Our simulated car experiments explore the design decisions of our navigation model, and show our approach outperforms single-step and $N$-step double Q-learning. 
We also evaluate our approach on a real-world RC car and show it can learn to navigate through a complex indoor environment with a few hours of fully autonomous, self-supervised training. Videos of the experiments and code can be found at github.com/gkahn13/gcg <s> BIB051 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator -- a large-scale reinforcement learning environment based on real imagery. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings -- the Room-to-Room (R2R) dataset. <s> BIB052 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> We study active object tracking, where a tracker takes visual observations (i.e., frame sequences) as inputs and produces the corresponding camera control signals as outputs (e.g., move forward, turn left, etc.). 
Conventional methods tackle tracking and camera control tasks separately, and the resulting system is difficult to tune jointly. Such an approach also requires significant human efforts for image labeling and expensive trial-and-error system tuning in the real world. To address these issues, we propose, in this paper, an end-to-end solution via deep reinforcement learning. A ConvNet-LSTM function approximator is adopted for the direct frame-to-action prediction. We further propose environment augmentation techniques and a customized reward function which are crucial for successful training. The tracker trained in simulators (ViZDoom, Unreal Engine) demonstrates good generalization behaviors in the case of unseen object moving paths, unseen object appearances, unseen backgrounds, and distracting objects. The system is robust and can restore tracking after occasional loss of the target being tracked. We also find that the tracking ability, obtained solely from simulators, can potentially transfer to real-world scenarios. We demonstrate successful examples of such transfer, via experiments over the VOT dataset and the deployment of a real-world robot using the proposed active tracker trained in simulation. <s> BIB053 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Machines powered by artificial intelligence increasingly mediate our social, cultural, economic and political interactions. Understanding the behaviour of artificial intelligence systems is essential to our ability to control their actions, reap their benefits and minimize their harms. Here we argue that this necessitates a broad scientific research agenda to study machine behaviour that incorporates and expands upon the discipline of computer science and includes insights from across the sciences.
We first outline a set of questions that are fundamental to this emerging field and then explore the technical, legal and institutional constraints on the study of machine behaviour. <s> BIB054 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Intelligent transportation systems (ITSs) will be a major component of tomorrow's smart cities. However, realizing the true potential of ITSs requires ultralow latency and reliable data analytics solutions that combine, in real time, a heterogeneous mix of data stemming from the ITS network and its environment. Such data analytics capabilities cannot be provided by conventional cloud-centric data processing techniques whose communication and computing latency can be high. Instead, edge-centric solutions that are tailored to the unique ITS environment must be developed. In this article, an edge analytics architecture for ITSs is introduced in which data is processed at the vehicle or roadside smart sensor level to overcome the ITS's latency and reliability challenges. With a higher capability of passengers' mobile devices and intravehicle processors, such a distributed edge computing architecture leverages deep-learning techniques for reliable mobile sensing in ITSs. In this context, the ITS mobile edge analytics challenges pertaining to heterogeneous data, autonomous control, vehicular platoon control, and cyberphysical security are investigated. Then, different deep-learning solutions for such challenges are revealed.
Preliminary results show that the introduced edge analytics architecture, coupled with the power of deep-learning algorithms, provides a reliable, secure, and truly smart transportation environment. <s> BIB055 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Deep reinforcement learning (RL) methods have driven impressive advances in artificial intelligence in recent years, exceeding human performance in domains ranging from Atari to Go to no-limit poker. This progress has drawn the attention of cognitive scientists interested in understanding human learning. However, the concern has been raised that deep RL may be too sample-inefficient – that is, it may simply be too slow – to provide a plausible model of how humans learn. In the present review, we counter this critique by describing recently developed techniques that allow deep RL to operate more nimbly, solving problems much more quickly than previous methods. Although these techniques were developed in an AI context, we propose that they may have rich implications for psychology and neuroscience. A key insight, arising from these AI methods, concerns the fundamental connection between fast RL and slower, more incremental forms of learning. <s> BIB056 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Generative adversarial networks (GANs) are a hot research topic recently. GANs have been widely studied since 2014, and a large number of algorithms have been proposed. However, there is few comprehensive study explaining the connections among different GANs variants, and how they have evolved. In this paper, we attempt to provide a review on various GANs methods from the perspectives of algorithms, theory, and applications. Firstly, the motivations, mathematical representations, and structure of most GANs algorithms are introduced in details. 
Furthermore, GANs have been combined with other machine learning algorithms for specific applications, such as semi-supervised learning, transfer learning, and reinforcement learning. This paper compares the commonalities and differences of these GANs methods. Secondly, theoretical issues related to GANs are investigated. Thirdly, typical applications of GANs in image processing and computer vision, natural language processing, music, speech and audio, medical field, and data science are illustrated. Finally, the future open research problems for GANs are pointed out. <s> BIB057 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> <s> Autonomous systems possess the features of inferring their own ego-motion, autonomously understanding their surroundings, and planning trajectories. With the applications of deep learning and reinforcement learning, the perception and decision-making abilities of autonomous systems are being efficiently addressed, and many new learning-based algorithms have surfaced with respect to autonomous perception and decision-making. In this review, we focus on the applications of learning-based approaches in perception and decision-making in autonomous systems, which is different from previous reviews that discussed traditional methods. First, we delineate the existing classical simultaneous localization and mapping (SLAM) solutions and review the environmental perception and understanding methods based on deep learning, including deep learning-based monocular depth estimation, ego-motion prediction, image enhancement, object detection, semantic segmentation, and their combinations with traditional SLAM frameworks. Second, we briefly summarize the existing motion planning techniques, such as path planning and trajectory planning methods, and discuss the navigation methods based on reinforcement learning. 
Finally, we examine several challenges and promising directions identified in related research for future work at the intersection of computer science, automatic control, and robotics. <s> BIB058
Understanding the behavior of AI agents is essential to advancing AI technology BIB054 . With the rise of deep learning (DL) algorithms, hardware upgrades, and the availability of big data, AI has made huge progress in recent years BIB009 . Autonomous systems powered by AI, including unmanned vehicles, robotic manipulators, and drones, have been widely deployed across industries and in daily life, for example in intelligent transportation BIB055 , intelligent logistics BIB015 , and service robots BIB002 . However, because current perception and decision-making technologies remain limited in both accuracy and transferability, autonomous systems still have considerable room for improvement on complex, intelligent tasks. Owing to the ability of DL to capture high-dimensional data features BIB009 , DL-based algorithms are widely used in the perception and decision-making tasks of autonomous systems, including image super-resolution BIB006 , BIB016 , image deblurring/dehazing/rain removal BIB010 , BIB017 , BIB004 , semantic segmentation BIB011 , BIB023 , depth estimation BIB007 , BIB012 , person re-identification BIB024 , BIB040 , robot navigation BIB005 , BIB025 , and robotic manipulation BIB026 , BIB027 . However, most DL-based models achieve good accuracy but poor transferability: they are effective only on test data that share the distribution or task of the training data. When a well-trained model is transferred to other datasets or real-world tasks, its accuracy usually declines drastically, so transferability must be taken into account in practical applications BIB041 . As a result, current visual perception and decision-making methods cannot be applied directly in real autonomous systems.
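The accuracy drop under distribution shift described above can be illustrated with a toy, self-contained sketch: a nearest-centroid classifier is fit on a synthetic "source" domain and then evaluated on a "target" domain whose features are shifted. The model, the Gaussian data, and all numbers here are our own illustration, not from any of the surveyed methods.

```python
import numpy as np

# Toy illustration of the accuracy/transferability gap: a classifier fit on
# a source domain loses accuracy on a covariate-shifted target domain.
rng = np.random.default_rng(0)

def make_domain(shift, n=500):
    x0 = rng.normal(0.0 + shift, 1.0, (n, 2))   # class 0 samples
    x1 = rng.normal(3.0 + shift, 1.0, (n, 2))   # class 1 samples
    x = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return x, y

xs, ys = make_domain(shift=0.0)   # source domain
xt, yt = make_domain(shift=2.0)   # target domain (covariate shift)

# "Train": store one centroid per class, estimated on the source domain only.
centroids = np.stack([xs[ys == k].mean(axis=0) for k in (0, 1)])

def accuracy(x, y):
    # Predict the class of the nearest source-domain centroid.
    pred = np.argmin(((x[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    return float((pred == y).mean())

print(accuracy(xs, ys), accuracy(xt, yt))  # target accuracy is clearly lower
```

Domain adaptation methods, including the GAN-based ones reviewed below, aim precisely at closing this gap without labels from the target domain.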
Transfer learning, especially domain adaptation, improves the transferability of models between different domains, i.e., well-trained models can maintain good accuracy when applied to other testing domains. Recently, since adversarial learning, exemplified by generative adversarial networks (GANs), has shown promising results in image generation, a number of GAN-based methods have been proposed and achieved breakthroughs in the above computer vision tasks BIB028 , BIB042 , BIB029 , BIB043 . GANs have become increasingly important in AI due to their powerful generation and domain adaptation capabilities BIB057 , and have attracted growing attention since they were proposed by Goodfellow et al. BIB008 in 2014. A GAN is a generative model that sets up an adversarial game between a generator and a discriminator: the generator creates data to deceive the discriminator, while the discriminator distinguishes whether its input comes from real data or generated data. The two networks are optimized iteratively in this game until they reach a Nash equilibrium BIB018 . In particular, when a well-trained model must cope with different datasets or real scenes, GANs can be used for domain transfer by virtue of their ability to capture high-frequency features and generate sharp images BIB030 . Although some learning-based models focus mainly on accuracy BIB006 , BIB011 , BIB007 , GANs have demonstrated both good accuracy and transferability across various complex image tasks in autonomous systems and related fields, such as text-to-image generation BIB019 , BIB044 , image style transfer BIB028 , BIB020 , super-resolution BIB029 , image deblurring BIB043 , image rain removal BIB045 , BIB031 , object detection BIB032 , BIB046 , semantic segmentation BIB028 , BIB047 , BIB048 , person re-identification BIB049 , and video generation BIB021 .
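The adversarial game between generator and discriminator can be sketched in a deliberately tiny form: a one-dimensional GAN with a linear generator G(z) = a*z + b and a logistic discriminator D(x) = sigmoid(w*x + c), trained with hand-derived gradients on Gaussian data. This is a minimal sketch under our own assumptions (the 1-D setup, the non-saturating generator loss, and all hyperparameters are illustrative), not any surveyed architecture.

```python
import numpy as np

# Minimal 1-D GAN sketch: the generator tries to match real data ~ N(4, 0.5),
# while the discriminator learns to tell real from generated samples.
rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0          # generator parameters: G(z) = a*z + b
w, c = 0.1, 0.0          # discriminator parameters: D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(3000):
    x = rng.normal(4.0, 0.5, batch)   # real samples
    z = rng.normal(0.0, 1.0, batch)   # latent noise
    g = a * z + b                     # generated samples

    # Discriminator update: ascend log D(x) + log(1 - D(g)).
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    w += lr * (np.mean((1 - d_real) * x) - np.mean(d_fake * g))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: ascend the non-saturating objective log D(g).
    d_fake = sigmoid(w * g + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 10000) + b
print(round(float(np.mean(samples)), 2))  # the generated mean drifts toward 4
```

In practice both players are deep networks trained by automatic differentiation, but the alternating two-player update shown here is the same game that is iterated toward a Nash equilibrium.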
Meanwhile, as a powerful tool for decision-making and control, reinforcement learning (RL) has been extensively studied in recent years because it suits decision-making tasks in complex environments BIB003 , BIB001 . However, classical RL is limited in its action and sample spaces and generally works only in discrete settings. Moreover, when the input data are high-dimensional, such as images, audio, and videos, RL alone struggles to solve the problem. With the help of deep neural networks, deep RL (DRL), which combines the high-dimensional perceptual ability of DL with the decision-making ability of RL, has recently achieved promising results in many applications, such as obstacle avoidance BIB033 , BIB050 , robot navigation BIB051 , BIB052 , robotic manipulation BIB013 , BIB022 , video target tracking BIB034 , BIB053 , game playing BIB014 , [53] , and drug testing BIB035 , . However, DRL tends to require a large number of trials and needs a hand-specified reward function to define each task BIB056 . The former is time-consuming and the latter is difficult when training from scratch. To tackle these problems, the idea of learning to learn, called meta-learning, emerged . Compared with DRL, meta-learning makes learning more transferable and efficient by retaining previous experience to guide the learning of new tasks across domains. Therefore, meta-learning performs well on a variety of problems, especially in data-scarce environments, such as image recognition , classification BIB036 , robot navigation BIB037 , and robotic control BIB038 . With the development of DL, learning-based perception and decision-making algorithms for autonomous systems have become a hot research topic, and several reviews exist: Tang et al. BIB058 introduced the applications of learning-based methods in perception and decision-making for autonomous systems. Gui et al.
BIB057 gave a detailed overview of various GAN methods from the perspective of algorithms, theories and applications. Arulkumaran et al. BIB039 detailed the core algorithms of DRL and the advantages of RL for visual understanding tasks. Unlike previous surveys, we focus on reviewing learning-based approaches in the perception and decision-making tasks of autonomous systems from the perspectives of accuracy and transferability. The organization of this review is arranged as follows. Section II introduces transfer learning and its special case, domain adaptation. Then the basic concepts of adversarial learning, RL and meta-learning are presented. In Section III, we survey some recent developments by exploring various learning-based approaches in autonomous systems, taking into account both accuracy and transferability. In Section IV, we summarize some trends and challenges for autonomous systems. Conclusions are given in Section V. We summarize the abbreviations in this review in Table I .
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research. <s> BIB001 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> Transfer learning as a new machine learning paradigm has gained increasing attention lately. In situations where the training data in a target domain are not sufficient to learn predictive models effectively, transfer learning leverages auxiliary source data from other related source domains for learning. 
While most of the existing works in this area only focused on using the source data with the same structure as the target data, in this paper, we push this boundary further by proposing a heterogeneous transfer learning framework for knowledge transfer between text and images. We observe that for a target-domain classification problem, some annotated images can be found on many social Web sites, which can serve as a bridge to transfer knowledge from the abundant text documents available over the Web. A key question is how to effectively transfer the knowledge in the source data even though the text can be arbitrarily found. Our solution is to enrich the representation of the target images with semantic concepts extracted from the auxiliary source data through a novel matrix factorization method. By using the latent semantic features generated by the auxiliary data, we are able to build a better integrated image classifier. We empirically demonstrate the effectiveness of our algorithm on the Caltech-256 image dataset. <s> BIB002 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean discrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain.
Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification. <s> BIB003 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation. Recently, they have attained record accuracy on standard benchmark tasks of sentiment analysis across different text domains. SDAs learn robust data representations by reconstruction, recovering original features from data that are artificially corrupted with noise. In this paper, we propose marginalized SDA (mSDA) that addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features. In contrast to SDAs, our approach of mSDA marginalizes noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters--in fact, they are computed in closed-form. 
Consequently, mSDA, which can be implemented in only 20 lines of MATLAB™, significantly speeds up SDAs by two orders of magnitude. Furthermore, the representations learnt by mSDA are as effective as the traditional SDAs, attaining almost identical accuracies in benchmark tasks. <s> BIB004 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). ::: As the training progresses, the approach promotes the emergence of "deep" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. ::: Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets. <s> BIB005 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. 
Transfer learning <s> In pattern recognition and computer vision, one is often faced with scenarios where the training data used to learn a model have different distribution from the data on which the model is applied. Regardless of the cause, any distributional change that occurs after learning a classifier can degrade its performance at test time. Domain adaptation tries to mitigate this degradation. In this article, we provide a survey of domain adaptation methods for visual recognition. We discuss the merits and drawbacks of existing domain adaptation approaches and identify promising avenues for research in this rapidly evolving field. <s> BIB006 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multikernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks. 
<s> BIB007 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. ::: ::: The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. ::: ::: We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application. <s> BIB008 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> Machine learning and data mining techniques have been used in numerous real-world applications. 
An assumption of traditional machine learning methodologies is the training data and testing data are taken from the same domain, such that the input feature space and data distribution characteristics are the same. However, in some real-world machine learning scenarios, this assumption does not hold. There are cases where training data is expensive or difficult to collect. Therefore, there is a need to create high-performance learners trained with more easily obtained data from different domains. This methodology is referred to as transfer learning. This survey paper formally defines transfer learning, presents information on current solutions, and reviews applications applied to transfer learning. Lastly, there is information listed on software downloads for various transfer learning solutions and a discussion of possible future research work. The transfer learning solutions surveyed are independent of data size and can be applied to big data environments. <s> BIB009 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> Fully convolutional models for dense prediction have proven successful for a wide range of visual tasks. Such models perform well in a supervised setting, but performance can be surprisingly poor under domain shifts that appear mild to a human observer. For example, training on one city and testing on another in a different geographic region and/or weather condition may result in significantly degraded performance due to pixel-level distribution shift. In this paper, we introduce the first domain adaptive semantic segmentation method, proposing an unsupervised adversarial approach to pixel prediction problems. Our method consists of both global and category specific adaptation techniques. Global domain alignment is performed using a novel semantic segmentation network with fully convolutional domain adversarial learning. 
This initially adapted space then enables category specific adaptation through a generalization of constrained weak learning, with explicit transfer of the spatial layout from the source to the target domains. Our approach outperforms baselines across different settings on multiple large-scale datasets, including adapting across various real city environments, different synthetic sub-domains, from simulated to real environments, and on a novel large-scale dash-cam dataset. <s> BIB010 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> One of the main research problems in heterogeneous transfer learning is to determine whether a given source domain is effective in transferring knowledge to a target domain, and then to determine how much of the knowledge should be transferred from a source domain to a target domain. The main objective of this paper is to solve this problem by evaluating the relatedness among given domains through transferred weights. We propose a novel method to learn such transferred weights with the aid of co-occurrence data, which contain the same set of instances but in different feature spaces. Because instances with the same category should have similar features, our method is to compute their principal components in each feature space such that co-occurrence data can be rerepresented by these principal components. The principal component coefficients from different feature spaces for the same instance in the co-occurrence data have the same order of significance for describing the category information. By using these principal component coefficients, the Markov Chain Monte Carlo method is employed to construct a directed cyclic network where each node is a domain and each edge weight is the conditional dependence from one domain to another domain. Here, the edge weight of the network can be employed as the transferred weight from a source domain to a target domain. 
The weight values can be taken as a prior for setting parameters in the existing heterogeneous transfer learning methods to control the amount of knowledge transferred from a source domain to a target domain. The experimental results on synthetic and real-world data sets are reported to illustrate the effectiveness of the proposed method that can capture strong or weak relations among feature spaces, and enhance the learning performance of heterogeneous transfer learning. <s> BIB011 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> Appearance changes due to weather and seasonal conditions represent a strong impediment to the robust implementation of machine learning systems in outdoor robotics. While supervised learning optimises a model for the training domain, it will deliver degraded performance in application domains that underlie distributional shifts caused by these changes. Traditionally, this problem has been addressed via the collection of labelled data in multiple domains or by imposing priors on the type of shift between both domains. We frame the problem in the context of unsupervised domain adaptation and develop a framework for applying adversarial techniques to adapt popular, state-of-the-art network architectures with the additional objective to align features across domains. Moreover, as adversarial training is notoriously unstable, we first perform an extensive ablation study, adapting many techniques known to stabilise generative adversarial networks, and evaluate on a surrogate classification task with the same appearance change. The distilled insights are applied to the problem of free-space segmentation for motion planning in autonomous driving. <s> BIB012 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> Domain adaptation is critical for success in new, unseen environments. 
Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains. <s> BIB013 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> The aim of this paper is to give an overview of domain adaptation and transfer learning with a specific view on visual applications. After a general motivation, we first position domain adaptation in the larger transfer learning problem. Second, we try to address and analyze briefly the state-of-the-art methods for different types of scenarios, first describing the historical shallow methods, addressing both the homogeneous and the heterogeneous domain adaptation methods. Third, we discuss the effect of the success of deep convolutional architectures which led to new type of domain adaptation methods that integrate the adaptation within the deep architecture. Fourth, we overview the methods that go beyond image categorization, such as object detection or image segmentation, video analyses or learning visual attributes. 
Finally, we conclude the paper with a section where we relate domain adaptation to other machine learning solutions. <s> BIB014 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task. <s> BIB015 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> The latest proposed Broad Learning System (BLS) demonstrates an efficient and effective learning capability in many machine learning problems. 
In this paper, we apply the BLS to address transductive transfer learning problems, where the training (source) and test (target) data are drawn from the different but related distributions, which is a.k.a domain adaptation. We aim at learning from source data a well performing classifier on a different (but related) target data. A unified domain adaptation framework based on the BLS is developed for improving its transfer learning capability without loss of the computational efficiency. Two algorithms including BLS based source domain adaptation (BLS-SDA) and BLS based target domain adaptation (BLS-TDA) are proposed under this framework. Experiments on benchmark datasets show that our approach outperforms several existing domain adaptation methods while maintains high efficiency. <s> BIB016 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> Drastic variations in illumination across surveillance cameras make the person re-identification problem extremely challenging. Current large scale re-identification datasets have a significant number of training subjects, but lack diversity in lighting conditions. As a result, a trained model requires fine-tuning to become effective under an unseen illumination condition. To alleviate this problem, we introduce a new synthetic dataset that contains hundreds of illumination conditions. Specifically, we use 100 virtual humans illuminated with multiple HDR environment maps which accurately model realistic indoor and outdoor lighting. To achieve better accuracy in unseen illumination conditions we propose a novel domain adaptation technique that takes advantage of our synthetic data and performs fine-tuning in a completely unsupervised way. Our approach yields significantly higher accuracy than semi-supervised and unsupervised state-of-the-art methods, and is very competitive with supervised techniques. 
<s> BIB017 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Transfer learning <s> Unsupervised Domain Adaptation (UDA) aims to transfer domain knowledge from existing well-defined tasks to new ones where labels are unavailable. In the real-world applications, as the domain (task) discrepancies are usually uncontrollable, it is significantly motivated to match the feature distributions even if the domain discrepancies are disparate. Additionally, as no label is available in the target domain, how to successfully adapt the classifier from the source to the target domain still remains an open question. In this paper, we propose the Re-weighted Adversarial Adaptation Network (RAAN) to reduce the feature distribution divergence and adapt the classifier when domain discrepancies are disparate. Specifically, to alleviate the need of common supports in matching the feature distribution, we choose to minimize optimal transport (OT) based Earth-Mover (EM) distance and reformulate it to a minimax objective function. Utilizing this, RAAN can be trained in an end-to-end and adversarial manner. To further adapt the classifier, we propose to match the label distribution and embed it into the adversarial training. Finally, after extensive evaluation of our method using UDA datasets of varying difficulty, RAAN achieved the state-of-the-art results and outperformed other methods by a large margin when the domain shifts are disparate. <s> BIB018
Transfer learning. Transfer learning is a research topic that aims to improve learners in one target domain by training with more easily obtained data from source domains BIB009 . In other words, the domains, tasks, and distributions used in training and testing could be different. Therefore, transfer learning saves a great deal of time and cost in labeling data when encountering various scenarios of machine learning applications. According to the different situations between domains, source tasks and target tasks, transfer learning can be categorized into three subsettings: inductive transfer learning, transductive transfer learning and unsupervised transfer learning BIB001 . In this review, we put emphasis on the transductive transfer learning problem, where the training and testing tasks are drawn from different but related distributions, that is, domain adaptation BIB016 . Domain adaptation. As a special case of transfer learning, the source and target domains of domain adaptation belong to a single task and share the same feature spaces, while the feature distributions are different BIB001 . Domain adaptation leverages labeled data in the source domain to learn a classifier for the target domain, where the target domain is either fully unlabeled (unsupervised domain adaptation) or has few labeled samples (semi-supervised domain adaptation) BIB005 . Domain adaptation is promising for the transferability of perception tasks of autonomous systems, because it efficiently reduces the domain shift among different data sets, arising from synthetic versus real images BIB010 , different weather conditions BIB012 , different lighting conditions BIB017 , and different seasons BIB013 , etc. Domain adaptation for visual applications includes shallow and deep methods BIB014 .
There are some results studying shallow domain adaptation methods, which mainly include homogeneous domain adaptation and heterogeneous domain adaptation, according to whether the source data and target data share the same representation BIB003 , BIB002 , BIB011 , etc. Readers who want to learn more about shallow domain adaptation methods are referred to BIB014 , BIB006 and the references therein. In this review, we mainly focus on deep domain adaptation methods, including traditional DL BIB003 , BIB004 , BIB007 and adversarial learning BIB015 , BIB008 , BIB018 .
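As a concrete illustration of the distribution matching used by methods such as TCA BIB003 and DAN BIB007 , the maximum mean discrepancy (MMD) between source and target samples can be estimated directly from data. The sketch below is a minimal empirical estimate under illustrative assumptions: scalar features, a Gaussian kernel, and an arbitrary bandwidth, not the kernel embeddings used in the cited papers.

```python
import math

def gaussian_kernel(x, y, bandwidth=1.0):
    """RBF kernel between two scalar samples."""
    return math.exp(-((x - y) ** 2) / (2.0 * bandwidth ** 2))

def mmd_squared(xs, ys, bandwidth=1.0):
    """Biased empirical estimate of squared MMD between two sample sets:
    MMD^2 = E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')]."""
    k_xx = sum(gaussian_kernel(a, b, bandwidth) for a in xs for b in xs) / (len(xs) ** 2)
    k_yy = sum(gaussian_kernel(a, b, bandwidth) for a in ys for b in ys) / (len(ys) ** 2)
    k_xy = sum(gaussian_kernel(a, b, bandwidth) for a in xs for b in ys) / (len(xs) * len(ys))
    return k_xx - 2.0 * k_xy + k_yy

# Identical source/target samples give zero MMD; a domain shift gives a
# clearly positive value, which adaptation methods try to minimize.
same = mmd_squared([0.0, 0.1, -0.1], [0.0, 0.1, -0.1])
shifted = mmd_squared([0.0, 0.1, -0.1], [3.0, 3.1, 2.9])
```

In deep methods such as DAN, this quantity is computed on learned feature representations and added to the task loss, so that minimizing it pulls the source and target feature distributions together.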
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Adversarial learning <s> Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels. <s> BIB001 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Adversarial learning <s> This report summarizes the tutorial presented by the author at NIPS 2016 on generative adversarial networks (GANs). The tutorial describes: (1) Why generative modeling is a topic worth studying, (2) how generative models work, and how GANs compare to other generative models, (3) the details of how GANs work, (4) research frontiers in GANs, and (5) state-of-the-art image models that combine GANs with other methods. Finally, the tutorial contains three exercises for readers to complete, and the solutions to these exercises. <s> BIB002 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Adversarial learning <s> Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. 
However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task. <s> BIB003 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Adversarial learning <s> Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). 
Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach. <s> BIB004 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Adversarial learning <s> Generative adversarial networks (GANs) are a hot research topic recently. GANs have been widely studied since 2014, and a large number of algorithms have been proposed. However, there is few comprehensive study explaining the connections among different GANs variants, and how they have evolved. In this paper, we attempt to provide a review on various GANs methods from the perspectives of algorithms, theory, and applications. Firstly, the motivations, mathematical representations, and structure of most GANs algorithms are introduced in details. Furthermore, GANs have been combined with other machine learning algorithms for specific applications, such as semi-supervised learning, transfer learning, and reinforcement learning. This paper compares the commonalities and differences of these GANs methods. Secondly, theoretical issues related to GANs are investigated. Thirdly, typical applications of GANs in image processing and computer vision, natural language processing, music, speech and audio, medical field, and data science are illustrated. Finally, the future open research problems for GANs are pointed out. <s> BIB005
Generative adversarial networks. As a powerful learning-based method for computer vision tasks, adversarial learning not only improves accuracy, but also improves the transferability of a model by reducing the differences between the training and testing domain distributions BIB003 . Generative adversarial networks (GANs), as the name implies, are architectures that use adversarial learning for generative tasks BIB005 . The framework includes two models, a generator G and a discriminator D, as shown in Fig. 1 . G maps the prior noise distribution p z (z) to fake data G(z), and D outputs a single scalar that characterizes whether a sample comes from the training data x or the generated data G(z). G and D play against each other, promote each other, and finally reach a Nash equilibrium BIB002 . G and D play a two-player minimax game with the value function V (G, D): min G max D V (D, G) = E x∼p data (x) [log D(x)] + E z∼p z (z) [log(1 − D(G(z)))], (1) where V (G, D) is a binary cross-entropy function, which aims to let D classify real and fake samples. In Eq. (1), D tries to maximize its output, G tries to minimize its output, and the game ends at a saddle point BIB002 . Conditional generative adversarial networks. In the original generative model, since the prior comes from the noise distribution p z (z), the mode of the generated data cannot be controlled BIB002 . Mirza et al. BIB001 then proposed conditional generative adversarial networks (cGANs), in which some extra information y is fed to both the generator and the discriminator, such that the data generation process can be guided, as shown in Fig. 1 . Note that y can be class labels or any other kind of auxiliary information. Compared with Eq. (1), the objective function of cGAN is as follows: min G max D V (D, G) = E x∼p data (x) [log D(x|y)] + E z∼p z (z) [log(1 − D(G(z|y)))]. (2) Cycle-consistent adversarial network.
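The two-player minimax game of Eq. (1) can be sketched numerically. The one-parameter generator, the logistic discriminator, and the sample sizes below are illustrative assumptions, not the networks of BIB002 ; the sketch only evaluates the value function V (G, D) and shows the direction in which G pushes it.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w):
    # Toy logistic discriminator D(x) in (0, 1); w is its single parameter.
    return 1.0 / (1.0 + np.exp(-(w * x)))

def generator(z, theta):
    # Toy generator G(z): shift the prior noise by a learned offset theta.
    return z + theta

def value_fn(x_real, z, w, theta):
    # V(G, D) = E_x[log D(x)] + E_z[log(1 - D(G(z)))], as in Eq. (1).
    d_real = discriminator(x_real, w)
    d_fake = discriminator(generator(z, theta), w)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

x_real = rng.normal(loc=4.0, scale=1.0, size=1000)  # "data" distribution
z = rng.normal(size=1000)                           # prior noise p_z(z)

# D maximizes V; G minimizes it. Against this fixed D, shifting the fake
# samples onto the data distribution (theta = 4) drives V down.
print(value_fn(x_real, z, w=1.0, theta=0.0))  # fakes far from the data
print(value_fn(x_real, z, w=1.0, theta=4.0))  # fakes overlap the data
```

Against a fixed D, moving the fake distribution onto the data pushes D(G(z)) toward 1, so log(1 − D(G(z))) drops sharply; that is exactly the descent direction G follows in the minimax game.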
Unlike models tailored for specific tasks, like GANs and cGANs, the cycle-consistent adversarial network (CycleGAN) uses a unified framework for various image tasks, which makes the framework simple and effective BIB004 . Zhu et al. BIB004 proposed CycleGAN to learn image translation between the source domain X and the target domain Y with unpaired training examples {x i } N i=1 ⊂ X and {y j } M j=1 ⊂ Y , where N and M are the total numbers of samples in the source and target domains, as shown in Fig. 1 . The framework includes two generators G : X → Y and F : Y → X, and two discriminators D X and D Y , where D X distinguishes between images x and translated images F (y); similarly, D Y distinguishes between images y and translated images G(x). The output of the mapping G is ŷ = G(x), and the output of the mapping F is x̂ = F (y). They express the adversarial loss for the generator G : X → Y and the discriminator D Y as follows: L GAN (G, D Y , X, Y ) = E y∼p data (y) [log D Y (y)] + E x∼p data (x) [log(1 − D Y (G(x)))]. (3) They similarly define the adversarial loss for the generator F : Y → X and the discriminator D X as L GAN (F, D X , Y, X). Based on the adversarial loss, they proposed a cycle consistency loss to encourage F (G(x)) ≈ x and G(F (y)) ≈ y. The cycle consistency loss is expressed as: L cyc (G, F ) = E x∼p data (x) [∥F (G(x)) − x∥ 1 ] + E y∼p data (y) [∥G(F (y)) − y∥ 1 ]. (4) The full objective of CycleGAN is: L(G, F, D X , D Y ) = L GAN (G, D Y , X, Y ) + L GAN (F, D X , Y, X) + λL cyc (G, F ), (5) where λ is a hyperparameter used to control the relative importance of the adversarial loss and the cycle consistency loss. As a powerful generative model, many variants of GANs have been presented by modifying loss functions or network architectures, and they have been used for various computer vision tasks (Fig. 1 : (b) conditional generative adversarial networks BIB001 ; (c) cycle-consistent adversarial networks BIB004 ). In this review, we mainly focus on the problem of scene transfer and task transfer in autonomous systems using GANs, including image style transfer, image super-resolution, image denoising/dehazing/rain removal, semantic segmentation, depth estimation, and person re-identification.
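The cycle consistency loss can be sketched on toy 1-D "images". The linear mappings G and F below are illustrative assumptions standing in for CycleGAN's generator networks BIB004 ; when F inverts G (and vice versa), the loss vanishes, which is exactly what the constraint F (G(x)) ≈ x and G(F (y)) ≈ y rewards.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "domains": Y is a rescaled/shifted version of X.
def G(x):
    # Assumed forward mapping G: X -> Y.
    return 2.0 * x + 1.0

def F(y):
    # Assumed inverse mapping F: Y -> X.
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x, y):
    # L_cyc(G, F) = E_x[||F(G(x)) - x||_1] + E_y[||G(F(y)) - y||_1]
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

x = rng.normal(size=500)               # unpaired samples from domain X
y = 2.0 * rng.normal(size=500) + 1.0   # unpaired samples from domain Y

# F and G invert each other here, so the loss is approximately zero.
print(cycle_consistency_loss(x, y))
```

In the full objective, this term is added to the two adversarial losses with weight λ; the adversarial terms alone would accept any mapping whose outputs merely look like the target domain, and the cycle term is what ties each translated image back to its input.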
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word ``reinforcement.'' The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning. <s> BIB001 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. 
Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning. <s> BIB002 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> Reinforcement learning (RL) uses sequential experience with situations (“states”) and outcomes to assess actions. Whereas model-free RL uses this experience directly, in the form of a reward prediction error (RPE), model-based RL uses it indirectly, building a model of the state transition and outcome structure of the environment, and evaluating actions by searching this model. A state prediction error (SPE) plays a central role, reporting discrepancies between the current model and the observed state transitions. Using functional magnetic resonance imaging in humans solving a probabilistic Markov decision task, we found the neural signature of an SPE in the intraparietal sulcus and lateral prefrontal cortex, in addition to the previously well-characterized RPE in the ventral striatum. This finding supports the existence of two unique forms of learning signal in humans, which may form the basis of distinct computational strategies for guiding behavior. <s> BIB003 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> In this paper, we propose a novel policy iteration method, called dynamic policy programming (DPP), to estimate the optimal policy in the infinite-horizon Markov decision processes. We prove the finite-iteration and asymptotic l\infty-norm performance-loss bounds for DPP in the presence of approximation/estimation error. 
The bounds are expressed in terms of the l\infty-norm of the average accumulated error as opposed to the l\infty-norm of the error in the case of the standard approximate value iteration (AVI) and the approximate policy iteration (API). This suggests that DPP can achieve a better performance than AVI and API since it averages out the simulation noise caused by Monte-Carlo sampling throughout the learning process. We examine these theoretical results numerically by comparing the performance of the approximate variants of DPP with existing reinforcement learning (RL) methods on different problem domains. Our results show that, in all cases, DPP-based algorithms outperform other RL methods by a wide margin. <s> BIB004 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them. <s> BIB005 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately?
To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a partially observed guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods. <s> BIB006 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs. <s> BIB007 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> While deep learning has had significant successes in computer vision thanks to the abundance of visual data, collecting sufficiently large real-world datasets for robot learning can be costly. 
To increase the practicality of these techniques on real robots, we propose a modular deep reinforcement learning method capable of transferring models trained in simulation to a real-world robotic task. We introduce a bottleneck between perception and control, enabling the networks to be trained independently, but then merged and fine-tuned in an end-to-end manner to further improve hand-eye coordination. On a canonical, planar visually-guided robot reaching task a fine-tuned accuracy of 1.6 pixels is achieved, a significant improvement over naive transfer (17.5 pixels), showing the potential for more complicated and broader applications. Our method provides a technique for more efficient learning and transfer of visuo-motor policies for real robotic systems without relying entirely on large real-world robot datasets. <s> BIB008 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks to bootstrap learning. In particular we consider jointly learning the goal-driven reinforcement learning problem with an unsupervised depth prediction task and a self-supervised loop closure classification task. Using this approach we can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, that show that the agent implicitly learns key navigation abilities, with only sparse rewards and without direct supervision. 
<s> BIB009 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep $Q$-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field. <s> BIB010 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new target goals, and (2) data inefficiency i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to the task of target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows to better generalize. 
To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and across scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment. The supplementary video can be accessed at the following link: this https URL <s> BIB011 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> Landing an unmanned aerial vehicle (UAV) on a ground marker is an open problem despite the effort of the research community. Previous attempts mostly focused on the analysis of hand-crafted geometric features and the use of external sensors in order to allow the vehicle to approach the land-pad. In this article, we propose a method based on deep reinforcement learning that only requires low-resolution images taken from a down-looking camera in order to identify the position of the marker and land the UAV on it. The proposed approach is based on a hierarchy of Deep Q-Networks (DQNs) used as high-level control policy for the navigation toward the marker. We implemented different technical solutions, such as the combination of vanilla and double DQNs, and a partitioned buffer replay. Using domain randomization we trained the vehicle on uniform textures and we tested it on a large variety of simulated and real-world environments. The overall performance is comparable with a state-of-the-art algorithm and human pilots.
<s> BIB012 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> During the 60s and 70s, AI researchers explored intuitions about intelligence by writing programs that displayed intelligent behavior. Many good ideas came out from this work but programs written by hand were not robust or general. After the 80s, research increasingly shifted to the development of learners capable of inferring behavior and functions from experience and data, and solvers capable of tackling well-defined but intractable models like SAT, classical planning, Bayesian networks, and POMDPs. The learning approach has achieved considerable success but results in black boxes that do not have the flexibility, transparency, and generality of their model-based counterparts. Model-based approaches, on the other hand, require models and scalable algorithms. Model-free learners and model-based solvers have close parallels with Systems 1 and 2 in current theories of the human mind: the first, a fast, opaque, and inflexible intuitive mind; the second, a slow, transparent, and flexible analytical mind. In this paper, I review developments in AI and draw on these theories to discuss the gap between model-free learners and model-based solvers, a gap that needs to be bridged in order to have intelligent systems that are robust and general. <s> BIB013 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> Recent model-free reinforcement learning algorithms have proposed incorporating learned dynamics models as a source of additional data with the intention of reducing sample complexity. Such methods hold the promise of incorporating imagined data coupled with a notion of model uncertainty to accelerate the learning of continuous control tasks. Unfortunately, they rely on heuristics that limit usage of the dynamics model. 
We present model-based value expansion, which controls for uncertainty in the model by only allowing imagination to fixed depth. By enabling wider use of learned dynamics models within a model-free reinforcement learning algorithm, we improve value estimation, which, in turn, reduces the sample complexity of learning. <s> BIB014 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Reinforcement learning <s> Enabling robots to autonomously navigate complex environments is essential for real-world deployment. Prior methods approach this problem by having the robot maintain an internal map of the world, and then use a localization and planning method to navigate through the internal map. However, these approaches often include a variety of assumptions, are computationally intensive, and do not learn from failures. In contrast, learning-based methods improve as the robot acts in the environment, but are difficult to deploy in the real-world due to their high sample complexity. To address the need to learn complex policies with few samples, we propose a generalized computation graph that subsumes value-based model-free methods and model-based methods, with specific instantiations interpolating between model-free and model-based. We then instantiate this graph to form a navigation model that learns from raw images and is sample efficient. Our simulated car experiments explore the design decisions of our navigation model, and show our approach outperforms single-step and $N$-step double Q-learning. We also evaluate our approach on a real-world RC car and show it can learn to navigate through a complex indoor environment with a few hours of fully autonomous, self-supervised training. Videos of the experiments and code can be found at github.com/gkahn13/gcg <s> BIB015 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. 
Reinforcement learning <s> Skilled robotic manipulation benefits from complex synergies between non-prehensile (e.g. pushing) and prehensile (e.g. grasping) actions: pushing can help rearrange cluttered objects to make space for arms and fingers; likewise, grasping can help displace objects to make pushing movements more precise and collision-free. In this work, we demonstrate that it is possible to discover and learn these synergies from scratch through model-free deep reinforcement learning. Our method involves training two fully convolutional networks that map from visual observations to actions: one infers the utility of pushes for a dense pixel-wise sampling of end effector orientations and locations, while the other does the same for grasping. Both networks are trained jointly in a Q-learning framework and are entirely self-supervised by trial and error, where rewards are provided from successful grasps. In this way, our policy learns pushing motions that enable future grasps, while learning grasps that can leverage past pushes. During picking experiments in both simulation and real-world scenarios, we find that our system quickly learns complex behaviors amid challenging cases of clutter, and achieves better grasping success rates and picking efficiencies than baseline alternatives after only a few hours of training. We further demonstrate that our method is capable of generalizing to novel objects. Qualitative results (videos), code, pre-trained models, and simulation environments are available at http://vpg.cs.princeton.edu <s> BIB016
Reinforcement learning (RL) is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment BIB001 . In the RL framework, an agent interacts with the environment and chooses an action in each state so as to maximize its long-term reward BIB002 . When the given environment changes or training data are insufficient, RL methods often need to train the model from scratch, which is inefficient and inaccurate. RL algorithms can be classified into two kinds: model-based and model-free algorithms BIB013 . Model-based RL learns a transition model that allows the environment to be simulated without directly interacting with it BIB010 . Model-based methods include guided policy search (GPS) BIB006 and model-based value expansion (MBVE) BIB014 , among others. In contrast, model-free RL uses the experience of states and outcomes directly to generate actions BIB003 . Model-free methods include the deep Q network (DQN) BIB005 , the deep deterministic policy gradient (DDPG) method BIB007 , the dynamic policy programming (DPP) method BIB004 and the asynchronous advantage actor-critic (A3C) method BIB011 , among others. Model-free algorithms can learn complex tasks but tend to be sample-inefficient, while model-based algorithms are more sample-efficient but usually have difficulty scaling to complicated tasks BIB015 . As RL is applied more broadly, these limitations become pressing: model-based algorithms struggle with more complex tasks, while model-free algorithms demand ever more training data. RL methods are therefore limited when generalizing to different tasks and domains BIB015 .
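The model-free idea can be illustrated with a minimal sketch: tabular Q-learning on a toy deterministic chain MDP. The environment, hyperparameters, and episode count below are illustrative assumptions; DQN BIB005 follows the same update rule but replaces the table with a deep network trained on raw pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deterministic chain MDP: states 0..4, actions {0: left, 1: right};
# reaching state 4 yields reward 1 and ends the episode.
n_states, n_actions = 5, 2
alpha, gamma, eps = 0.5, 0.9, 0.3

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    done = (s2 == n_states - 1)
    return s2, (1.0 if done else 0.0), done

Q = np.zeros((n_states, n_actions))
for _ in range(500):
    s = int(rng.integers(n_states - 1))   # random non-terminal start state
    done = False
    while not done:
        # epsilon-greedy action selection: explore with probability eps
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Model-free update: learn directly from the sampled transition,
        # with no learned model of the environment's dynamics.
        target = r + gamma * (0.0 if done else float(np.max(Q[s2])))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

policy = [int(np.argmax(Q[s])) for s in range(n_states - 1)]
print(policy)  # the greedy policy should move right everywhere: [1, 1, 1, 1]
```

A model-based method such as GPS BIB006 or MBVE BIB014 would instead learn (or exploit) the `step` function itself and plan or imagine rollouts through it, which is precisely the sample-efficiency/scalability trade-off discussed above.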
In this review, we mainly focus on modifications to RL methods, such as amending the network structure BIB008 , BIB012 and optimizing the training procedure BIB009 , BIB016 , that equip a model to learn new tasks accurately within the same domain or transferably across domains, thereby enhancing transferability.
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Meta-learning <s> Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O. 1. Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms. <s> BIB001 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Meta-learning <s> We present a method for training a similarity metric from data. The method can be used for recognition or verification applications where the number of categories is very large and not known during training, and where the number of training samples for a single category is very small. The idea is to learn a function that maps input patterns into a target space such that the L/sub 1/ norm in the target space approximates the "semantic" distance in the input space. The method is applied to a face verification task. 
The learning process minimizes a discriminative loss function that drives the similarity metric to be small for pairs of faces from the same person, and large for pairs from different persons. The mapping from raw to the target space is a convolutional network whose architecture is designed for robustness to geometric distortions. The system is tested on the Purdue/AR face database which has a very high degree of variability in the pose, lighting, expression, position, and artificial occlusions such as dark glasses and obscuring scarves. <s> BIB002 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Meta-learning <s> During the last decade, the data sizes have grown faster than the speed of processors. In this context, the capabilities of statistical machine learning methods is limited by the computing time rather than the sample size. A more precise analysis uncovers qualitatively different tradeoffs for the case of small-scale and large-scale learning problems. The large-scale case involves the computational complexity of the underlying optimization algorithm in non-trivial ways. Unlikely optimization algorithms such as stochastic gradient descent show amazing performance for large-scale problems. In particular, second order stochastic gradient and averaged stochastic gradient are asymptotically efficient after a single pass on the training set. <s> BIB003 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Meta-learning <s> The metric learning problem is concerned with learning a distance function tuned to a particular task, and has been shown to be useful when used in conjunction with nearest-neighbor methods and other techniques that rely on distances or similarities. This survey presents an overview of existing research in metric learning, including recent progress on scaling to high-dimensional feature spaces and to data sets with an extremely large number of data points. 
A goal of the survey is to present as unified as possible a framework under which existing research on metric learning can be cast. The first part of the survey focuses on linear metric learning approaches, mainly concentrating on the class of Mahalanobis distance learning methods. We then discuss nonlinear metric learning approaches, focusing on the connections between the nonlinear and linear approaches. Finally, we discuss extensions of metric learning, as well as applications to a variety of problems in computer vision, text analysis, program analysis, and multimedia. Full text available at: http://dx.doi.org/10.1561/2200000019 <s> BIB004 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Meta-learning <s> Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank. <s> BIB005 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. 
Meta-learning <s> We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies. <s> BIB006 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Meta-learning <s> Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. 
Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning. <s> BIB007 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Meta-learning <s> The ability to identify and temporally segment fine-grained human actions throughout a video is crucial for robotics, surveillance, education, and beyond. Typical approaches decouple this problem by first extracting local spatiotemporal features from video frames and then feeding them into a temporal classifier that captures high-level temporal patterns. We describe a class of temporal models, which we call Temporal Convolutional Networks (TCNs), that use a hierarchy of temporal convolutions to perform fine-grained action segmentation or detection. Our Encoder-Decoder TCN uses pooling and upsampling to efficiently capture long-range temporal patterns whereas our Dilated TCN uses dilated convolutions. We show that TCNs are capable of capturing action compositions, segment durations, and long-range dependencies, and are over a magnitude faster to train than competing LSTM-based Recurrent Neural Networks. We apply these models to three challenging fine-grained datasets and show large improvements over the state of the art. <s> BIB008 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Meta-learning <s> Imitation learning has been commonly applied to solve different tasks in isolation. This usually requires either careful feature engineering, or a significant number of samples. 
This is far from what we desire: ideally, robots should be able to learn from very few demonstrations of any given task, and instantly generalize to new situations of the same task, without requiring task-specific engineering. In this paper, we propose a meta-learning framework for achieving such capability, which we call one-shot imitation learning. Specifically, we consider the setting where there is a very large set of tasks, and each task has many instantiations. For example, a task could be to stack all blocks on a table into a single tower, another task could be to place all blocks on a table into two-block towers, etc. In each case, different instances of the task would consist of different sets of blocks with different initial states. At training time, our algorithm is presented with pairs of demonstrations for a subset of all tasks. A neural net is trained that takes as input one demonstration and the current state (which initially is the initial state of the other demonstration of the pair), and outputs an action with the goal that the resulting sequence of states and actions matches as closely as possible with the second demonstration. At test time, a demonstration of a single instance of a new task is presented, and the neural net is expected to perform well on new instances of this new task. The use of soft attention allows the model to generalize to conditions and tasks unseen in the training data. We anticipate that by training this model on a much greater variety of tasks and settings, we will obtain a general system that can turn any demonstrations into robust policies that can accomplish an overwhelming variety of tasks. Videos available at this https URL . <s> BIB009 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D.
Meta-learning <s> In order for a robot to be a generalist that can perform a wide range of jobs, it must be able to acquire a wide variety of skills quickly and efficiently in complex unstructured environments. High-capacity models such as deep neural networks can enable a robot to represent complex skills, but learning each skill from scratch then becomes infeasible. In this work, we present a meta-imitation learning method that enables a robot to learn how to learn more efficiently, allowing it to acquire new skills from just a single demonstration. Unlike prior methods for one-shot imitation, our method can scale to raw pixel inputs and requires data from significantly fewer prior tasks for effective learning of new skills. Our experiments on both simulated and real robot platforms demonstrate the ability to learn new tasks, end-to-end, from a single visual demonstration. <s> BIB010 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Meta-learning <s> Meta-learning, or learning to learn, is the science of systematically observing how different machine learning approaches perform on a wide range of learning tasks, and then learning from this experience, or meta-data, to learn new tasks much faster than otherwise possible. Not only does this dramatically speed up and improve the design of machine learning pipelines or neural architectures, it also allows us to replace hand-engineered algorithms with novel approaches learned in a data-driven way. In this chapter, we provide an overview of the state of the art in this fascinating and continuously evolving field. <s> BIB011 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Meta-learning <s> We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only few examples from each. 
Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, a RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple approach provides a unified and effective approach for both of these two tasks. <s> BIB012 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Meta-learning <s> Gradient-based meta-learning has been shown to be expressive enough to approximate any learning algorithm. While previous such methods have been successful in meta-learning tasks, they resort to simple gradient descent during meta-testing. Our primary contribution is the {\em MT-net}, which enables the meta-learner to learn on each layer's activation space a subspace that the task-specific learner performs gradient descent on. Additionally, a task-specific learner of an {\em MT-net} performs gradient descent with respect to a meta-learned distance metric, which warps the activation space to be more sensitive to task identity. We demonstrate that the dimension of this learned subspace reflects the complexity of the task-specific learner's adaptation task, and also that our model is less sensitive to the choice of initial learning rates than previous gradient-based meta-learning methods. Our method achieves state-of-the-art or comparable performance on few-shot classification and regression tasks. <s> BIB013 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. 
Meta-learning <s> Machine learning has been highly successful in data-intensive applications but is often hampered when the data set is small. Recently, Few-Shot Learning (FSL) is proposed to tackle this problem. Using prior knowledge, FSL can rapidly generalize to new tasks containing only a few samples with supervised information. In this paper, we conduct a thorough survey to fully understand FSL. Starting from a formal definition of FSL, we distinguish FSL from several relevant machine learning problems. We then point out that the core issue in FSL is that the empirical risk minimized is unreliable. Based on how prior knowledge can be used to handle this core issue, we categorize FSL methods from three perspectives: (i) data, which uses prior knowledge to augment the supervised experience; (ii) model, which uses prior knowledge to reduce the size of the hypothesis space; and (iii) algorithm, which uses prior knowledge to alter the search for the best hypothesis in the given hypothesis space. With this taxonomy, we review and discuss the pros and cons of each category. Promising directions, in the aspects of the FSL problem setups, techniques, applications and theories, are also proposed to provide insights for future research. <s> BIB014 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Meta-learning <s> Deep reinforcement learning algorithms require large amounts of experience to learn an individual task. While in principle meta-reinforcement learning (meta-RL) algorithms enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality. Current methods rely heavily on on-policy experience, limiting their sample efficiency. The also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness in sparse reward problems. 
In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control. In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience. This probabilistic interpretation enables posterior sampling for structured and efficient exploration. We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency. Our method outperforms prior algorithms in sample efficiency by 20-100X as well as in asymptotic performance on several meta-RL benchmarks. <s> BIB015
Meta-learning, or "learning to learn", uses previous knowledge and experience to guide the learning of new tasks, equipping the model to learn across domains BIB011 . The goal of meta-learning is to train a model that can quickly adapt to a new task using only a few datapoints and training iterations BIB006 . Like transfer learning, meta-learning improves the learner's generalization ability across multiple tasks. Unlike transfer learning, however, meta-learning focuses on the sampling of both data and tasks: models are trained by being exposed to a large number of tasks, which qualifies them to learn new cross-domain tasks from only a few examples when such tasks are encountered. Methods for meta-learning can be divided into three categories: recurrent models, metric learning and learning optimizers BIB007 . Recurrent models, such as the long short-term memory (LSTM) BIB001 and the temporal convolution network (TCN) BIB008 , ingest the dataset sequentially and then process new inputs from the task. LSTM BIB001 processes data sequentially and figures out its own learning strategy from scratch, while TCN BIB008 uses convolutional structures to capture long-range temporal patterns, yielding a framework that is simpler and more accurate than LSTM. Metric learning calculates the similarity between two targets from different tasks: for a specific task, the input target is assigned to the category with the largest similarity under a metric distance function BIB004 . It has been widely used for few-shot learning , in which the data belong to a large number of categories, some categories are unknown at training time, and the training samples of each category are particularly scarce BIB014 . These characteristics are consistent with those of meta-learning.
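Metric-based classification as used in few-shot learning can be sketched in a few lines, in the style of a prototypical network: each class is summarized by the mean of its support embeddings, and a query is assigned to the class whose prototype is nearest under a Euclidean metric. The embedding vectors and class layout below are illustrative; a real model would learn the embedding function.

```python
import numpy as np

def prototypes(support_x, support_y, n_classes):
    # Class prototype: the mean embedding of the support examples of that class.
    return np.stack([support_x[support_y == c].mean(axis=0) for c in range(n_classes)])

def classify(query, protos):
    # Assign the query to the class whose prototype is nearest
    # under a (squared) Euclidean metric distance function.
    d = np.sum((protos - query) ** 2, axis=1)
    return int(np.argmin(d))

# Two classes with two support points each (embeddings are illustrative).
sx = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [0.8, 1.0]])
sy = np.array([0, 0, 1, 1])
protos = prototypes(sx, sy, 2)
print(classify(np.array([0.1, 0.1]), protos))  # 0
```

Because classification reduces to comparing distances, no parameters need updating when new classes appear at test time, which is what makes this family of methods attractive for few-shot settings.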
Four typical networks have been proposed for metric learning: the siamese network BIB002 , prototypical network BIB007 , matching network BIB005 and relation network BIB012 . In learning an optimizer, a meta-learner learns how to update the learner so that the learner can learn the task efficiently BIB013 ; this approach has been studied extensively to obtain better optimization of neural networks. RL BIB015 and imitation learning BIB009 can be combined with meta-learning to learn new policies effectively or adapt to new tasks quickly. Model-agnostic meta-learning (MAML) BIB006 is a representative and popular meta-learning optimization method that updates with stochastic gradient descent (SGD) BIB003 . It adapts quickly to new tasks because it makes no assumptions about the form of the model and introduces no extra parameters for meta-learning. MAML comprises base-model learners and a meta-learner: each base-model learner learns a specific task, and the meta-learner learns the average performance θ over multiple specific tasks, which serves as the initialization parameters for a new task BIB006 . As shown in Fig. 2 , the model is represented by a parametrized function f_θ with parameter θ. When adapting to a new task T_i drawn from a distribution over tasks p(T), the model's parameter θ is updated to θ_i by one or more gradient-descent steps on T_i. With L_{T_i} denoting the loss function for task T_i and the step size α regarded as a hyperparameter, a single gradient update on task T_i reads

θ_i = θ − α ∇_θ L_{T_i}(f_θ). (6)

The model parameters are then trained by optimizing the performance of the adapted model f_{θ_i} across tasks, corresponding to the following problem:

min_θ Σ_{T_i ∼ p(T)} L_{T_i}(f_{θ_i}) = Σ_{T_i ∼ p(T)} L_{T_i}(f_{θ − α ∇_θ L_{T_i}(f_θ)}). (7)

Fig. 2 : Diagram of the MAML algorithm, which optimizes for a representation θ that can quickly adapt to new tasks BIB006 .
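The inner gradient update and the meta-objective described above can be sketched with a first-order approximation on a toy family of 1-D regression tasks, where each task fits y = w·x for a task-specific w. The task family, linear model, and step sizes here are illustrative assumptions, not part of MAML itself; a full implementation would also differentiate through the inner update.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Each task T_i fits a 1-D linear map y = w * x for a task-specific w.
    return rng.uniform(0.5, 1.5)

def batch(w, n=10):
    x = rng.uniform(-1.0, 1.0, n)
    return x, w * x

def grad(theta, x, y):
    # d/d_theta of the MSE loss L_{T_i}(f_theta) = mean((theta * x - y)^2).
    return 2.0 * np.mean((theta * x - y) * x)

theta = 0.0              # meta-parameter optimized by the meta-objective, Eq. (7)
alpha, beta = 0.1, 0.05  # inner step size alpha, meta step size beta

for _ in range(2000):
    meta_grad = 0.0
    for _ in range(5):                                  # tasks T_i ~ p(T)
        w = sample_task()
        x, y = batch(w)
        theta_i = theta - alpha * grad(theta, x, y)     # inner update, Eq. (6)
        xq, yq = batch(w)                               # fresh data for the meta-objective
        meta_grad += grad(theta_i, xq, yq)              # first-order approximation of Eq. (7)
    theta -= beta * meta_grad / 5

print(round(theta, 2))  # settles near the center of the task distribution (~1.0)
```

After meta-training, θ sits where one inner gradient step adapts well to any task sampled from p(T), which is exactly the property Fig. 2 illustrates.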
When extending MAML to the imitation learning setting, the model's input o_t is the agent's observation sampled at time t, whereas the output a_t is the agent's action taken at time t. A demonstration trajectory is represented as τ := {o_1, a_1, . . . , o_T, a_T}, and a mean squared error loss is used as a function of the policy parameters φ:

L_{T_i}(f_φ) = Σ_{τ^(j) ∼ T_i} Σ_t ||f_φ(o_t^(j)) − a_t^(j)||_2^2. (8)

During meta-training, several demonstrations are sampled as training tasks. The demonstrations are used to compute θ_i for each task T_i by gradient descent with Eq. (6) and to compute the gradient of the meta-objective via Eq. (7) with the loss in Eq. (8) . During meta-testing, only a single demonstration of a new task T is presented and the model is updated with SGD, thereby acquiring a policy for that task BIB010 .
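The behavioral-cloning loss just described, a mean squared error between the policy's actions and the demonstrated actions along a trajectory, is easy to state as a function. The linear policy f_φ(o) = o @ φ below is a toy stand-in for a real network, and all names are illustrative.

```python
import numpy as np

def imitation_loss(phi, demo):
    # Mean squared error between the policy's actions f_phi(o_t) and the
    # demonstrated actions a_t along a trajectory tau = {o_1, a_1, ..., o_T, a_T}.
    obs, acts = demo                  # obs: (T, d_o), acts: (T, d_a)
    pred = obs @ phi                  # toy linear policy f_phi(o) = o @ phi
    return np.sum((pred - acts) ** 2) / len(obs)

# A policy that reproduces the demonstration exactly incurs zero loss.
obs = np.array([[1.0, 0.0], [0.0, 1.0]])
phi = np.array([[2.0, 0.0], [0.0, 3.0]])
demo = (obs, obs @ phi)
print(imitation_loss(phi, demo))  # 0.0
```

In the meta-imitation setting this loss plays the role of L_{T_i} in both the inner update and the meta-objective, with one demonstration per task at test time.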
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. The relationship between adversarial learning, RL and meta-learning <s> In this article, we propose and analyze a class of actor-critic algorithms. These are two-time-scale algorithms in which the critic uses temporal difference learning with a linearly parameterized approximation architecture, and the actor is updated in an approximate gradient direction, based on information provided by the critic. We show that the features for the critic should ideally span a subspace prescribed by the choice of parameterization of the actor. We study actor-critic algorithms for Markov decision processes with Polish state and action spaces. We state and prove two results regarding their convergence. <s> BIB001 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. The relationship between adversarial learning, RL and meta-learning <s> Objective—To evaluate the pharmacokinetics of a novel commercial formulation of ivermectin after administration to goats. Animals—6 healthy adult goats. Procedure—Ivermectin (200 μg/kg) was initially administered IV to each goat, and plasma samples were obtained for 36 days. After a washout period of 3 weeks, each goat received a novel commercial formulation of ivermectin (200 μg/kg) by SC injection. Plasma samples were then obtained for 42 days. Drug concentrations were quantified by use of high-performance liquid chromatography with fluorescence detection. Results—Pharmacokinetics of ivermectin after IV administration were best described by a 2-compartment open model; values for main compartmental variables included volume of distribution at a steady state (9.94 L/kg), clearance (1.54 L/kg/d), and area under the plasma concentration-time curve (AUC; 143 [ng•d]/mL). Values for the noncompartmental variables included mean residence time (7.37 days), AUC (153 [ng•d]/mL), and clearance (1.43 L/kg/d). After ... 
<s> BIB002 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. The relationship between adversarial learning, RL and meta-learning <s> Both generative adversarial networks (GAN) in unsupervised learning and actor-critic methods in reinforcement learning (RL) have gained a reputation for being difficult to optimize. Practitioners in both fields have amassed a large number of strategies to mitigate these instabilities and improve training. Here we show that GANs can be viewed as actor-critic methods in an environment where the actor cannot affect the reward. We review the strategies for stabilizing training for each class of models, both those that generalize between the two and those that are particular to that model. We also review a number of extensions to GANs and RL algorithms with even more complicated information flow. We hope that by highlighting this formal connection we will encourage both GAN and RL communities to develop general, scalable, and stable algorithms for multilevel optimization with deep networks, and to draw inspiration across communities. <s> BIB003 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. The relationship between adversarial learning, RL and meta-learning <s> Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. 
We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments. <s> BIB004 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. The relationship between adversarial learning, RL and meta-learning <s> In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience. <s> BIB005 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. 
The relationship between adversarial learning, RL and meta-learning <s> The goal of imitation learning is to mimic expert behavior without access to an explicit reward signal. Expert demonstrations provided by humans, however, often show significant variability due to latent factors that are typically not explicitly modeled. In this paper, we propose a new algorithm that can infer the latent structure of expert demonstrations in an unsupervised way. Our method, built on top of Generative Adversarial Imitation Learning, can not only imitate complex behaviors, but also learn interpretable and meaningful representations of complex behavioral data, including visual demonstrations. In the driving domain, we show that a model learned from human demonstrations is able to both accurately reproduce a variety of behaviors and accurately anticipate human actions using raw visual inputs. Compared with various baselines, our method can better capture the latent structure underlying expert demonstrations, often recovering semantically meaningful factors of variation in the data. <s> BIB006 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. The relationship between adversarial learning, RL and meta-learning <s> We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. 
We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies. <s> BIB007 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. The relationship between adversarial learning, RL and meta-learning <s> Advances in deep generative networks have led to impressive results in recent years. Nevertheless, such models can often waste their capacity on the minutiae of datasets, presumably due to weak inductive biases in their decoders. This is where graphics engines may come in handy since they abstract away low-level details and represent images as high-level programs. Current methods that combine deep learning and renderers are limited by hand-crafted likelihood or distance functions, a need for large amounts of supervision, or difficulties in scaling their inference algorithms to richer datasets. To mitigate these issues, we present SPIRAL, an adversarially trained agent that generates a program which is executed by a graphics engine to interpret and sample images. The goal of this agent is to fool a discriminator network that distinguishes between real and rendered data, trained with a distributed reinforcement learning setup without any supervision. A surprising finding is that using the discriminator's output as a reward signal is the key to allow the agent to make meaningful progress at matching the desired output rendering. To the best of our knowledge, this is the first demonstration of an end-to-end, unsupervised and adversarial inverse graphics agent on challenging real world (MNIST, Omniglot, CelebA) and synthetic 3D datasets. <s> BIB008 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. 
The relationship between adversarial learning, RL and meta-learning <s> This paper considers metalearning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously unseen task sampled from this distribution. We present a remarkably simple metalearning algorithm called Reptile, which learns a parameter initialization that can be fine-tuned quickly on a new task. Reptile works by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task. Unlike MAML, which also learns an initialization, Reptile doesn't require differentiating through the optimization process, making it more suitable for optimization problems where many update steps are required. We show that Reptile performs well on some well-established benchmarks for few-shot classification. We provide some theoretical analysis aimed at understanding why Reptile works. <s> BIB009 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. The relationship between adversarial learning, RL and meta-learning <s> Exploration is a fundamental challenge in reinforcement learning (RL). Many of the current exploration methods for deep RL use task-agnostic objectives, such as information gain or bonuses based on state visitation. However, many practical applications of RL involve learning more than a single task, and prior tasks can be used to inform how exploration should be performed in new tasks. In this work, we explore how prior tasks can inform an agent about how to explore effectively in new situations. We introduce a novel gradient-based fast adaptation algorithm -- model agnostic exploration with structured noise (MAESN) -- to learn exploration strategies from prior experience. 
The prior experience is used both to initialize a policy and to acquire a latent exploration space that can inject structured stochasticity into a policy, producing exploration strategies that are informed by prior knowledge and are more effective than random action-space noise. We show that MAESN is more effective at learning exploration strategies when compared to prior meta-RL methods, RL without learned exploration strategies, and task-agnostic exploration methods. We evaluate our method on a variety of simulated tasks: locomotion with a wheeled robot, locomotion with a quadrupedal walker, and object manipulation. <s> BIB010 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. The relationship between adversarial learning, RL and meta-learning <s> We propose a metalearning approach for learning gradient-based reinforcement learning (RL) algorithms. The idea is to evolve a differentiable loss function, such that an agent, which optimizes its policy to minimize this loss, will achieve high rewards. The loss is parametrized via temporal convolutions over the agent's experience. Because this loss is highly flexible in its ability to take into account the agent's history, it enables fast task learning. Empirical results show that our evolved policy gradient algorithm (EPG) achieves faster learning on several randomized environments compared to an off-the-shelf policy gradient method. We also demonstrate that EPG's learned loss can generalize to out-of-distribution test time tasks, and exhibits qualitatively different behavior from other popular metalearning algorithms. <s> BIB011 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. The relationship between adversarial learning, RL and meta-learning <s> We present RL-GAN-Net, where a reinforcement learning (RL) agent provides fast and robust control of a generative adversarial network (GAN). 
Our framework is applied to point cloud shape completion that converts noisy, partial point cloud data into a high-fidelity completed shape by controlling the GAN. While a GAN is unstable and hard to train, we circumvent the problem by (1) training the GAN on the latent space representation whose dimension is reduced compared to the raw point cloud input and (2) using an RL agent to find the correct input to the GAN to generate the latent space representation of the shape that best fits the current input of incomplete point cloud. The suggested pipeline robustly completes point cloud with large missing regions. To the best of our knowledge, this is the first attempt to train an RL agent to control the GAN, which effectively learns the highly nonlinear mapping from the input noise of the GAN to the latent space of point cloud. The RL agent replaces the need for complex optimization and consequently makes our technique real time. Additionally, we demonstrate that our pipelines can be used to enhance the classification accuracy of point cloud with missing data. <s> BIB012
RL [109] describes and solves the problem of agents learning policies that maximize returns or achieve specific goals through interaction with the environment. Pfau et al. BIB003 discussed the connection between GANs and actor-critic (AC) methods. AC is a class of RL methods that learns the policy and the value function simultaneously: the actor network chooses an appropriate action in a continuous action space, while the critic network performs a single-step update, which improves learning efficiency BIB001 . Pfau et al. argued that GANs can be viewed as an AC approach in an environment where the actor cannot influence the reward. RL and GANs have been integrated for various tasks, such as real-time point cloud shape completion BIB012 and image synthesis BIB008 . In RL, inferring the cost function that explains the underlying behavior is called inverse reinforcement learning (IRL) BIB002 . The policy distribution in IRL can be regarded as the generator's data distribution in GANs, and the reward in IRL can be regarded as the discriminator. However, IRL learns a cost function to explain expert behavior but cannot directly tell the learner how to act, which leads to high running costs. Ho et al. BIB004 proposed generative adversarial imitation learning (GAIL), which combines GANs with imitation learning by employing GANs to fit the distribution of states and actions that defines expert behavior. GAIL significantly improves performance on large-scale, high-dimensional planning problems BIB006 . Introducing meta-learning into RL is called meta-RL , which enables a model to solve new problems more efficiently by exploiting experience from prior tasks. A meta-RL model is trained over a distribution of different but related tasks and, at test time, is able to learn to solve a new task quickly using the RL procedure it has learned BIB005 .
Several meta-RL algorithms exploit past experience to achieve good performance on new tasks: MAML BIB007 and Reptile BIB009 are representative methods for updating model parameters and optimizing model weights; MAESN (model-agnostic exploration with structured noise) BIB010 learns structured action noise from prior experience; and EPG (evolved policy gradient) BIB011 defines the policy-gradient loss function as a temporal convolution over the agent's past experience. Moreover, when training data are unlabeled, unsupervised meta-RL methods acquire accelerated RL procedures without the need for manual task design, such as data collection and labeling. Both supervised and unsupervised meta-RL can therefore transfer information from previous tasks to new tasks across domains.
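The correspondence drawn above between the IRL reward and the GAN discriminator can be made concrete: in GAIL-style training, the policy's reward is derived from a discriminator that scores state-action pairs. A minimal sketch with a logistic discriminator follows; the feature vector and weights are illustrative assumptions, and a real implementation would train the discriminator against expert and policy samples.

```python
import numpy as np

def discriminator(sa, w):
    # D(s, a): estimated probability that the state-action pair comes from the expert.
    return 1.0 / (1.0 + np.exp(-sa @ w))

def surrogate_reward(sa, w):
    # GAIL-style reward for the policy: large when the discriminator
    # mistakes a policy sample for expert behavior.
    return -np.log(1.0 - discriminator(sa, w) + 1e-8)

# With uninformative weights the discriminator outputs 0.5 everywhere,
# so every state-action pair earns the same baseline reward of -log(0.5).
sa = np.array([0.3, -0.7])
w = np.zeros(2)
print(discriminator(sa, w))  # 0.5
```

The policy is then optimized with any standard RL algorithm against this reward, while the discriminator is updated adversarially, mirroring the generator-discriminator game in a GAN.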
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> We present a simple image-based method of generating novel visual appearance in which a new image is synthesized by stitching together small patches of existing images. We call this process image quilting. First, we use quilting as a fast and very simple texture synthesis algorithm which produces surprisingly good results for a wide range of textures. Second, we extend the algorithm to perform texture transfer — rendering an object with a texture taken from a different object. More generally, we demonstrate how an image can be re-rendered in the style of a different image. The method works directly on the images and does not require 3D information. <s> BIB001 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> This paper describes a new framework for processing images by example, called “image analogies.” The framework involves two stages: a design phase, in which a pair of images, with one image purported to be a “filtered” version of the other, is presented as “training data”; and an application phase, in which the learned filter is applied to some new target image in order to create an “analogous” filtered result. Image analogies are based on a simple multi-scale autoregression, inspired primarily by recent results in texture synthesis. 
By choosing different types of source image pairs as input, the framework supports a wide variety of “image filter” effects, including traditional image filters, such as blurring or embossing; improved texture synthesis, in which some textures are synthesized with higher quality than by previous approaches; super-resolution, in which a higher-resolution image is inferred from a low-resolution source; texture transfer, in which images are “texturized” with some arbitrary source texture; artistic filters, in which various drawing and painting styles are synthesized based on scanned real-world examples; and texture-by-numbers, in which realistic scenes, composed of a variety of textures, are created using a simple painting interface. <s> BIB002 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery. 
<s> BIB003 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation. <s> BIB004 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time. 
Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results. <s> BIB005 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either. <s> BIB006 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. 
Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains. <s> BIB007 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> Appearance changes due to weather and seasonal conditions represent a strong impediment to the robust implementation of machine learning systems in outdoor robotics. While supervised learning optimises a model for the training domain, it will deliver degraded performance in application domains that underlie distributional shifts caused by these changes. Traditionally, this problem has been addressed via the collection of labelled data in multiple domains or by imposing priors on the type of shift between both domains. We frame the problem in the context of unsupervised domain adaptation and develop a framework for applying adversarial techniques to adapt popular, state-of-the-art network architectures with the additional objective to align features across domains. Moreover, as adversarial training is notoriously unstable, we first perform an extensive ablation study, adapting many techniques known to stabilise generative adversarial networks, and evaluate on a surrogate classification task with the same appearance change. 
The distilled insights are applied to the problem of free-space segmentation for motion planning in autonomous driving. <s> BIB008 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach. <s> BIB009 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations when given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). 
Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity. <s> BIB010 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation, we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data. <s> BIB011 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> Style transfer usually refers to the task of applying color and texture information from a specific style image to a given content image while preserving the structure of the latter. 
Here we tackle the more generic problem of semantic style transfer: given two unpaired collections of images, we aim to learn a mapping between the corpus-level style of each collection, while preserving semantic content shared across the two domains. We introduce XGAN ("Cross-GAN"), a dual adversarial autoencoder, which captures a shared representation of the common domain semantic content in an unsupervised way, while jointly learning the domain-to-domain image translations in both directions. We exploit ideas from the domain adaptation literature and define a semantic consistency loss which encourages the model to preserve semantics in the learned embedding space. We report promising qualitative results for the task of face-to-cartoon translation. The cartoon dataset we collected for this purpose will also be released as a new benchmark for semantic style transfer. <s> BIB012 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> Drastic variations in illumination across surveillance cameras make the person re-identification problem extremely challenging. Current large scale re-identification datasets have a significant number of training subjects, but lack diversity in lighting conditions. As a result, a trained model requires fine-tuning to become effective under an unseen illumination condition. To alleviate this problem, we introduce a new synthetic dataset that contains hundreds of illumination conditions. Specifically, we use 100 virtual humans illuminated with multiple HDR environment maps which accurately model realistic indoor and outdoor lighting. To achieve better accuracy in unseen illumination conditions we propose a novel domain adaptation technique that takes advantage of our synthetic data and performs fine-tuning in a completely unsupervised way. 
Our approach yields significantly higher accuracy than semi-supervised and unsupervised state-of-the-art methods, and is very competitive with supervised techniques. <s> BIB013 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> This paper presents the first attempt at stereoscopic neural style transfer, which responds to the emerging demand for 3D movies or AR/VR. We start with a careful examination of applying existing monocular style transfer methods to left and right views of stereoscopic images separately. This reveals that the original disparity consistency cannot be well preserved in the final stylization results, which causes 3D fatigue to the viewers. To address this issue, we incorporate a new disparity loss into the widely adopted style loss function by enforcing the bidirectional disparity constraint in non-occluded regions. For a practical real-time solution, we propose the first feed-forward network by jointly training a stylization sub-network and a disparity sub-network, and integrate them in a feature level middle domain. Our disparity sub-network is also the first end-to-end network for simultaneous bidirectional disparity and occlusion mask estimation. Finally, our network is effectively extended to stereoscopic videos, by considering both temporal coherence and disparity consistency. We will show that the proposed method clearly outperforms the baseline algorithms both quantitatively and qualitatively. <s> BIB014 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> Unsupervised image translation, which aims in translating two independent sets of images, is challenging in discovering the correct correspondences without paired data. Existing works build upon Generative Adversarial Network (GAN) such that the distribution of the translated images are indistinguishable from the distribution of the target set. 
However, such set-level constraints cannot learn the instance-level correspondences (e.g. aligned semantic parts in object configuration task). This limitation often results in false positives (e.g. geometric or semantic artifacts), and further leads to mode collapse problem. To address the above issues, we propose a novel framework for instance-level image translation by Deep Attention GAN (DA-GAN). Such a design enables DA-GAN to decompose the task of translating samples from two sets into translating instances in a highly-structured latent space. Specifically, we jointly learn a deep attention encoder, and the instancelevel correspondences could be consequently discovered through attending on the learned instance pairs. Therefore, the constraints could be exploited on both set-level and instance-level. Comparisons against several state-ofthe- arts demonstrate the superiority of our approach, and the broad application capability, e.g, pose morphing, data augmentation, etc., pushes the margin of domain translation problem. <s> BIB015 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> Unsupervised image-to-image translation has gained considerable attention due to the recent impressive progress based on generative adversarial networks (GANs). However, previous methods often fail in challenging cases, in particular, when an image has multiple target instances and a translation task involves significant changes in shape, e.g., translating pants to skirts in fashion images. To tackle the issues, we propose a novel method, coined instance-aware GAN (InstaGAN), that incorporates the instance information (e.g., object segmentation masks) and improves multi-instance transfiguration. The proposed method translates both an image and the corresponding set of instance attributes while maintaining the permutation invariance property of the instances. 
To this end, we introduce a context preserving loss that encourages the network to learn the identity function outside of target instances. We also propose a sequential mini-batch inference/training technique that handles multiple instances with a limited GPU memory and enhances the network to generalize better for multiple instances. Our comparative evaluation demonstrates the effectiveness of the proposed method on different image datasets, in particular, in the aforementioned challenging cases. Code and results are available in https://github.com/sangwoomo/instagan <s> BIB016 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> Deep convolutional neural networks have performed remarkably well on many Computer Vision tasks. However, these networks are heavily reliant on big data to avoid overfitting. Overfitting refers to the phenomenon when a network learns a function with very high variance such as to perfectly model the training data. Unfortunately, many application domains do not have access to big data, such as medical image analysis. This survey focuses on Data Augmentation, a data-space solution to the problem of limited data. Data Augmentation encompasses a suite of techniques that enhance the size and quality of training datasets such that better Deep Learning models can be built using them. The image augmentation algorithms discussed in this survey include geometric transformations, color space augmentations, kernel filters, mixing images, random erasing, feature space augmentation, adversarial training, generative adversarial networks, neural style transfer, and meta-learning. The application of augmentation methods based on GANs are heavily covered in this survey. In addition to augmentation techniques, this paper will briefly discuss other characteristics of Data Augmentation such as test-time augmentation, resolution impact, final dataset size, and curriculum learning. 
This survey will present existing methods for Data Augmentation, promising developments, and meta-level decisions for implementing Data Augmentation. Readers will understand how Data Augmentation can improve the performance of their models and expand limited datasets to take advantage of the capabilities of big data. <s> BIB017 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> A. Image style transfer <s> Unpaired Image-to-image Translation is a new rising and challenging vision problem that aims to learn a mapping between unaligned image pairs in diverse domains. Recent advances in this field like MUNIT and DRIT mainly focus on disentangling content and style/attribute from a given image first, then directly adopting the global style to guide the model to synthesize new domain images. However, this kind of approaches severely incurs contradiction if the target domain images are content-rich with multiple discrepant objects. In this paper, we present a simple yet effective instance-aware image-to-image translation approach (INIT), which employs the fine-grained local (instance) and global styles to the target image spatially. The proposed INIT exhibits three import advantages: (1) the instance-level objective loss can help learn a more accurate reconstruction and incorporate diverse attributes of objects; (2) the styles used for target domain of local/global areas are from corresponding spatial regions in source domain, which intuitively is a more reasonable mapping; (3) the joint training process can benefit both fine and coarse granularity and incorporates instance information to improve the quality of global translation. We also collect a large-scale benchmark for the new instance-level translation task. We observe that our synthetic images can even benefit real-world vision tasks like generic object detection. <s> BIB018
Images can be transferred between different styles, which benefits the perception and decision-making algorithms of autonomous systems across various scenarios. Autonomous systems inevitably face the problem of image style transfer arising from seasonal changes BIB007 , varying weather conditions BIB008 , or day-night conversion BIB013 . In particular, it is more challenging and interesting to consider transferring training data from night to day, rainy to sunny, or winter to summer, since most autonomous systems perceive better under good lighting and weather conditions than in harsh environments. The task of image style transfer is to translate a source-domain image into the target domain, preserving its content while making its style consistent with the target domain BIB003 . In addition, style transfer, as an interesting data augmentation strategy, can extend the range of lighting and weather variations, thus further improving the transferability of the model BIB017 . Likewise, using image style transfer algorithms to achieve transfer from a simulated environment to the real world is very useful for semantic segmentation, robot navigation, and grasping tasks, because training directly in the real world may incur higher experimental costs due to possible damage to hardware BIB017 . Traditional methods for style transfer mainly rely on non-parametric techniques that manipulate the pixels of the image, like BIB001 , BIB002 , etc. Although traditional methods have achieved good results in style transfer, they are limited to using low-level image features for texture transfer, rather than semantic transfer BIB004 . Traditional DL-based style transfer. Convolutional neural networks (CNNs) have been used in image style transfer, since they have achieved impressive results in numerous visual perception areas. Gatys et al.
BIB003 first proposed utilizing CNNs (pre-trained VGG networks) to separate content and style in natural images, and then combining the content of one image with the style of another into a new image to achieve artistic style transfer. This work opened up a new viewpoint for style transfer using deep neural networks (DNNs). To reduce the computational burden, Johnson et al. BIB005 proposed using a perceptual loss instead of a per-pixel loss for image style transfer tasks. This method achieves results similar to BIB003 , while being three orders of magnitude faster. In addition, Chen et al. BIB014 presented a stereoscopic neural style transfer that can be used in emerging technologies such as 3D movies or VR. This method seems promising for improving the perception accuracy of autonomous systems in unmanned scenarios, because the transferred results contain more stereo information about the scene. GANs-based style transfer. Traditional CNN-based methods minimize the Euclidean distance between predicted and ground-truth pixels, which may produce single and blurry results BIB006 . GANs can be used for image style transfer and can produce more diverse and realistic images BIB006 . Isola et al. BIB006 applied cGANs to image style transfer, and the experimental results showed that cGANs (with an L1 loss) not only produce satisfactory results for the style transfer task, but also yield reasonable results for a wide variety of problems such as semantic segmentation and background removal. However, this method requires paired image samples, which are often difficult to obtain in practice. Considering this issue, Zhu et al. BIB009 proposed CycleGAN to learn image translation between domains with unpaired examples, as shown in Fig. 3 . As mentioned in Section II, the framework of CycleGAN includes two generators and two discriminators to achieve mutual translation between the source and the target domain.
The main insight of CycleGAN is to preserve the key attributes between the input and the translated image by using a cycle consistency loss. Meanwhile, DiscoGAN BIB010 and DualGAN BIB011 adopt similar cycle-consistency ideas to achieve cross-domain image transfer. To improve CycleGAN in terms of semantic alignment at the feature level, Hoffman et al. BIB007 proposed CyCADA by combining domain adaptation with cycle-consistent adversarial training, which uniformly considers feature-level and pixel-level adversarial domain adaptation together with cycle-consistency constraints. CyCADA has achieved satisfactory results on challenging tasks, such as transfer from synthetic to real-world domains. Royer et al. BIB012 proposed XGAN, an unsupervised semantic style transfer approach for many-to-many mappings. They used domain adaptation techniques to constrain the shared embedding and proposed a semantic consistency loss as a form of self-supervision acting on both domain translations. This method generalizes well when there is a large domain shift between the two domains. In addition, to capture fine-grained local information in images, Shen et al. BIB018 proposed an instance-aware image-to-image translation approach (INIT), which applies instance and global styles to the target image spatially, as shown in Fig. 3 . Similarly, image style transfer was considered at the instance level in BIB015 , BIB016 .
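The cycle-consistency idea underlying CycleGAN, DiscoGAN, and DualGAN can be written as L_cyc(G, F) = E[||F(G(x)) - x||_1] + E[||G(F(y)) - y||_1]. A minimal numpy sketch follows; the linear "generators" and the weight lam are illustrative stand-ins for the actual networks and hyperparameters:

```python
import numpy as np

def l1(a, b):
    # Mean absolute error, the norm used in the cycle-consistency term.
    return np.mean(np.abs(a - b))

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    # G maps source -> target, F maps target -> source.
    # Penalizes F(G(x)) deviating from x and G(F(y)) deviating from y.
    return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))

# Toy "generators": scaling by 2 and its exact inverse.
G = lambda x: 2.0 * x
F = lambda y: 0.5 * y

x = np.array([1.0, -2.0, 3.0])
y = np.array([4.0, 0.5, -1.0])

print(cycle_consistency_loss(G, F, x, y))   # 0.0: the two maps invert each other
print(cycle_consistency_loss(G, G, x, y))   # > 0: the cycle is not preserved
```

In the full method this term is added to the two adversarial losses, constraining the otherwise under-determined unpaired mapping so that translation preserves the input's key attributes.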
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> This paper presents the use of B-splines as a tool in various digital signal processing applications. The theory of B-splines is briefly reviewed, followed by discussions on B-spline interpolation and B-spline filtering. Computer implementation using both an efficient software viewpoint and a hardware method are discussed. Finally, experimental results are presented for illustrative purposes in two-dimensional image format. Applications to image and signal processing include interpolation, smoothing, filtering, enlargement, and reduction. <s> BIB001 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> Cubic convolution interpolation is a new technique for resampling discrete data. It has a number of desirable features which make it useful for image processing. The technique can be performed efficiently on a digital computer. The cubic convolution interpolation function converges uniformly to the function being interpolated as the sampling increment approaches zero. With the appropriate boundary conditions and constraints on the interpolation kernel, it can be shown that the order of accuracy of the cubic convolution method is between that of linear interpolation and that of cubic splines. A one-dimensional interpolation function is derived in this paper. A separable extension of this algorithm to two dimensions is applied to image data. <s> BIB002 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> Super-Resolution reconstruction produces one or a set of high-resolution images from a sequence of low-resolution frames. This article reviews a variety of Super-Resolution methods proposed in the last 20 years, and provides some insight into, and a summary of, our recent contributions to the general Super-Resolution problem. 
In the process, a detailed study of several very important aspects of Super-Resolution, often ignored in the literature, is presented. Specifically, we discuss robustness, treatment of color, and dynamic operation modes. Novel methods for addressing these issues are accompanied by experimental results on simulated and real data. Finally, some future challenges in Super-Resolution are outlined and discussed. <s> BIB003 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> In this paper, we propose an image super-resolution approach using a novel generic image prior - gradient profile prior, which is a parametric prior describing the shape and the sharpness of the image gradients. Using the gradient profile prior learned from a large number of natural images, we can provide a constraint on image gradients when we estimate a hi-resolution image from a low-resolution image. With this simple but very effective prior, we are able to produce state-of-the-art results. The reconstructed hi-resolution image is sharp while has rare ringing or jaggy artifacts. <s> BIB004 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output.
Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework. <s> BIB005 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers.
Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. <s> BIB006 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. <s> BIB007 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> As a successful deep model applied in image super-resolution (SR), the Super-Resolution Convolutional Neural Network (SRCNN) [1, 2] has demonstrated superior performance to the previous hand-crafted models both in speed and restoration quality. However, the high computational cost still hinders it from practical usage that demands real-time performance (24 fps). In this paper, we aim at accelerating the current SRCNN, and propose a compact hourglass-shaped CNN structure for faster and better SR. We re-design the SRCNN structure mainly in three aspects.
First, we introduce a deconvolution layer at the end of the network, then the mapping is learned directly from the original low-resolution image (without interpolation) to the high-resolution one. Second, we reformulate the mapping layer by shrinking the input feature dimension before mapping and expanding back afterwards. Third, we adopt smaller filter sizes but more mapping layers. The proposed model achieves a speed up of more than 40 times with even superior restoration quality. Further, we present the parameter settings that can achieve real-time performance on a generic CPU while still maintaining good performance. A corresponding transfer strategy is also proposed for fast training and testing across different upscaling factors. <s> BIB008 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results. 
<s> BIB009 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, those two kinds of methods have their respective merits and drawbacks, e.g., model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming with sophisticated priors for the purpose of good performance, in the meanwhile, discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration induces considerable advantage when the denoiser is obtained via discriminative learning. However, the study of integration with fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization method to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers can not only achieve promising Gaussian denoising results but also can be used as prior to deliver good performance for various low-level vision applications. <s> BIB010 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? 
The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method. <s> BIB011 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. 
As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding high PSNR values. We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.
On such images, our method outperforms SotA CNN-based SR methods, as well as previous unsupervised SR methods. To the best of our knowledge, this is the first unsupervised CNN-based SR method. <s> BIB013 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN - network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge. The code is available at this https URL . <s> BIB014 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> We consider the single image super-resolution problem in a more general case that the low-/high-resolution pairs and the down-sampling process are unavailable. Different from traditional super-resolution formulation, the low-resolution input is further degraded by noises and blurring. This complicated setting makes supervised learning and accurate kernel estimation impossible. 
To solve this problem, we resort to unsupervised learning without paired data, inspired by the recent successful image-to-image translation applications. With generative adversarial networks (GAN) as the basic component, we propose a Cycle-in-Cycle network structure to tackle the problem within three steps. First, the noisy and blurry input is mapped to a noise-free low-resolution space. Then the intermediate image is up-sampled with a pre-trained deep model. Finally, we fine-tune the two modules in an end-to-end manner to get the high-resolution output. Experiments on NTIRE2018 datasets demonstrate that the proposed unsupervised method achieves comparable results to the state-of-the-art supervised models. <s> BIB015 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> Recent research on image super-resolution (SR) has shown that the use of perceptual losses such as feature-space loss functions and adversarial training can greatly improve the perceptual quality of the resulting SR output. In this paper, we extend the use of these perceptual-focused approaches for image SR to that of video SR. We design a 15-block residual neural network, VSRResNet, which is pre-trained on the traditional mean-squared-error (MSE) loss and later fine-tuned with a feature-space loss function in an adversarial setting. We show that our proposed system, VSRRes-FeatGAN, produces super-resolved frames of much higher perceptual quality than those provided by the MSE-based model.
In the meanwhile, plug-and-play image restoration has been recognized with high flexibility due to its modular structure for easy plug-in of denoiser priors. In this paper, we propose a principled formulation and framework by extending bicubic degradation based deep SISR with the help of plug-and-play framework to handle LR images with arbitrary blur kernels. Specifically, we design a new SISR degradation model so as to take advantage of existing blind deblurring methods for blur kernel estimation. To optimize the new degradation induced energy function, we then derive a plug-and-play algorithm via variable splitting technique, which allows us to plug any super-resolver prior rather than the denoiser prior as a modular part. Quantitative and qualitative evaluations on synthetic and real LR images demonstrate that the proposed deep plug-and-play super-resolution framework is flexible and effective to deal with blurry LR images. <s> BIB017 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> Stereo image pairs can be used to improve the performance of super-resolution (SR) since additional information is provided from a second viewpoint. However, it is challenging to incorporate this information for SR since disparities between stereo images vary significantly. In this paper, we propose a parallax-attention stereo superresolution network (PASSRnet) to integrate the information from a stereo image pair for SR. Specifically, we introduce a parallax-attention mechanism with a global receptive field along the epipolar line to handle different stereo images with large disparity variations. We also propose a new and the largest dataset for stereo image SR (namely, Flickr1024). Extensive experiments demonstrate that the parallax-attention mechanism can capture correspondence between stereo images to improve SR performance with a small computational and memory cost. 
Comparative results show that our PASSRnet achieves the state-of-the-art performance on the Middlebury, KITTI 2012 and KITTI 2015 datasets. <s> BIB018 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> We tackle the problem of retrieving high-resolution (HR) texture maps of objects that are captured from multiple view points. In the multi-view case, model-based super-resolution (SR) methods have been recently proved to recover high quality texture maps. On the other hand, the advent of deep learning-based methods has already a significant impact on the problem of video and image SR. Yet, a deep learning-based approach to super-resolve the appearance of 3D objects is still missing. The main limitation of exploiting the power of deep learning techniques in the multi-view case is the lack of data. We introduce a 3D appearance SR (3DASR) dataset based on the existing ETH3D [42], SyB3R [31], MiddleBury, and our Collection of 3D scenes from TUM [21], Fountain [51] and Relief [53]. We provide the high- and low-resolution texture maps, the 3D geometric model, images and projection matrices. We exploit the power of 2D learning-based SR methods and design networks suitable for the 3D multi-view case. We incorporate the geometric information by introducing normal maps and further improve the learning process. Experimental results demonstrate that our proposed networks successfully incorporate the 3D geometric information and super-resolve the texture maps. <s> BIB019 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> Recently, many convolutional neural networks for single image super-resolution (SISR) have been proposed, which focus on reconstructing the high-resolution images in terms of objective distortion measures. 
However, the networks trained with objective loss functions generally fail to reconstruct the realistic fine textures and details that are essential for better perceptual quality. Recovering the realistic details remains a challenging problem, and only a few works have been proposed which aim at increasing the perceptual quality by generating enhanced textures. However, the generated fake details often make undesirable artifacts and the overall image looks somewhat unnatural. Therefore, in this paper, we present a new approach to reconstructing realistic super-resolved images with high perceptual quality, while maintaining the naturalness of the result. In particular, we focus on the domain prior properties of SISR problem. Specifically, we define the naturalness prior in the low-level domain and constrain the output image in the natural manifold, which eventually generates more natural and realistic images. Our results show better naturalness compared to the recent super-resolution algorithms including perception-oriented ones. <s> BIB020 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> Single image super-resolution (SISR) aims to recover a high-resolution image from a given low-resolution version of it. Video super-resolution (VSR) targets a series of given images, aiming to fuse them to create a higher resolution outcome. Although SISR and VSR seem to have a lot in common, most SISR algorithms do not have a simple and direct extension to VSR. VSR is considered a more challenging inverse problem, mainly due to its reliance on a sub-pixel accurate motion-estimation, which has no parallel in SISR. Another complication is the dynamics of the video, often addressed by simply generating a single frame instead of a complete output sequence. In this paper, we suggest a simple and robust super-resolution framework that can be applied to single images and easily extended to video. 
Our work relies on the observation that denoising of images and videos is well-managed and very effectively treated by a variety of methods. We exploit the plug-and-play-prior framework and the regularization-by-denoising (RED) approach that extends it, and show how to use such denoisers in order to handle the SISR and the VSR problems using a unified formulation and framework. This way, we benefit from the effectiveness and efficiency of existing image/video denoising algorithms, while solving much more challenging problems. More specifically, harnessing the VBM3D video denoiser, we obtain a strongly competitive motion-estimation free VSR algorithm, showing tendency to a high-quality output and fast processing. <s> BIB021 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> B. Super-resolution <s> Most learning-based super-resolution (SR) methods aim to recover high-resolution (HR) image from a given low-resolution (LR) image via learning on LR-HR image pairs. The SR methods learned on synthetic data do not perform well in real-world, due to the domain gap between the artificially synthesized and real LR images. Some efforts are thus taken to capture real-world image pairs. The captured LR-HR image pairs usually suffer from unavoidable misalignment, which hampers the performance of end-to-end learning, however. Here, focusing on the real-world SR, we ask a different question: since misalignment is unavoidable, can we propose a method that does not need LR-HR image pairing and alignment at all and utilize real images as they are? Hence we propose a framework to learn SR from an arbitrary set of unpaired LR and HR images and see how far a step can go in such a realistic and "unsupervised" setting. To do so, we firstly train a degradation generation network to generate realistic LR images and, more importantly, to capture their distribution (i.e., learning to zoom out). 
Instead of assuming the domain gap has been eliminated, we minimize the discrepancy between the generated data and real data while learning a degradation adaptive SR network (i.e., learning to zoom in). The proposed unpaired method achieves state-of-the-art SR results on real-world images, even in the datasets that favor the paired-learning methods more. <s> BIB022
Super-resolution (SR) is a challenging visual perception task that generates high-resolution (HR) images from low-resolution (LR) image inputs . SR is crucial for autonomous systems to understand the environment at a high level; for example, SR is helpful for constructing dense maps. In this subsection, we first discuss recent developments in SR focusing on accuracy, and then summarize new developments considering transferability. A number of methods are dedicated to improving image quality, such as single-image interpolation BIB001 and image restoration BIB010 . It is worth pointing out that these differ from SR. On the one hand, single-image interpolation usually cannot restore high-frequency details BIB001 . On the other hand, image restoration often uses methods such as image sharpening, in which the input and output images remain the same size, although the output quality is improved BIB010 . SR not only improves the output quality but also increases the number of pixels per unit area, i.e., the image size increases . Recently, a large number of SR methods have been proposed, such as interpolation-based methods BIB002 and reconstruction-based methods BIB004 . Farsiu et al. BIB003 introduced the advances and challenges of traditional SR methods. Traditional DL-based SR. Several works study traditional DL-based methods without adversarial learning for SR, which are mainly CNN-based. Dong et al. BIB006 considered using CNNs to handle SR tasks in an end-to-end manner. They presented the super-resolution convolutional neural network (SRCNN), which needs little pre/post-processing beyond the optimization itself. They also confirmed that DL provides better quality and speed for SR than the sparse-coding method BIB005 and the K-SVD-based method , although SRCNN only uses information from the luminance channel. Dong et al.
BIB007 then extended SRCNN to process three color channels simultaneously to improve the accuracy of SR results. Considering the poor real-time performance of SRCNN, Dong et al. utilized a compact hourglass-shaped CNN structure to accelerate SRCNN BIB008 . In fact, most learning-based SR methods use a per-pixel loss between the output image and the ground-truth image BIB006 , BIB007 . Johnson et al. BIB009 considered the use of a perceptual loss to achieve better SR, which reconstructs details better than the per-pixel loss. Note that the above-mentioned SR methods often rely on specific training data. Under non-ideal imaging conditions caused by noise or compression artifacts, these methods usually fail to provide good SR results. Therefore, Shocher et al. BIB013 proposed "Zero-Shot" SR (ZSSR), which does not rely on prior training. To the best of our knowledge, ZSSR is the first unsupervised CNN-based SR method, and it achieves reasonable SR results under some complex or unknown imaging conditions. However, due to the lack of internal recurrence in very blurry LR images, ZSSR is less effective on them. Taking this issue into account, Zhang et al. BIB017 proposed a deep plug-and-play SR framework for LR images with arbitrary blur kernels, which is flexible and effective in dealing with very blurry LR images. Recent trends in SR also include SR for stereo images BIB018 and 3D appearance BIB019 . GANs-based SR. In addition to traditional DL-based SR methods, GANs have shown promising results in SR. Using GANs for SR has the advantage of bringing the generated results closer to the natural image manifold, which may improve the accuracy of the results BIB011 . The representative work on GANs-based SR (SRGAN) was presented by Ledig et al. BIB011 , which combines a content loss with an adversarial loss by training GANs.
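The combination just described, a content loss plus a small weighted adversarial term, can be sketched numerically. This is a toy stand-in, not the paper's implementation: the feature vectors, discriminator probability, and the 1e-3 weight are hypothetical values chosen only to mirror the shape of the SRGAN-style objective.

```python
import math

def mse(a, b):
    """Per-element mean squared error between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def perceptual_loss(feat_sr, feat_hr, d_prob_sr, adv_weight=1e-3):
    # content term: distance in a feature space (VGG features in the paper)
    content = mse(feat_sr, feat_hr)
    # adversarial term -log D(G(LR)): rewards fooling the discriminator
    adversarial = -math.log(d_prob_sr)
    return content + adv_weight * adversarial

# hypothetical feature vectors and discriminator output
loss = perceptual_loss([0.2, 0.4], [0.3, 0.1], d_prob_sr=0.5)
print(round(loss, 4))  # 0.0507
```

The small `adv_weight` keeps the content term dominant while the adversarial term nudges outputs toward the natural image manifold.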
This method is capable of reconstructing photo-realistic natural images at a 4× upscaling factor. Although SRGAN achieves good SR results, it does not consider the local matching of texture statistics, which may restrict further improvement of the SR results. To address this, Sajjadi et al. BIB012 focused on creating realistic textures for SR. They proposed EnhanceNet, which combines adversarial training, a perceptual loss, and a newly proposed texture transfer loss to achieve high-resolution results with realistic textures. To further improve the accuracy of SRGAN, Wang et al. BIB014 extended SRGAN to ESRGAN by introducing the residual-in-residual dense block and improving both the discriminator and the perceptual loss. ESRGAN consistently achieves better visual quality and more natural textures than SRGAN BIB011 , as shown in Fig. 4 . HR images are conducive to improving the accuracy of perception tasks in autonomous systems. However, autonomous systems may encounter more complicated situations, such as when HR datasets are unavailable or the input LR images are noisy and blurry, which means that SR cannot be learned from paired data. Inspired by the cycle consistency of CycleGAN, Yuan et al. BIB015 tackled these issues with a Cycle-in-Cycle network (CinCGAN), which consists of two CycleGANs. The first CycleGAN maps LR images to a clean LR space, applying proper denoising/deblurring to the original LR input. A well-trained deep model is then stacked on top to up-sample the intermediate results to the desired size. Finally, adversarial learning fine-tunes the network in an end-to-end manner. The second CycleGAN contains the first and maps the original LR images to HR. CinCGAN achieves results comparable to the supervised method BIB008 . Most SR methods trained on synthetic datasets are not effective in the real world.
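The clean-then-upsample structure of such a Cycle-in-Cycle pipeline can be summarized as a composition of stages. Every stage below is a hypothetical stand-in (a toy clamp-based "denoiser" and a nearest-neighbour "super-resolver" on integer pixel values), not the learned networks; only the composition order reflects the described design.

```python
def denoise_stage(lr):
    """Stage 1 stand-in: map a noisy LR signal to a 'clean' LR space."""
    return [max(0, v - 10) for v in lr]   # hypothetical fixed-offset cleanup

def upsample_stage(lr, factor=2):
    """Stage 2 stand-in: a pre-trained SR model (here: nearest-neighbour)."""
    return [v for v in lr for _ in range(factor)]

def cinc_pipeline(lr):
    """End-to-end composition; the real system fine-tunes both stages jointly."""
    return upsample_stage(denoise_stage(lr))

print(cinc_pipeline([30, 60]))  # [20, 20, 50, 50]
```

Keeping the stages separable mirrors the paper's training recipe: pre-train each module, then fine-tune the composition end-to-end.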
SRGAN and EnhanceNet increase the perceptual quality by enhancing textures, which often produces fake details and unnatural artifacts. Soh et al. BIB020 therefore focused on the naturalness of the results to reconstruct realistic HR images. Further considering model transferability, Gong et al. BIB022 proposed to reduce the domain shift between synthetic and real-world data by aligning feature distributions while achieving SR. Specifically, they proposed a method to learn real-world SR from a set of unpaired LR and HR images, which achieves satisfactory SR results on both paired and unpaired datasets. It is difficult to directly extend image SR methods to video SR. Recent developments include using the same framework to implement image SR and video SR BIB021 , and real-time video SR using GANs BIB016 .
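Several of the methods above (e.g., the "Zero-Shot" SR of BIB013) sidestep external training data by exploiting internal recurrence: the given LR image itself serves as the training target, and a further-downscaled copy serves as the input. A minimal sketch of that pair generation (the 2×2 average pooling is an assumed, simplified downscaling kernel):

```python
def downscale2(img):
    """Naive 2x2 average pooling of a grayscale image (list of lists)."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * i][2 * j] + img[2 * i][2 * j + 1] +
              img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(w)] for i in range(h)]

father = [[10, 20],
          [30, 40]]           # the given LR image acts as the training *target*
son = downscale2(father)      # its downscaled copy acts as the training *input*
print(son)                    # [[25.0]]
```

An image-specific model trained on many such (son, father) patch pairs can then be applied to the original LR image to hallucinate its HR version.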
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> In this paper, we propose a new method that can remove view-disturbing noises from stereo images. One of the thorny problems in outdoor surveillance by a camera is that adherent noises such as waterdrops on the protecting glass surface lens disturb the view from the camera. Therefore, we propose a method for removing adherent noises from stereo images taken with a stereo camera system. Our method is based on the stereo measurement and utilizes disparities between stereo image pair. Positions of noises in images can be detected by comparing disparities measured from stereo images with the distance between the stereo camera system and the glass surface. True disparities of image regions hidden by noises can be estimated from the property that disparities are generally similar with those around noises. Finally, we can remove noises from images by replacing the above regions with textures of corresponding image regions obtained by the disparity referring. Experimental results show the effectiveness of the proposed method. <s> BIB001 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> This paper describes a method for removing adherent noises from image sequences. In outdoor environments, it is often the case that scenes taken by a camera are deteriorated because of adherent noises such as waterdrops on the surface of the lens-protecting glass of the camera. To solve this problem, our method takes advantage of image sequences captured with a moving camera whose motion is unknown. Our method estimates a camera motion only from image sequences, and makes a spatio-temporal image to extract the regions of adherent noises by examining differences of track slopes in cross section images between adherent noises and other objects. 
Finally, noise regions are eliminated by replacing them with image data corresponding to the object regions. Experimental results show the effectiveness of our method. <s> BIB002 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> Photographs taken through a window are often compromised by dirt or rain present on the window surface. Common cases of this include pictures taken from inside a vehicle, or outdoor security cameras mounted inside a protective enclosure. At capture time, defocus can be used to remove the artifacts, but this relies on achieving a shallow depth-of-field and placement of the camera close to the window. Instead, we present a post-capture image processing solution that can remove localized rain and dirt artifacts from a single image. We collect a dataset of clean/corrupted image pairs which are then used to train a specialized form of convolutional neural network. This learns how to map corrupted image patches to clean ones, implicitly capturing the characteristic appearance of dirt and water droplets in natural images. Our models demonstrate effective removal of dirt and rain in outdoor test conditions. <s> BIB003 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> In this paper, we address the problem of estimating and removing non-uniform motion blur from a single blurry image. We propose a deep learning approach to predicting the probabilistic distribution of motion blur at the patch level using a convolutional neural network (CNN). We further extend the candidate set of motion kernels predicted by the CNN using carefully designed image rotations. A Markov random field model is then used to infer a dense non-uniform motion blur field enforcing motion smoothness.
Finally, motion blur is removed by a non-uniform deblurring model using patch-level image prior. Experimental evaluations show that our approach can effectively estimate and remove complex non-uniform motion blur that is not handled well by previous approaches. <s> BIB004 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> We describe a learning-based approach to blind image deconvolution. It uses a deep layered architecture, parts of which are borrowed from recent work on neural network learning, and parts of which incorporate computations that are specific to image deconvolution. The system is trained end-to-end on a set of artificially generated training examples, enabling competitive performance in blind deconvolution, both with respect to quality and runtime. <s> BIB005 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> Single image haze removal is a challenging ill-posed problem. Existing methods use various constraints/priors to get plausible dehazing solutions. The key to achieve haze removal is to estimate a medium transmission map for an input hazy image. In this paper, we propose a trainable end-to-end system called DehazeNet, for medium transmission estimation. DehazeNet takes a hazy image as input, and outputs its medium transmission map that is subsequently used to recover a haze-free image via atmospheric scattering model. DehazeNet adopts convolutional neural network-based deep architecture, whose layers are specially designed to embody the established assumptions/priors in image dehazing. Specifically, the layers of Maxout units are used for feature extraction, which can generate almost all haze-relevant features. 
We also propose a novel nonlinear activation function in DehazeNet, called bilateral rectified linear unit, which is able to improve the quality of recovered haze-free image. We establish connections between the components of the proposed DehazeNet and those used in existing methods. Experiments on benchmark images show that DehazeNet achieves superior performance over existing methods, yet keeps efficient and easy to use. <s> BIB006 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> The performance of existing image dehazing methods is limited by hand-designed features, such as the dark channel, color disparity and maximum contrast, with complex fusion schemes. In this paper, we propose a multi-scale deep neural network for single-image dehazing by learning the mapping between hazy images and their corresponding transmission maps. The proposed algorithm consists of a coarse-scale net which predicts a holistic transmission map based on the entire image, and a fine-scale net which refines results locally. To train the multi-scale deep network, we synthesize a dataset comprised of hazy images and corresponding transmission maps based on the NYU Depth dataset. Extensive experiments demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of quality and speed. <s> BIB007 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> Raindrops adhered to a windscreen or window glass can significantly degrade the visibility of a scene. Modeling, detecting and removing raindrops will, therefore, benefit many computer vision applications, particularly outdoor surveillance systems and intelligent vehicle systems. 
In this paper, a method that automatically detects and removes adherent raindrops is introduced. The core idea is to exploit the local spatio-temporal derivatives of raindrops. To accomplish the idea, we first model adherent raindrops using law of physics, and detect raindrops based on these models in combination with motion and intensity temporal derivatives of the input video. Having detected the raindrops, we remove them and restore the images based on an analysis that some areas of raindrops completely occludes the scene, and some other areas occlude only partially. For partially occluding areas, we restore them by retrieving as much as possible information of the scene, namely, by solving a blending function on the detected partially occluding areas using the temporal intensity derivative. For completely occluding areas, we recover them by using a video completion technique. Experimental results using various real videos show the effectiveness of our method. <s> BIB008 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem as blurs arise not only from multiple object motions but also from camera shake, scene depth variation. To remove these complicated motion blurs, conventional energy optimization based methods rely on simple assumptions such that blur kernel is partially uniform or locally linear. Moreover, recent machine learning based methods also depend on synthetic blur datasets generated under these assumptions. This makes conventional deblurring methods fail to remove blurs where blur kernel is difficult to approximate or parameterize (e.g. object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. 
Together, we present multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry image and the corresponding ground truth sharp image that are obtained by a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves the state-of-the-art performance in dynamic scene deblurring not only qualitatively, but also quantitatively. <s> BIB009 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms. <s> BIB010 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> Single image haze removal is an extremely challenging problem due to its inherent ill-posed nature. Several prior-based and learning-based methods have been proposed in the literature to solve this problem and they have achieved superior results. 
However, most of the existing methods assume constant atmospheric light model and tend to follow a two- step procedure involving prior-based methods for estimating transmission map followed by calculation of dehazed image using the closed form solution. In this paper, we relax the constant atmospheric light assumption and propose a novel unified single image dehazing network that jointly estimates the transmission map and performs dehazing. In other words, our new approach provides an end-to-end learning framework, where the inherent transmission map and dehazed result are learned directly from the loss function. Extensive experiments on synthetic and real datasets with challenging hazy images demonstrate that the proposed method achieves significant improvements over the state-of-the-art methods. <s> BIB011 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. 
In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional General Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance. <s> BIB012 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> We present DeblurGAN, an end-to-end learned method for motion deblurring. The learning is based on a conditional GAN and the content loss. DeblurGAN achieves state-of-the art performance both in the structural similarity measure and visual appearance. The quality of the deblurring model is also evaluated in a novel way on a real-world problem - object detection on (de-)blurred images. The method is 5 times faster than the closest competitor - Deep-Deblur [25]. We also introduce a novel method for generating synthetic motion blurred images from sharp ones, allowing realistic dataset augmentation. The model, code and the dataset are available at https://github.com/KupynOrest/DeblurGAN <s> BIB013 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. 
Image deblurring & Image dehazing & Image rain removal <s> Raindrops adhered to a glass window or camera lens can severely hamper the visibility of a background scene and degrade an image considerably. In this paper, we address the problem by visually removing raindrops, and thus transforming a raindrop degraded image into a clean one. The problem is intractable, since first the regions occluded by raindrops are not given. Second, the information about the background scene of the occluded regions is completely lost for most part. To resolve the problem, we apply an attentive generative network using adversarial training. Our main idea is to inject visual attention into both the generative and discriminative networks. During the training, our visual attention learns about raindrop regions and their surroundings. Hence, by injecting this information, the generative network will pay more attention to the raindrop regions and the surrounding structures, and the discriminative network will be able to assess the local consistency of the restored regions. This injection of visual attention to both generative and discriminative networks is the main contribution of this paper. Our experiments show the effectiveness of our approach, which outperforms the state of the art methods quantitatively and qualitatively. <s> BIB014 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> In this paper, we present an end-to-end network, called Cycle-Dehaze, for single image dehazing problem, which does not require pairs of hazy and corresponding ground truth images for training. That is, we train the network by feeding clean and hazy images in an unpaired manner. Moreover, the proposed approach does not rely on estimation of the atmospheric scattering model parameters. 
Our method enhances CycleGAN formulation by combining cycle-consistency and perceptual losses in order to improve the quality of textural information recovery and generate visually better haze-free images. Typically, deep learning models for dehazing take low resolution images as input and produce low resolution outputs. However, in the NTIRE 2018 challenge on single image dehazing, high resolution images were provided. Therefore, we apply bicubic downscaling. After obtaining low-resolution outputs from the network, we utilize the Laplacian pyramid to upscale the output images to the original resolution. We conduct experiments on NYU-Depth, I-HAZE, and O-HAZE datasets. Extensive experiments demonstrate that the proposed approach improves CycleGAN method both quantitatively and qualitatively. <s> BIB015 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> In this paper, we present an algorithm to directly restore a clear image from a hazy image. This problem is highly ill-posed and most existing algorithms often use hand-crafted features, e.g., dark channel, color disparity, maximum contrast, to estimate transmission maps and then atmospheric lights. In contrast, we solve this problem based on a conditional generative adversarial network (cGAN), where the clear image is estimated by an end-to-end trainable neural network. Different from the generative network in basic cGAN, we propose an encoder and decoder architecture so that it can generate better results. To generate realistic clear images, we further modify the basic cGAN formulation by introducing the VGG features and an L1-regularized gradient prior. We also synthesize a hazy dataset including indoor and outdoor scenes to train and evaluate the proposed algorithm. 
Extensive experimental results demonstrate that the proposed method performs favorably against the state-of-the-art methods on both synthetic dataset and real world hazy images. <s> BIB016 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> We present a new end-to-end generative adversarial network (GAN) for single image motion deblurring, named DeblurGAN-v2, which considerably boosts state-of-the-art deblurring efficiency, quality, and flexibility. DeblurGAN-v2 is based on a relativistic conditional GAN with a double-scale discriminator. For the first time, we introduce the Feature Pyramid Network into deblurring, as a core building block in the generator of DeblurGAN-v2. It can flexibly work with a wide range of backbones, to navigate the balance between performance and efficiency. The plug-in of sophisticated backbones (e.g., Inception-ResNet-v2) can lead to solid state-of-the-art deblurring. Meanwhile, with light-weight backbones (e.g., MobileNet and its variants), DeblurGAN-v2 reaches 10-100 times faster than the nearest competitors, while maintaining close to state-of-the-art results, implying the option of real-time video deblurring. We demonstrate that DeblurGAN-v2 obtains very competitive performance on several popular benchmarks, in terms of deblurring quality (both objective and subjective), as well as efficiency. Besides, we show the architecture to be effective for general image restoration tasks too. Our codes, models and data are available at: this https URL <s> BIB017 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> Blind deconvolution problems are heavily ill-posed where the specific blurring kernel is not known. Recovering these images typically requires estimates of the kernel. 
In this paper, we present a method called Dr-Net, which does not require any such estimate and is further able to invert the effects of the blurring in blind image recovery tasks. These image recovery problems typically have two terms, the data fidelity term (for faithful reconstruction) and the image prior (for realistic looking reconstructions). We use the Douglas-Rachford iterations to solve this problem since it is a more generally applicable optimization procedure than methods such as the proximal gradient descent algorithm. Two proximal operators originate from these iterations, one from the data fidelity term and the second from the image prior. It is non-trivial to design a hand-crafted function to represent these proximal operators for the data fidelity and the image prior terms which would work with real-world image distributions. We therefore approximate both these proximal operators using deep networks. This provides a sound motivation for the final architecture for Dr-Net which we find outperforms the state-of-the-art on two mainstream blind deconvolution benchmarks. We also find that Dr-Net is one of the fastest algorithms according to wall-clock times while doing so. <s> BIB018 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> Image deblurring aims to restore the latent sharp images from the corresponding blurred ones. In this paper, we present an unsupervised method for domain-specific, single-image deblurring based on disentangled representations. The disentanglement is achieved by splitting the content and blur features in a blurred image using content encoders and blur encoders. We enforce a KL divergence loss to regularize the distribution range of extracted blur attributes such that little content information is contained. 
Meanwhile, to handle the unpaired training data, a blurring branch and the cycle-consistency loss are added to guarantee that the content structures of the deblurred results match the original images. We also add an adversarial loss on deblurred results to generate visually realistic images and a perceptual loss to further mitigate the artifacts. We perform extensive experiments on the tasks of face and text deblurring using both synthetic datasets and real images, and achieve improved results compared to recent state-of-the-art deblurring methods. <s> BIB019 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> Nowadays stereo cameras are more commonly adopted in emerging devices such as dual-lens smartphones and unmanned aerial vehicles. However, they also suffer from blurry images in dynamic scenes which leads to visual discomfort and hampers further image processing. Previous works have succeeded in monocular deblurring, yet there are few studies on deblurring for stereoscopic images. By exploiting the two-view nature of stereo images, we propose a novel stereo image deblurring network with Depth Awareness and View Aggregation, named DAVANet. In our proposed network, 3D scene cues from the depth and varying information from two views are incorporated, which help to remove complex spatially-varying blur in dynamic scenes. Specifically, with our proposed fusion network, we integrate the bidirectional disparities estimation and deblurring into a unified framework. Moreover, we present a large-scale multi-scene dataset for stereo deblurring, containing 20,637 blurry-sharp stereo image pairs from 135 diverse sequences and their corresponding bidirectional disparities. The experimental results on our dataset demonstrate that DAVANet outperforms state-of-the-art methods in terms of accuracy, speed, and model size. 
<s> BIB020 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> Outdoor scene images generally undergo visibility degradation in presence of aerosol particles such as haze, fog and smoke. The reason behind this is, aerosol particles scatter the light rays reflected from the object surface and thus results in attenuation of light intensity. Effect of haze is inversely proportional to the transmission coefficient of the scene point. Thus, estimation of accurate transmission map (TrMap) is a key step to reconstruct the haze-free scene. Previous methods used various assumptions/priors to estimate the scene TrMap. Also, available end-to-end dehazing approaches make use of supervised training to anticipate the TrMap on synthetically generated paired hazy images. Despite the success of previous approaches, they fail in real-world extreme vague conditions due to unavailability of the real-world hazy image pairs for training the network. Thus, in this paper, Cycle-consistent generative adversarial network for single image De-hazing named as CDNet is proposed which is trained in an unpaired manner on real-world hazy image dataset. Generator network of CDNet comprises of encoder-decoder architecture which aims to estimate the object level TrMap followed by optical model to recover the haze-free scene. We conduct experiments on four datasets namely: D-HAZY [1], Imagenet [5], SOTS [20] and real-world images. Structural similarity index, peak signal to noise ratio and CIEDE2000 metric are used to evaluate the performance of the proposed CDNet. Experiments on benchmark datasets show that the proposed CDNet outperforms the existing state-of-the-art methods for single image haze removal. <s> BIB021 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. 
Image deblurring & Image dehazing & Image rain removal <s> In this letter we introduce a simple but effective concept, Color Channel Transfer (CCT), that is able to substantially improve the performance of various dehazing techniques. CCT is motivated by a key observation: in scattering media the information from at least one color channel presents high attenuation. To compensate for the loss of information in one color channel, CCT employs a color-transfer strategy and operates in a color opponent space that helps to compensate automatically the chromatic loss. The reference is computed by combining the details and saliency of the initial image with uniform gray image that assures a balanced chromatic distribution. The extensive qualitative and quantitative experiments demonstrate the utility of CCT as a preprocessing step for various dehazing problems such as day-time dehazing, night-time dehazing, and underwater image dehazing. <s> BIB022 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> We propose an end-to-end trainable Convolutional Neural Network (CNN), named GridDehazeNet, for single image dehazing. The GridDehazeNet consists of three modules: pre-processing, backbone, and post-processing. The trainable pre-processing module can generate learned inputs with better diversity and more pertinent features as compared to those derived inputs produced by hand-selected pre-processing methods. The backbone module implements a novel attention-based multi-scale estimation on a grid network, which can effectively alleviate the bottleneck issue often encountered in the conventional multi-scale approach. The post-processing module helps to reduce the artifacts in the final output. Experimental results indicate that the GridDehazeNet outperforms the state-of-the-arts on both synthetic and real-world images. 
The proposed hazing method does not rely on the atmosphere scattering model, and we provide an explanation as to why it is not necessarily beneficial to take advantage of the dimension reduction offered by the atmosphere scattering model for image dehazing, even if only the dehazing results on synthetic images are concerned. <s> BIB023 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> Most deraining works focus on rain streaks removal but they cannot deal adequately with heavy rain images. In heavy rain, streaks are strongly visible, dense rain accumulation or rain veiling effect significantly washes out the image, further scenes are relatively more blurry, etc. In this paper, we propose a novel method to address these problems. We put forth a 2-stage network: a physics-based backbone followed by a depth-guided GAN refinement. The first stage estimates the rain streaks, the transmission, and the atmospheric light governed by the underlying physics. To tease out these components more reliably, a guided filtering framework is used to decompose the image into its low- and high-frequency components. This filtering is guided by a rain-free residue image --- its content is used to set the passbands for the two channels in a spatially-variant manner so that the background details do not get mixed up with the rain-streaks. For the second stage, the refinement stage, we put forth a depth-guided GAN to recover the background details failed to be retrieved by the first stage, as well as correcting artefacts introduced by that stage. We have evaluated our method against the state of the art methods. Extensive experiments show that our method outperforms them on real rain image data, recovering visually clean images with good details. <s> BIB024 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. 
Image deblurring & Image dehazing & Image rain removal <s> Abstract Single image rain removal plays an important role in numerous multimedia applications. Existing algorithms usually tackle the deraining problem by the way of signal removal, which lead to over-smoothness and generate unexpected artifacts in de-rained images. This paper addresses the deraining problem from a completely different perspective of feature-wise disentanglement, and introduces the interactions and constraints between two disentangled latent spaces. Specifically, we propose an Asynchronous Interactive Generative Adversarial Network (AI-GAN) to progressively disentangle the rainy image into background and rain spaces in feature level through a two-branch structure. Each branch employs a two-stage synthesis strategy and interacts asynchronously by exchanging feed-forward information and sharing feedback gradients, achieving complementary adversarial optimization. This ‘adversarial’ is not only the ‘adversarial’ between the generator and the discriminator, but also means that the two generators are entangled, and interact with each other in the optimization process. Extensive experimental results demonstrate that AI-GAN outperforms state-of-the-art deraining methods and benefits various typical multimedia applications such as Image/Video Coding, Action Recognition, and Person Re-identification. <s> BIB025 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> C. Image deblurring & Image dehazing & Image rain removal <s> Single image dehazing is a critical stage in many modern-day autonomous vision applications. Early prior-based methods often involved a time-consuming minimization of a hand-crafted energy function. Recent learning-based approaches utilize the representational power of deep neural networks (DNNs) to learn the underlying transformation between hazy and clear images. 
Due to inherent limitations in collecting matching clear and hazy images, these methods resort to training on synthetic data, constructed from indoor images and corresponding depth information. This may result in a possible domain shift when treating outdoor scenes. We propose a completely unsupervised method of training via minimization of the well-known, Dark Channel Prior (DCP) energy function. Instead of feeding the network with synthetic data, we solely use real-world outdoor images and tune the network’s parameters by directly minimizing the DCP. Although our “Deep DCP” technique can be regarded as a fast approximator of DCP, it actually improves its results significantly. This suggests an additional regularization obtained via the network and learning process. Experiments show that our method performs on par with large-scale supervised methods. <s> BIB026
Autonomous systems often encounter poor weather conditions such as rain and fog, and images may also be blurred by poor shooting conditions or fast-moving objects. It is well recognized that the accuracy of computer vision tasks heavily depends on the quality of the input images. Hence, it is of great importance to study image deblurring/dehazing/rain removal for autonomous systems, since these restoration steps make high-level understanding tasks such as semantic segmentation and depth estimation feasible in practice. When adversarial learning, e.g., GANs, is applied to image deblurring/dehazing/rain removal, it not only generates realistic images that improve the accuracy of image recovery, but also improves the transferability of the models by addressing the transfer from synthetic datasets to real-world images. Image deblurring. Image blur is widely observed in autonomous systems and heavily affects the understanding of the surroundings. To tackle image deblurring, several DL-based methods without adversarial learning were proposed successively BIB004 , BIB005 , BIB009 . Given the convincing performance of GANs in preserving image textures and creating realistic images, and inspired by image-to-image translation with GANs, Kupyn et al. BIB013 regarded image deblurring as a special image-to-image translation task. They proposed DeblurGAN, an end-to-end deblurring method based on cGANs. This method considers both accuracy and transferability: DeblurGAN improves deblurring results and is 5 times faster than BIB009 on both synthetic and real-world blurry images. Kupyn et al. BIB017 further improved DeblurGAN by adding a feature pyramid network to the generator and adopting a double-scale discriminator, yielding DeblurGAN-v2.
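The cGAN-based deblurring objective described above combines an adversarial term with a content (perceptual) term. The following is a minimal numpy sketch of that composite loss; the function name and weighting are ours, and a pixel-space MSE stands in for the VGG-feature perceptual loss used by DeblurGAN:

```python
import numpy as np

def deblur_gan_loss(restored, sharp, critic_score, lam=100.0):
    """Toy composite objective in the spirit of DeblurGAN:
    a WGAN-style adversarial term (the generator wants a high
    critic score, so we minimize its negation) plus a weighted
    content term. A pixel-space MSE stands in for the perceptual
    loss; `lam` balances the two terms."""
    adv = -float(critic_score)
    content = float(np.mean((restored - sharp) ** 2))
    return adv + lam * content

rng = np.random.default_rng(0)
sharp = rng.random((8, 8, 3))
restored_good = sharp + 0.01 * rng.standard_normal(sharp.shape)
restored_bad = rng.random((8, 8, 3))

# With the same critic score, a restoration closer to the sharp
# image yields a lower total loss.
assert deblur_gan_loss(restored_good, sharp, critic_score=0.5) < \
       deblur_gan_loss(restored_bad, sharp, critic_score=0.5)
```

In the actual networks the critic score and perceptual distance are produced by trained discriminator and VGG feature extractors, but the trade-off structure of the objective is the same.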
DeblurGAN-v2 achieves better accuracy than DeblurGAN while being 10 ∼ 100 times faster than competitors, which makes it applicable to real-time video deblurring, as shown in Fig. 4 . Recently, Aljadaany et al. BIB018 presented Dr-Net, which combines Douglas-Rachford iterations and Wasserstein GAN BIB010 to solve image deblurring without knowing the specific blurring kernel. In addition, Lu et al. BIB019 extracted content and blur features separately from blurred images to encode the blur features accurately into the deblurring framework; they also utilized the cycle-consistency loss to preserve the content structure of the original image. Considering that stereo cameras are increasingly used on unmanned aerial vehicles, Zhou et al. BIB020 focused on the deblurring of stereo images. (Fig. 4: image deblurring results BIB017 ; (c) image dehazing results BIB021 ; (d) image rain removal results BIB014 .) Image dehazing. Haze is a typical weather phenomenon with poor visibility, which forms a major obstacle for computer vision applications. Image dehazing aims to recover the clear scene reflection, atmospheric light color, and transmission map from an input image BIB015 . In recent years, a series of learning-based image dehazing methods have been proposed BIB006 , BIB007 , BIB011 . Although these methods do not require prior information, their dependence on intermediate parameters and models can severely affect the quality of the dehazed images. To reduce the effect of intermediate parameters on the model, and to establish image dehazing methods with good transferability, a series of GAN-based methods have been proposed. Li et al. BIB016 tackled image dehazing with a cGAN. Different from the basic cGAN, the generator in this method adopts an encoder-decoder architecture, which helps it capture more useful features and generate realistic results.
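Most of the dehazing methods above are built around the atmospheric scattering model, in which a hazy image I is formed from the clear scene J, a per-pixel transmission map t, and the atmospheric light A. A minimal numpy sketch of the model and its inversion follows; the function names are ours, and unlike real methods, which must estimate t and A, both are given here:

```python
import numpy as np

def synthesize_haze(J, t, A):
    """Atmospheric scattering model used by transmission-map
    methods: I = J*t + A*(1 - t)."""
    return J * t[..., None] + A * (1.0 - t[..., None])

def dehaze(I, t, A, t0=0.1):
    """Invert the model: J = (I - A)/max(t, t0) + A. Clamping the
    transmission at t0 avoids amplifying noise where t -> 0."""
    t_clamped = np.maximum(t, t0)[..., None]
    return (I - A) / t_clamped + A

rng = np.random.default_rng(1)
J = rng.random((4, 4, 3))            # clear scene radiance
t = 0.2 + 0.7 * rng.random((4, 4))   # per-pixel transmission
A = np.array([0.9, 0.9, 0.9])        # atmospheric light (near white)

I = synthesize_haze(J, t, A)
J_hat = dehaze(I, t, A)
assert np.allclose(J_hat, J, atol=1e-8)  # exact when t and A are known
```

The hard part of dehazing is precisely what this sketch assumes away: estimating t and A from the hazy image alone, which is what DehazeNet BIB006 , the multi-scale network BIB007 , and the GAN-based methods above learn to do.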
The addition of the cGAN enables the method in BIB016 to achieve good results on both synthetic datasets and real-world hazy images. Considering transferability across different scenarios and datasets, as well as independence from paired images, Engin et al. BIB015 proposed the Cycle-Dehaze network by utilizing CycleGAN. This approach adds a cyclic perceptual-consistency loss to the cycle-consistency loss, thereby achieving image dehazing across datasets with unpaired images. Similar bidirectional GANs for dehazing have also been studied in . Since it is difficult for the Cycle-Dehaze network to reconstruct real scene information without color distortion, Dudhane et al. BIB021 proposed the cycle-consistent generative adversarial network (CDNet), which utilizes the optical model to find the haze distribution from depth information; CDNet recovers the haze-free scene without color distortion. The image dehazing results of Cycle-Dehaze and CDNet are shown in Fig. 4 . Most image dehazing methods only consider objects at the same scale-space, so dehazed images suffer from blurriness and halo artifacts. Sharma et al. [140] considered improving both the accuracy and the transferability of image dehazing, and presented an approach that removes haze based on the per-pixel difference between the Laplacians of Gaussian (LoG) of hazy images and of the original haze-free images in scale-space. The model showed compelling results from simulated datasets to real-world images, from indoor to outdoor scenes. Recent developments in image dehazing also target different channels, such as the color channel BIB022 and the dark channel BIB026 , as well as multi-scale networks BIB023 . Image rain removal. Image rain removal is a challenging task, because the size, number and shape of raindrops are usually uncertain and difficult to learn.
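The cycle-consistency constraint that Cycle-Dehaze and CDNet build on is easy to state in code. Below is a hedged sketch with toy lambda "generators" standing in for the dehazing network G and the re-hazing network F; the real networks are CNNs, and Cycle-Dehaze additionally adds a perceptual term on top of this L1 cycle loss.

```python
import numpy as np

def l1(a, b):
    # Mean absolute error between two images.
    return float(np.mean(np.abs(a - b)))

def cycle_consistency_loss(G, F, x_hazy, y_clear):
    # CycleGAN-style constraint: dehazing with G then re-hazing with F
    # should reproduce the hazy input, and vice versa, so unpaired
    # hazy/clear images suffice for training.
    return l1(F(G(x_hazy)), x_hazy) + l1(G(F(y_clear)), y_clear)

# toy stand-in "generators": haze modelled as a global brightness shift
G = lambda img: img - 0.25   # remove haze
F = lambda img: img + 0.25   # add haze back

x = np.full((2, 2), 0.75)    # hazy image
y = np.full((2, 2), 0.5)     # clear image
print(cycle_consistency_loss(G, F, x, y))  # 0.0: the toy cycle is exact
```

A non-zero value of this loss penalizes any content the generators invent or discard, which is what preserves scene structure when no paired ground truth exists.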
A number of methods have been proposed for image rain removal, but most of them require stereo image pairs BIB001 , image sequences BIB002 , or motion-based images BIB008 . Eigen et al. BIB003 proposed a single-image rain removal method, which is limited to relatively sparse and small raindrops. To improve the accuracy of rain removal, and considering the outstanding performance of GANs on image inpainting and completion problems, a series of GAN-based methods have been applied to image rain removal. Qian et al. BIB014 tackled heavy raindrop removal from a single image using an attentive GAN. This method injects an attention map into both the generator and the discriminator: the generator produces the attention map through an attention-recurrent network and generates a raindrop-free image from it together with the input image, while the discriminator evaluates the validity of the generation both globally and locally. The rain removal results of BIB003 and BIB014 are shown in Fig. 4 . Nevertheless, this method is limited to raindrop removal and is not suitable for torrential rain. In heavy rain, strongly visible streaks and dense rain accumulation make the scene much less visible. Considering this, as well as the transferability from synthetic datasets to real-world images, Li et al. BIB024 introduced an integrated two-stage CNN that removes rain streaks and rain accumulation simultaneously. In the first, physics-based stage, a streak-aware decomposition module decomposes entangled rain streaks and rain accumulation to extract joint features. The second, refinement stage uses a cGAN that takes the reconstructed map of the previous stage as input and generates the final clean image. This method has achieved convincing results in both synthetic and real heavy-rain scenarios.
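The attention-recurrent idea in the attentive GAN can be sketched as a supervised attention loss: each recurrent step's attention map is compared against the binary raindrop mask, with later steps weighted more heavily by a decay factor. The NumPy sketch below follows that decaying-MSE formulation; the toy maps and the θ = 0.8 decay are illustrative assumptions, not the trained network.

```python
import numpy as np

def attention_loss(attn_maps, mask, theta=0.8):
    # Decaying MSE between each recurrent step's attention map and the
    # binary raindrop mask; later (more refined) steps count more because
    # theta ** (N - t) grows toward 1 as t approaches N.
    N = len(attn_maps)
    return float(sum(theta ** (N - t) * np.mean((A - mask) ** 2)
                     for t, A in enumerate(attn_maps, start=1)))

mask = np.array([[0.0, 1.0], [1.0, 0.0]])       # raindrop locations
steps = [np.full((2, 2), 0.5),                  # coarse first guess
         np.array([[0.1, 0.9], [0.9, 0.1]])]    # refined second step
print(round(attention_loss(steps, mask), 4))    # 0.21
```

The refined second step contributes 0.01 at full weight while the coarse first step contributes 0.25 damped by θ, illustrating why the recurrence is pushed to sharpen the map over time.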
To improve the stability of GANs and reduce the artifacts they introduce into output images, Zhang et al. BIB012 proposed an image de-raining conditional generative adversarial network (ID-CGAN), which uses a multi-scale discriminator that leverages features from different scales to determine whether a derained image comes from real data or is a generated one. ID-CGAN obtains satisfactory rain removal results on both synthetic datasets and real-world images. Jin et al. BIB025 observed that existing methods may over-smooth derained images, and therefore approached the problem from the perspective of feature disentanglement. They introduced an asynchronous interactive generative adversarial network (AI-GAN), which not only achieves good rain removal results but also has strong generalization capability, making it applicable to image/video encoding, action recognition, and person re-identification.
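The multi-scale discrimination used by ID-CGAN can be sketched as running one critic over an image pyramid, so that fine texture and global structure both inform the real/fake decision. The stand-in per-scale critic and the average-pooling downsampler below are illustrative assumptions; ID-CGAN's actual discriminator is a CNN with its own learned features at each scale.

```python
import numpy as np

def downsample(img):
    # 2x average pooling (assumes even spatial dimensions).
    return 0.25 * (img[::2, ::2] + img[1::2, ::2]
                   + img[::2, 1::2] + img[1::2, 1::2])

def multiscale_scores(critic, img, n_scales=3):
    # Apply the same critic at every level of an image pyramid; the
    # per-scale scores are then combined into the adversarial loss.
    scores = []
    for _ in range(n_scales):
        scores.append(critic(img))
        img = downsample(img)
    return scores

critic = lambda im: float(np.mean(im))   # stand-in per-scale critic
img = np.arange(16.0).reshape(4, 4)
print(multiscale_scores(critic, img, n_scales=2))  # [7.5, 7.5]
```

Averaging (or summing) the per-scale scores penalizes artifacts that are only visible at one resolution, which is the stabilizing effect the survey text describes.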
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image. <s> BIB001 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> Recent progress in computer vision has been driven by high-capacity models trained on large datasets. Unfortunately, creating large datasets with pixel-level labels has been extremely costly due to the amount of human effort required. In this paper, we present an approach to rapidly creating pixel-accurate semantic label maps for images extracted from modern computer games. 
Although the source code and the internal operation of commercial games are inaccessible, we show that associations between image patches can be reconstructed from the communication between the game and the graphics hardware. This enables rapid propagation of semantic labels within and across images synthesized by the game, with no access to the source code or the content. We validate the presented approach by producing dense pixel-level semantic annotations for 25 thousand images synthesized by a photorealistic open-world computer game. Experiments on semantic segmentation datasets show that using the acquired data to supplement real-world images significantly increases accuracy and that the acquired data enables reducing the amount of hand-labeled real-world data: models trained with game data and just \(\tfrac{1}{3}\) of the CamVid training set outperform models trained on the complete CamVid training set. <s> BIB002 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> Fully convolutional models for dense prediction have proven successful for a wide range of visual tasks. Such models perform well in a supervised setting, but performance can be surprisingly poor under domain shifts that appear mild to a human observer. For example, training on one city and testing on another in a different geographic region and/or weather condition may result in significantly degraded performance due to pixel-level distribution shift. In this paper, we introduce the first domain adaptive semantic segmentation method, proposing an unsupervised adversarial approach to pixel prediction problems. Our method consists of both global and category specific adaptation techniques. Global domain alignment is performed using a novel semantic segmentation network with fully convolutional domain adversarial learning. 
This initially adapted space then enables category specific adaptation through a generalization of constrained weak learning, with explicit transfer of the spatial layout from the source to the target domains. Our approach outperforms baselines across different settings on multiple large-scale datasets, including adapting across various real city environments, different synthetic sub-domains, from simulated to real environments, and on a novel large-scale dash-cam dataset. <s> BIB003 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> Adversarial training has been shown to produce state of the art results for generative image modeling. In this paper we propose an adversarial training approach to train semantic segmentation models. We train a convolutional semantic segmentation network along with an adversarial network that discriminates segmentation maps coming either from the ground truth or from the segmentation network. The motivation for our approach is that it can detect and correct higher-order inconsistencies between ground truth segmentation maps and the ones produced by the segmentation net. Our experiments show that our adversarial training approach leads to improved accuracy on the Stanford Background and PASCAL VOC 2012 datasets. <s> BIB004 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. 
We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either. <s> BIB005 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> Image semantic segmentation is more and more being of interest for computer vision and machine learning researchers. Many applications on the rise need accurate and efficient segmentation mechanisms: autonomous driving, indoor navigation, and even virtual or augmented reality systems to name a few. This demand coincides with the rise of deep learning approaches in almost every field or application target related to computer vision, including semantic segmentation or scene understanding. This paper provides a review on deep learning methods for semantic segmentation applied to various application areas. Firstly, we describe the terminology of this field as well as mandatory background concepts. Next, the main datasets and challenges are exposed to help researchers decide which are the ones that best suit their needs and their targets. Then, existing methods are reviewed, highlighting their contributions and their significance in the field. Finally, quantitative results are given for the described methods and the datasets in which they were evaluated, following up with a discussion of the results. 
At last, we point out a set of promising future works and draw our own conclusions about the state of the art of semantic segmentation using deep learning techniques. <s> BIB006 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without bells and whistles, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code has been made available at: this https URL <s> BIB007 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . 
The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet. <s> BIB008 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs.
However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach. <s> BIB009 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. 
We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains. <s> BIB010 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets. Code and additional results are available in this https URL . <s> BIB011 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> The recent advances in deep neural networks have convincingly demonstrated high capability in learning vision models on large datasets. Nevertheless, collecting expert labeled datasets especially with pixel-level annotations is an extremely expensive process. An appealing alternative is to render synthetic data (e.g., computer games) and generate ground truth automatically. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. 
In this paper, we facilitate this issue from the perspectives of both visual appearance-level and representation-level domain adaptation. The former adapts source-domain images to appear as if drawn from the "style" in the target domain and the latter attempts to learn domain-invariant representations. Specifically, we present Fully Convolutional Adaptation Networks (FCAN), a novel deep architecture for semantic segmentation which combines Appearance Adaptation Networks (AAN) and Representation Adaptation Networks (RAN). AAN learns a transformation from one domain to the other in the pixel space and RAN is optimized in an adversarial learning manner to maximally fool the domain discriminator with the learnt source and target representations. Extensive experiments are conducted on the transfer from GTA5 (game videos) to Cityscapes (urban street scenes) on semantic segmentation and our proposal achieves superior results when comparing to state-of-the-art unsupervised adaptation techniques. More remarkably, we obtain a new record: mIoU of 47.5% on BDDS (drive-cam videos) in an unsupervised setting. <s> BIB012 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> Most methods for object instance segmentation require all training examples to be labeled with segmentation masks. This requirement makes it expensive to annotate new categories and has restricted instance segmentation models to ~100 well-annotated classes. The goal of this paper is to propose a new partially supervised training paradigm, together with a novel weight transfer function, that enables training instance segmentation models on a large set of categories all of which have box annotations, but only a small fraction of which have mask annotations.
These contributions allow us to train Mask R-CNN to detect and segment 3000 visual concepts using box annotations from the Visual Genome dataset and mask annotations from the 80 classes in the COCO dataset. We evaluate our approach in a controlled study on the COCO dataset. This work is a first step towards instance segmentation models that have broad comprehension of the visual world. <s> BIB013 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0\% and 82.1\% without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at \url{https://github.com/tensorflow/models/tree/master/research/deeplab}. <s> BIB014 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. 
Semantic segmentation <s> We propose a general framework for unsupervised domain adaptation, which allows deep neural networks trained on a source domain to be tested on a different target domain without requiring any training annotations in the target domain. This is achieved by adding extra networks and losses that help regularize the features extracted by the backbone encoder network. To this end we propose the novel use of the recently proposed unpaired image-to-image translation framework to constrain the features extracted by the encoder network. Specifically, we require that the features extracted are able to reconstruct the images in both domains. In addition we require that the distribution of features extracted from images in the two domains are indistinguishable. Many recent works can be seen as specific cases of our general framework. We apply our method for domain adaptation between MNIST, USPS, and SVHN datasets, and Amazon, Webcam and DSLR Office datasets in classification tasks, and also between GTA5 and Cityscapes datasets for a segmentation task. We demonstrate state of the art performance on each of these datasets. <s> BIB015 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> In recent years, deep neural nets have triumphed over many computer vision problems, including semantic segmentation, which is a critical task in emerging autonomous driving and medical image diagnostics applications. In general, training deep neural nets requires a humongous amount of labeled data, which is laborious and costly to collect and annotate. Recent advances in computer graphics shed light on utilizing photo-realistic synthetic data with computer generated annotations to train neural nets. Nevertheless, the domain mismatch between real images and synthetic ones is the major challenge against harnessing the generated data and labels.
In this paper, we propose a principled way to conduct structured domain adaption for semantic segmentation, i.e., integrating GAN into the FCN framework to mitigate the gap between source and target domains. Specifically, we learn a conditional generator to transform features of synthetic images to real-image like features, and a discriminator to distinguish them. For each training batch, the conditional generator and the discriminator compete against each other so that the generator learns to produce real-image like features to fool the discriminator; afterwards, the FCN parameters are updated to accommodate the changes of GAN. In experiments, without using labels of real image data, our method significantly outperforms the baselines as well as state-of-the-art methods by 12% ~ 20% mean IoU on the Cityscapes dataset. <s> BIB016 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> Objects often occlude each other in scenes; Inferring their appearance beyond their visible parts plays an important role in scene understanding, depth estimation, object interaction and manipulation. In this paper, we study the challenging problem of completing the appearance of occluded objects. Doing so requires knowing which pixels to paint (segmenting the invisible parts of objects) and what color to paint them (generating the invisible parts). Our proposed novel solution, SeGAN, jointly optimizes for both segmentation and generation of the invisible parts of objects. Our experimental results show that: (a) SeGAN can learn to generate the appearance of the occluded parts of objects; (b) SeGAN outperforms state-of-the-art segmentation baselines for the invisible parts of objects; (c) trained on synthetic photo realistic images, SeGAN can reliably segment natural images; (d) by reasoning about occluder-occludee relations, our method can infer depth layering. 
<s> BIB017 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> The widespread adoption of autonomous systems such as drones and assistant robots has created a need for real-time high-quality semantic scene segmentation. In this paper, we propose an efficient yet robust technique for on-the-fly dense reconstruction and semantic segmentation of 3D indoor scenes. To guarantee (near) real-time performance, our method is built atop an efficient super-voxel clustering method and a conditional random field with higher-order constraints from structural and object cues, enabling progressive dense semantic segmentation without any precomputation. We extensively evaluate our method on different indoor scenes including kitchens, offices, and bedrooms in the SceneNN and ScanNet datasets and show that our technique consistently produces state-of-the-art segmentation results in both qualitative and quantitative experiments. <s> BIB018 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> This paper proposes a hierarchical depthwise graph convolutional neural network (HDGCN) for point cloud semantic segmentation. The main chanllenge for learning on point clouds is to capture local structures or relationships. Graph convolution has the strong ability to extract local shape information from neighbors. Inspired by depthwise convolution, we propose a depthwise graph convolution which requires less memory consumption compared with the previous graph convolution. While depthwise graph convolution aggregates features channel-wisely, pointwise convolution is used to learn features across different channels. A customized block called DGConv is specially designed for local feature extraction based on depthwise graph convolution and pointwise convolution. 
The DGConv block can extract features from points and transfer features to neighbors while being invariant to different point orders. HDGCN is constructed by a series of DGConv blocks using a hierarchical structure which can extract both local and global features of point clouds. Experiments show that HDGCN achieves the state-of-the-art performance in the indoor dataset S3DIS and the outdoor dataset Paris-Lille-3D. <s> BIB019 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> This research was supported by competitive funding from King Abdullah University of Science and Technology (KAUST). Further support was received by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/ Interior Business Center (DOI/IBC) contract number D17PC00280. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, ::: or the U.S. Government. <s> BIB020 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another (e.g., synthetic to real images). The adapted representations often do not capture pixel-level domain shifts that are crucial for dense prediction tasks (e.g., semantic segmentation). In this paper, we present a novel pixel-wise adversarial domain adaptation algorithm. By leveraging image-to-image translation methods for data augmentation, our key insight is that while the translated images between domains may differ in styles, their predictions for the task should be consistent. 
We exploit this property and introduce a cross-domain consistency loss that enforces our adapted model to produce consistent predictions. Through extensive experimental results, we show that our method compares favorably against the state-of-the-art on a wide variety of unsupervised domain adaptation tasks. <s> BIB021 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> We consider the problem of unsupervised domain adaptation in semantic segmentation. The key in this campaign consists in reducing the domain shift, i.e., enforcing the data distributions of the two domains to be similar. A popular strategy is to align the marginal distribution in the feature space through adversarial learning. However, this global alignment strategy does not consider the local category-level feature distribution. A possible consequence of the global movement is that some categories which are originally well aligned between the source and target may be incorrectly mapped. To address this problem, this paper introduces a category-level adversarial network, aiming to enforce local semantic consistency during the trend of global alignment. Our idea is to take a close look at the category-level data distribution and align each class with an adaptive adversarial loss. Specifically, we reduce the weight of the adversarial loss for category-level aligned features while increasing the adversarial force for those poorly aligned. In this process, we decide how well a feature is category-level aligned between source and target by a co-training approach. In two domain adaptation tasks, i.e., GTA5 -> Cityscapes and SYNTHIA -> Cityscapes, we validate that the proposed method matches the state of the art in segmentation accuracy. <s> BIB022 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. 
Semantic segmentation <s> Domain adaptation for semantic image segmentation is very necessary since manually labeling large datasets with pixel-level labels is expensive and time consuming. Existing domain adaptation techniques either work on limited datasets, or yield not so good performance compared with supervised learning. In this paper, we propose a novel bidirectional learning framework for domain adaptation of segmentation. Using the bidirectional learning, the image translation model and the segmentation adaptation model can be learned alternately and promote each other. Furthermore, we propose a self-supervised learning algorithm to learn a better segmentation adaptation model and in return improve the image translation model. Experiments show that our method is superior to the state-of-the-art methods in domain adaptation of segmentation by a big margin. The source code is available at https://github.com/liyunsheng13/BDL <s> BIB023 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> D. Semantic segmentation <s> Semantic information provides a valuable source for scene understanding around autonomous vehicles in order to plan their actions and make decisions. However, varying weather conditions reduce the accuracy of the semantic segmentation. We propose a method to adapt to varying weather conditions without supervision, namely without labeled data. We update the parameters of a deep neural network (DNN) model that is pre-trained on the known weather condition (source domain) to adapt it to the new weather conditions (target domain) without forgetting the segmentation in the known weather condition. Furthermore, we don’t require the labels from the source domain during adaptation training. The parameters of the DNN are optimized to reduce the distance between the distribution of the features from the images of old and new weather conditions.
To measure this distance, we propose three alternatives: W-GAN, GAN and maximum-mean discrepancy (MMD). We evaluate our method on various datasets with varying weather conditions. The results show that the accuracy of the semantic segmentation is improved for varying conditions after adaptation with the proposed method. <s> BIB024
In emerging autonomous systems, such as autonomous driving and indoor navigation, scene understanding relies on semantic segmentation. Semantic segmentation is a pixel-level prediction method that classifies each pixel into different categories corresponding to their labels, such as airplanes, cars, traffic signs, or even backgrounds BIB006 . In addition, instance segmentation combines semantic segmentation and object detection to further distinguish individual object instances in the scene BIB007 . Some traditional DL-based methods without adversarial learning have been proposed and have achieved good accuracy in semantic segmentation BIB008 , BIB012 and instance segmentation BIB007 , BIB013 . In practice, such pixel-level semantic annotations are usually expensive to obtain. Since the semantic labels of synthetic datasets are easy to obtain, it is helpful to perform semantic segmentation on labeled synthetic datasets and then transfer the results to real-world applications. Due to the domain shift between synthetic datasets and real-world images, it is worth exploring how to transfer a model trained on synthetic datasets to real-world images. With this in mind, adversarial learning is used to implement domain adaptation and improve the transferability of the model. Like other computer vision tasks in this review, the trend is now moving from improving accuracy to enhancing transferability. In this subsection, we focus on accuracy and transferability to review semantic segmentation and instance segmentation tasks, respectively. Traditional DL-based semantic segmentation. Traditional DL-based semantic segmentation algorithms are mainly based on end-to-end convolutional network frameworks. To the best of our knowledge, Long et al. BIB001 were the first to train an end-to-end fully convolutional network (FCN) for semantic segmentation.
The main insight is to replace fully connected layers with fully convolutional layers to output spatial maps. In addition, they defined a skip architecture to enhance the segmentation results. More importantly, the framework accepts input images of arbitrary size and produces correspondingly-sized output. This work is well-recognized as a milestone for semantic segmentation using DL. However, because the encoder network of this method has a large number of trainable parameters, the overall size of the network is large, which makes the FCN difficult to train. Badrinarayanan et al. BIB008 proposed SegNet, which has significantly fewer trainable parameters and can be trained in an end-to-end manner using SGD. SegNet is notable in that the decoder performs non-linear upsampling using the pooling indices computed in the max-pooling step of the corresponding encoder, which eliminates the need to learn upsampling. Building on encoder-decoder networks such as SegNet, DeepLab uses multi-scale contextual information to enrich semantic information. The DeepLab family comprises a series of semantic segmentation methods, such as DeepLabv3+ BIB014 , which combines a spatial pyramid pooling module and an encoder-decoder structure for semantic segmentation. In addition, depthwise separable convolution is applied to both the atrous spatial pyramid pooling and the decoder module to make the encoder-decoder network faster and stronger. The accuracy of unsupervised semantic segmentation is usually worse than that of supervised methods, while supervised semantic segmentation often requires a lot of manual labeling, which is very costly. Note that a synthetic dataset generated by computer simulation, such as Grand Theft Auto (GTA) BIB002 , can automatically provide a large number of semantic labels, which is very important for improving the accuracy of the semantic segmentation model.
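SegNet's index-based upsampling described above can be illustrated with a small NumPy sketch: the encoder stores the argmax position of each max-pooling window, and the decoder scatters pooled values back to exactly those positions instead of learning upsampling filters. The function names and the 2x2, stride-2 pooling setting are illustrative assumptions, not SegNet's actual implementation.

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    """2x2 max-pooling that also records the flat argmax index of each window."""
    h, w = x.shape
    ph, pw = h // k, w // k
    pooled = np.zeros((ph, pw))
    indices = np.zeros((ph, pw), dtype=int)  # flat index into the input map
    for i in range(ph):
        for j in range(pw):
            window = x[i*k:(i+1)*k, j*k:(j+1)*k]
            r, c = np.unravel_index(np.argmax(window), window.shape)
            pooled[i, j] = window[r, c]
            indices[i, j] = (i*k + r) * w + (j*k + c)
    return pooled, indices

def max_unpool(pooled, indices, out_shape):
    """SegNet-style decoder upsampling: scatter values to the stored argmax positions."""
    out = np.zeros(np.prod(out_shape))
    out[indices.ravel()] = pooled.ravel()
    return out.reshape(out_shape)

x = np.array([[1., 3., 2., 0.],
              [4., 2., 1., 5.],
              [0., 1., 2., 2.],
              [3., 0., 1., 0.]])
p, idx = max_pool_with_indices(x)
up = max_unpool(p, idx, x.shape)
```

In SegNet this happens per feature map inside each encoder-decoder pair; a single-channel array is enough to see that the upsampled map is non-zero only at the remembered argmax locations, so no upsampling weights need to be learned.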
However, due to the domain shift between the synthetic dataset and the real-world scene, it is necessary to consider domain adaptation in the semantic segmentation task. In order to address the domain gap problem and improve the transferability of the model, Hoffman et al. BIB003 proposed a domain adaptation framework with FCN for semantic segmentation, as shown in Fig. 5 . This method aligns both global and local features through specific adaptation techniques. It makes full use of the label information of the synthetic dataset and successfully transfers the results from the synthetic domain to real scenes, achieving satisfactory semantic segmentation results in practical applications. A similar combination of FCN with domain adaptation for semantic segmentation was presented by Zhang et al. BIB012 , who proposed fully convolutional adaptation networks (FCANs). The model combines appearance adaptation networks and representation adaptation networks to synthesize images for domain adaptation at both the visual appearance level and the representation level. Recent developments in semantic segmentation also involve 3D semantic segmentation BIB018 , BIB019 and 3D instance segmentation BIB020 .

Fig. 5 : Generative adversarial networks for semantic segmentation and multi-task learning. (a) CycleGAN for semantic segmentation BIB009 ; (b) qualitative results on adaptation from cities in SYNTHIA fall to cities in SYNTHIA winter BIB003 ; (c) multi-task learning, including semantic segmentation (top row), depth prediction (middle row), and optical flow estimation (bottom row) BIB021 .

Traditional DL-based instance segmentation. The more challenging task is instance segmentation, which combines both object detection and semantic segmentation BIB007 . Li et al.
first proposed an end-to-end fully convolutional method for instance-aware semantic segmentation. However, the method produced spurious edges on overlapping instances. He et al. BIB007 proposed Mask R-CNN, which is a classic instance segmentation algorithm. Mask R-CNN is easy to train, generalizes well to other tasks, and achieves breakthrough results in instance segmentation, bounding-box object detection, and person keypoint detection. This method includes two stages. The first stage proposes candidate object bounding boxes. In the second stage, the class prediction and the box offset regression run in parallel, and the network outputs a binary mask for each region of interest (RoI). Mask R-CNN implements instance segmentation in a supervised manner, which requires expensive semantic labels. In view of this, Hu et al. BIB013 proposed a solution to large-scale instance segmentation by developing a partially supervised learning paradigm, in which only a small part of the training data had instance masks, while the rest had box annotations. This method has demonstrated exciting new research directions in large-scale instance segmentation. GANs-based semantic segmentation. GANs are flexible enough to reduce the differences between the segmentation result and the ground truth, and can further improve the accuracy of semantic segmentation without manual labeling in some cases BIB004 . As for using GANs for semantic segmentation tasks, the typical methods are Pix2Pix BIB005 and CycleGAN BIB009 . The semantic segmentation result for CycleGAN is shown in Fig. 5 . There are several variants based on Pix2Pix and CycleGAN, such as BIB010 , BIB011 , BIB015 , etc. These methods not only achieve satisfactory results in image style transfer, but also work well in semantic segmentation.
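The cycle-consistency loss at the heart of CycleGAN and its variants can be stated in a few lines: with a forward generator G: X→Y and an inverse generator F: Y→X, an L1 penalty pulls F(G(x)) back toward x and G(F(y)) back toward y. The toy scaling "generators" below are stand-ins chosen only to make the loss concrete; real CycleGAN generators are deep networks.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L_cyc = E[||F(G(x)) - x||_1] + E[||G(F(y)) - y||_1], as in CycleGAN."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

# Toy generators: G doubles intensities, F halves them -> a perfect cycle.
G = lambda img: 2.0 * img
F = lambda img: 0.5 * img

x = np.random.rand(8, 8)   # "source-domain" image
y = np.random.rand(8, 8)   # "target-domain" image
loss = cycle_consistency_loss(x, y, G, F)   # exactly 0.0 for this invertible pair
```

For the invertible toy pair the loss vanishes; replacing F with anything that does not undo G makes it positive, and it is exactly this penalty that keeps the learned mapping from collapsing when no paired data is available.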
Most adversarial domain-adaptive semantic segmentation methods that build on CycleGAN and Pix2Pix improve training stability and transferability by refining loss functions or network layers. Hong et al. BIB016 proposed a cGAN-based method for semantic segmentation. The network integrates a cGAN into the FCN framework to reduce the gap between source and target domains. In practical tasks, objects often appear occluded, which brings great challenges to the perception tasks of autonomous systems. To solve this problem, Ehsani et al. BIB017 proposed SeGAN, which jointly generates the appearance and segmentation mask for both invisible and visible regions of objects. Different from global alignment strategies like CycleGAN, Luo et al. BIB022 further considered the joint distribution at the category level. They proposed a category-level adversarial network (CLAN) to enforce local semantic consistency during global feature alignment. Note that traditional semantic segmentation methods may suffer from the unsatisfactory quality of image-to-image translation. Once the image-to-image translation fails, nothing can be done to obtain satisfactory results in the subsequent semantic segmentation stage. Li et al. BIB023 tackled this problem by introducing a bidirectional learning framework with self-supervised learning, in which the translation and segmentation adaptation models promote each other in a closed loop. The segmentation adaptation model was trained on both synthetic and real datasets, which improved the segmentation performance on real datasets. In addition, Erkent et al. BIB024 considered a semantic segmentation method that adapts to different weather conditions, achieving satisfactory accuracy without the need to label the weather conditions of the source or target domain.
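The category-level idea in CLAN described above can be sketched as follows: two co-trained classifiers produce per-pixel class distributions, and the adversarial loss is reweighted by their disagreement, so well-aligned pixels receive less adversarial force and poorly aligned ones more. The cosine-distance weighting and the toy discriminator loss below are simplified assumptions of this sketch, not the authors' exact formulation.

```python
import numpy as np

def category_level_weights(p1, p2, eps=1e-8):
    """Per-pixel weight = cosine distance between two classifiers' predictions.
    p1, p2: (H, W, C) softmax maps from the two co-trained classifiers.
    Returns ~0 where the classifiers agree, larger where they disagree."""
    dot = (p1 * p2).sum(axis=-1)
    norm = np.linalg.norm(p1, axis=-1) * np.linalg.norm(p2, axis=-1) + eps
    return 1.0 - dot / norm

def weighted_adv_loss(d_out, weights):
    """Adversarial loss per pixel, scaled up where category alignment is poor.
    d_out: (H, W) discriminator outputs in (0, 1)."""
    return (weights * -np.log(d_out + 1e-8)).mean()

# Demo: identical one-hot predictions -> near-zero weights, i.e. little extra force.
p_a = np.zeros((4, 4, 3)); p_a[..., 0] = 1.0
w = category_level_weights(p_a, p_a.copy())
```

The co-training discrepancy thus acts as a per-pixel "how aligned is this category" signal, replacing the single global adversarial weight used in marginal alignment methods.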
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. Depth estimation <s> Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web. <s> BIB001 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. Depth estimation <s> Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. 
By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation. <s> BIB002 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. Depth estimation <s> In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. 
Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches. <s> BIB003 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. Depth estimation <s> In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks. <s> BIB004 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. Depth estimation <s> Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). 
Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach. <s> BIB005 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. Depth estimation <s> Supervised deep learning methods have shown promising results for the task of monocular depth estimation; but acquiring ground truth is costly, and prone to noise as well as inaccuracies. While synthetic datasets have been used to circumvent above problems, the resultant models do not generalize well to natural scenes due to the inherent domain shift. Recent adversarial approaches for domain adaption have performed well in mitigating the differences between the source and target domains. But these methods are mostly limited to a classification setup and do not scale well for fully-convolutional architectures. In this work, we propose AdaDepth - an unsupervised domain adaptation strategy for the pixel-wise regression task of monocular depth estimation. The proposed approach is devoid of above limitations through a) adversarial learning and b) explicit imposition of content consistency on the adapted target representation. Our unsupervised approach performs competitively with other established approaches on depth estimation tasks and achieves state-of-the-art results in a semi-supervised setting. <s> BIB006 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. Depth estimation <s> Monocular depth estimation using learning-based approaches has become promising in recent years. 
However, most monocular depth estimators either need to rely on large quantities of ground truth depth data, which is extremely expensive and difficult to obtain, or predict disparity as an intermediary step using a secondary supervisory signal leading to blurring and other artefacts. Training a depth estimation model using pixel-perfect synthetic data can resolve most of these issues but introduces the problem of domain bias. This is the inability to apply a model trained on synthetic data to real-world scenarios. With advances in image style transfer and its connections with domain adaptation (Maximum Mean Discrepancy), we take advantage of style transfer and adversarial training to predict pixel perfect depth from a single real-world color image based on training over a large corpus of synthetic environment data. Experimental results indicate the efficacy of our approach compared to contemporary state-of-the-art techniques. <s> BIB007 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. Depth estimation <s> Supervised depth estimation has achieved high accuracy due to the advanced deep network architectures. Since the groundtruth depth labels are hard to obtain, recent methods try to learn depth estimation networks in an unsupervised way by exploring unsupervised cues, which are effective but less reliable than true labels. An emerging way to resolve this dilemma is to transfer knowledge from synthetic images with ground truth depth via domain adaptation techniques. However, these approaches overlook specific geometric structure of the natural images in the target domain (i.e., real data), which is important for high-performing depth prediction. Motivated by the observation, we propose a geometry-aware symmetric domain adaptation framework (GASDA) to explore the labels in the synthetic data and epipolar geometry in the real data jointly. 
Moreover, by training two image style translators and depth estimators symmetrically in an end-to-end network, our model achieves better image style transfer and generates high-quality depth maps. The experimental results demonstrate the effectiveness of our proposed method and comparable performance against the state-of-the-art. <s> BIB008 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. Depth estimation <s> Pixel-wise losses, e.g., cross-entropy or L2, have been widely used in structured prediction tasks as a spatial extension of generic image classification or regression. However, its i.i.d. assumption neglects the structural regularity present in natural images. Various attempts have been made to incorporate structural reasoning mostly through structure priors in a cooperative way where co-occurring patterns are encouraged. We, on the other hand, approach this problem from an opposing angle and propose a new framework, Adversarial Structure Matching (ASM), for training such structured prediction networks via an adversarial process, in which we train a structure analyzer that provides the supervisory signals, the ASM loss. The structure analyzer is trained to maximize the ASM loss, or to emphasize recurring multi-scale hard negative structural mistakes among co-occurring patterns. On the contrary, the structured prediction network is trained to reduce those mistakes and is thus enabled to distinguish fine-grained structures. As a result, training structured prediction networks using ASM reduces contextual confusion among objects and improves boundary localization. We demonstrate that our ASM outperforms pixel-wise IID loss or structural prior GAN loss on three different structured prediction tasks: semantic segmentation, monocular depth estimation, and surface normal prediction. <s> BIB009 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. 
Depth estimation <s> Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another (e.g., synthetic to real images). The adapted representations often do not capture pixel-level domain shifts that are crucial for dense prediction tasks (e.g., semantic segmentation). In this paper, we present a novel pixel-wise adversarial domain adaptation algorithm. By leveraging image-to-image translation methods for data augmentation, our key insight is that while the translated images between domains may differ in styles, their predictions for the task should be consistent. We exploit this property and introduce a cross-domain consistency loss that enforces our adapted model to produce consistent predictions. Through extensive experimental results, we show that our method compares favorably against the state-of-the-art on a wide variety of unsupervised domain adaptation tasks. <s> BIB010 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. Depth estimation <s> We present GLNet, a self-supervised framework for learning depth, optical flow, camera pose and intrinsic parameters from monocular video -- addressing the difficulty of acquiring realistic ground-truth for such tasks. We propose three contributions: 1) we design new loss functions that capture multiple geometric constraints (eg. epipolar geometry) as well as adaptive photometric loss that supports multiple moving objects, rigid and non-rigid, 2) we extend the model such that it predicts camera intrinsics, making it applicable to uncalibrated video, and 3) we propose several online refinement strategies that rely on the symmetry of our self-supervised loss in training and testing, in particular optimizing model parameters and/or the output of different tasks, leveraging their mutual interactions. 
The idea of jointly optimizing the system output, under all geometric and photometric constraints can be viewed as a dense generalization of classical bundle adjustment. We demonstrate the effectiveness of our method on KITTI and Cityscapes, where we outperform previous self-supervised approaches on multiple tasks. We also show good generalization for transfer learning. <s> BIB011 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. Depth estimation <s> We present a novel method for simultaneous learning of depth, egomotion, object motion, and camera intrinsics from monocular videos, using only consistency across neighboring video frames as supervision signal. Similarly to prior work, our method learns by applying differentiable warping to frames and comparing the result to adjacent ones, but it provides several improvements: We address occlusions geometrically and differentiably, directly using the depth maps as predicted during training. We introduce randomized layer normalization, a novel powerful regularizer, and we account for object motion relative to the scene. To the best of our knowledge, our work is the first to learn the camera intrinsic parameters, including lens distortion, from video in an unsupervised manner, thereby allowing us to extract accurate depth and motion from arbitrary videos of unknown origin at scale. We evaluate our results on the Cityscapes, KITTI and EuRoC datasets, establishing new state of the art on depth prediction and odometry, and demonstrate qualitatively that depth prediction can be learned from a collection of YouTube videos. <s> BIB012 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> E. Depth estimation <s> Leveraging synthetically rendered data offers great potential to improve monocular depth estimation, but closing the synthetic-real domain gap is a non-trivial and important task. 
While much recent work has focused on unsupervised domain adaptation, we consider a more realistic scenario where a large amount of synthetic training data is supplemented by a small set of real images with ground-truth. In this setting we find that existing domain translation approaches are difficult to train and offer little advantage over simple baselines that use a mix of real and synthetic data. A key failure mode is that real-world images contain novel objects and clutter not present in synthetic training. This high-level domain shift isn't handled by existing image translation models. Based on these observations, we develop an attentional module that learns to identify and remove (hard) out-of-domain regions in real images in order to improve depth prediction for a model trained primarily on synthetic data. We carry out extensive experiments to validate our attend-remove-complete approach (ARC) and find that it significantly outperforms state-of-the-art domain adaptation methods for depth prediction. Visualizing the removed regions provides interpretable insights into the synthetic-real domain gap. <s> BIB013
Depth estimation is an important task that helps autonomous systems understand the 3D geometry of environments at a high level. A series of classical and learning-based methods were proposed to estimate depth based on motion or stereo images BIB001 , but they are computationally expensive. As is widely known, due to the lack of complete 3D scene information, estimating depth from a single image is an ill-posed task BIB003 . For monocular depth estimation tasks, a series of traditional DL-based algorithms without adversarial learning have been proposed to improve the accuracy of the model. However, considering that it is expensive to collect well-annotated datasets for depth estimation, it is appealing to use adversarial learning methods, like GANs, to achieve domain adaptation from synthetic datasets to real-world images. Such adaptive methods improve the transferability of the model, so that a model trained on a synthetic dataset transfers well to real-world images. Here, we introduce traditional DL-based depth estimation frameworks, as well as methods that improve the transferability of depth estimation models by introducing adversarial learning. Traditional DL-based depth estimation. Traditional DL-based depth estimation methods mainly focus on improving the accuracy of the results by using deep convolutional frameworks. Eigen et al. BIB002 first proposed using a neural network to estimate depth from a single image in an end-to-end manner, showing for the first time that it is promising for neural networks to estimate depth from a single image. This framework consists of two components: the first roughly estimates the global depth structure, and the second refines this global prediction using local information. Considering the continuous property of monocular depth values, depth estimation can be formulated as a continuous conditional random field (CRF) learning problem. Liu et al.
BIB003 presented a deep convolutional neural field model for single monocular depth estimation, which combined a deep CNN and a continuous CRF. This method achieved good results on both indoor and outdoor datasets. In order to reduce the dependence on supervised signals and improve transferability between different domains, unsupervised domain adaptation methods for depth estimation were presented in BIB006 . GANs-based depth estimation. For the depth estimation task, it is too expensive to collect well-annotated image datasets. An appealing alternative is to use unsupervised domain adaptation via GANs to transfer from synthetic datasets to real-world images. Atapour-Abarghouei et al. BIB007 took advantage of adversarial domain adaptation to train a depth estimation model in a synthetic city environment and transferred it to real scenes. The framework consists of two stages. In the first stage, a depth estimation model is trained with the dataset captured in the virtual environment. In the second stage, the proposed method translates between synthetic-style and real-world images to reduce the domain discrepancy. Although this method considers the transfer from a synthetic city environment to real-world scenes, it ignores the specific geometric structure of the images in the target domain, which is important for improving the accuracy of depth estimation. Motivated by this problem, Zhao et al. BIB008 proposed a geometry-aware symmetric domain adaptation network (GASDA), which produces high-quality results for both image style transfer and depth estimation. GASDA is based on CycleGAN BIB005 and performs synthetic-to-realistic and realistic-to-synthetic translations simultaneously, with a geometric consistency loss on real stereo images. Zhao et al. BIB013 further considered high-level domain shift, that is, mixing a large number of synthetic images with a small number of real-world images.
They proposed the attend-remove-complete (ARC) method, which learns to attend to, remove, and complete challenging regions. The ARC method can ultimately make good use of synthetic data to generate accurate depth estimates. Depth estimation via joint task learning. Each pixel in an image usually carries surface normal orientation and semantic label information, and surface normal estimation, semantic segmentation, and depth estimation are all related to the geometry of objects, which makes it possible to train these structured prediction tasks in a consistent manner. Eigen et al. BIB004 developed a more general network for depth estimation and applied it to other computer vision tasks, such as surface normal estimation and per-pixel semantic labeling. Eigen et al. used the same framework for depth estimation, surface normal estimation, and semantic segmentation simultaneously, improving the framework of BIB002 by considering a third scale at a higher resolution. To improve the transferability of the model, Hwang et al. BIB009 proposed adversarial structure matching (ASM), which trains a structured prediction network through an adversarial process. This method achieved promising results on monocular depth estimation, semantic segmentation, and surface normal prediction. To improve accuracy, Chen et al. BIB010 embedded pixel-level domain adaptation into the depth estimation task. Specifically, they proposed CrDoCo, a pixel-level adversarial domain adaptation algorithm for dense prediction tasks. The core idea of this method is that although the image styles of the two domains may differ during domain transfer, the task predictions (e.g., depth estimation) should remain consistent. Since CrDoCo is a pixel-level framework for dense prediction, it can be applied to semantic segmentation, depth prediction, and optical flow estimation, as shown in Fig. 5 .
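CrDoCo's cross-domain consistency idea can be sketched directly from the description above: the task network's per-pixel predictions on an image and on its translated counterpart are compared, and any disagreement is penalized. Treating the penalty as a symmetric KL divergence over (H, W, C) softmax maps is a simplifying assumption of this sketch, not necessarily the paper's exact loss.

```python
import numpy as np

def kl_div(p, q, eps=1e-8):
    """Per-pixel KL(p || q) over the class dimension of (H, W, C) maps."""
    return (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)

def cross_domain_consistency_loss(pred_src, pred_src2tgt):
    """Predictions on a source image and on its target-style translation
    should agree: symmetric KL over the per-pixel class distributions."""
    return (kl_div(pred_src, pred_src2tgt) + kl_div(pred_src2tgt, pred_src)).mean()
```

Because the loss compares dense per-pixel distributions rather than a global image label, the same construction applies unchanged to segmentation, depth, and optical flow heads.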
Other developments that consider optical flow, camera pose and camera intrinsic parameters from monocular video for depth estimation can be found in BIB011 . By modeling the intrinsic parameters of the camera, similar to BIB012 , accurate depth information can be extracted from arbitrary videos.
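Knowing the intrinsics is what lets a predicted depth map be lifted to 3D geometry. A small sketch of the standard pinhole back-projection (the intrinsic matrix K below is an arbitrary example):

```python
import numpy as np

def backproject(u, v, depth, K):
    """Lift pixel (u, v) with predicted depth to a 3D point in camera coordinates."""
    fx, fy = K[0, 0], K[1, 1]      # focal lengths in pixels
    cx, cy = K[0, 2], K[1, 2]      # principal point
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
p = backproject(320.0, 240.0, 2.0, K)  # principal point lies on the optical axis
```

Methods that estimate intrinsics jointly with depth, as in BIB012, effectively learn fx, fy, cx, cy so this lifting stays valid for videos with unknown cameras.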
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> F. Person re-ID <s> In this paper, we address the problem of detecting pedestrians in crowded real-world scenes with severe overlaps. Our basic premise is that this problem is too difficult for any type of model or feature alone. Instead, we present an algorithm that integrates evidence in multiple iterations and from different sources. The core part of our method is the combination of local and global cues via probabilistic top-down segmentation. Altogether, this approach allows examining and comparing object hypotheses with high precision down to the pixel level. Qualitative and quantitative results on a large data set confirm that our method is able to reliably detect pedestrians in crowded scenes, even when they overlap and partially occlude each other. In addition, the flexible nature of our approach allows it to operate on very small training sets. <s> BIB001 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> F. Person re-ID <s> Recent advances in pedestrian detection are attained by transferring the learned features of Convolutional Neural Network (ConvNet) to pedestrians. This ConvNet is typically pre-trained with massive general object categories (e.g. ImageNet). Although these features are able to handle variations such as poses, viewpoints, and lightings, they may fail when pedestrian images with complex occlusions are present. Occlusion handling is one of the most important problem in pedestrian detection. Unlike previous deep models that directly learned a single detector for pedestrian detection, we propose DeepParts, which consists of extensive part detectors. DeepParts has several appealing properties. First, DeepParts can be trained on weakly labeled data, i.e. only pedestrian bounding boxes without part annotations are provided. Second, DeepParts is able to handle low IoU positive proposals that shift away from ground truth. 
Third, each part detector in DeepParts is a strong detector that can detect pedestrians by observing only a part of a proposal. Extensive experiments on the Caltech dataset demonstrate the effectiveness of DeepParts, which yields a new state-of-the-art miss rate of 11.89%, outperforming the second best method by 10%. <s> BIB002 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> F. Person re-ID <s> Label estimation is an important component in an unsupervised person re-identification (re-ID) system. This paper focuses on cross-camera label estimation, which can be subsequently used in feature learning to learn robust re-ID models. Specifically, we propose to construct a graph for samples in each camera, and then a graph matching scheme is introduced for cross-camera labeling association. While labels directly output from existing graph matching methods may be noisy and inaccurate due to significant cross-camera variations, this paper proposes a dynamic graph matching (DGM) method. DGM iteratively updates the image graph and the label estimation process by learning a better feature space with intermediate estimated labels. DGM is advantageous in two aspects: 1) the accuracy of estimated labels is improved significantly with the iterations; 2) DGM is robust to noisy initial training data. Extensive experiments conducted on three benchmarks including the large-scale MARS dataset show that DGM yields competitive performance to fully supervised baselines, and outperforms competing unsupervised learning methods. <s> BIB003 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> F. Person re-ID <s> Most existing person re-identification (re-id) methods require supervised model learning from a separate large set of pairwise labelled training data for every single camera pair.
This significantly limits their scalability and usability in real-world large-scale deployments with the need for performing re-id across many camera views. To address this scalability problem, we develop a novel deep learning method for transferring the labelled information of an existing dataset to a new unseen (unlabelled) target domain for person re-id without any supervised learning in the target domain. Specifically, we introduce a Transferable Joint Attribute-Identity Deep Learning (TJ-AIDL) for simultaneously learning an attribute-semantic and identity-discriminative feature representation space transferable to any new (unseen) target domain for re-id tasks without the need for collecting new labelled training data from the target domain (i.e. unsupervised learning in the target domain). Extensive comparative evaluations validate the superiority of this new TJ-AIDL model for unsupervised person re-id over a wide range of state-of-the-art methods on four challenging benchmarks including VIPeR, PRID, Market-1501, and DukeMTMC-ReID. <s> BIB004 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> F. Person re-ID <s> The superiority of deeply learned pedestrian representations has been reported in very recent literature of person re-identification (re-ID). In this article, we consider the more pragmatic issue of learning a deep feature with no or only a few labels. We propose a progressive unsupervised learning (PUL) method to transfer pretrained deep representations to unseen domains. Our method is easy to implement and can be viewed as an effective baseline for unsupervised re-ID feature learning. Specifically, PUL iterates between (1) pedestrian clustering and (2) fine-tuning of the convolutional neural network (CNN) to improve the initialization model trained on the irrelevant labeled dataset. Since the clustering results can be very noisy, we add a selection operation between the clustering and fine-tuning.
At the beginning, when the model is weak, the CNN is fine-tuned on a small number of reliable examples that lie near cluster centroids in the feature space. As the model becomes stronger, in subsequent iterations, more images are adaptively selected as CNN training samples. Progressively, pedestrian clustering and the CNN model are improved simultaneously until algorithm convergence. This process is naturally formulated as self-paced learning. We then point out promising directions that may lead to further improvement. Extensive experiments on three large-scale re-ID datasets demonstrate that PUL outputs discriminative features that improve the re-ID accuracy. Our code has been released at https://github.com/hehefan/Unsupervised-Person-Re-identification-Clustering-and-Fine-tuning. <s> BIB005 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> F. Person re-ID <s> Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In our attempt, we present a "learning via translation" framework. In the baseline, we translate the labeled images from source to target domain in an unsupervised manner. We then train re-ID models with the translated images by supervised methods. Yet, being an essential part of this framework, unsupervised image-image translation suffers from the information loss of source-domain labels during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be maintained after translation. Second, given the fact that two domains have entirely different persons, a translated image should be dissimilar to any of the target IDs. To this end, we propose to preserve two types of unsupervised similarities, 1) self-similarity of an image before and after translation, and 2) domain-dissimilarity of a translated source image and a target image.
Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN) which consists of a Siamese network and a CycleGAN. Through domain adaptation experiments, we show that images generated by SPGAN are more suitable for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets. <s> BIB006 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> F. Person re-ID <s> Being a cross-camera retrieval task, person re-identification suffers from image style variations caused by different cameras. The art implicitly addresses this problem by learning a camera-invariant descriptor subspace. In this paper, we explicitly consider this challenge by introducing camera style (CamStyle) adaptation. CamStyle can serve as a data augmentation approach that smooths the camera style disparities. Specifically, with CycleGAN, labeled training images can be style-transferred to each camera, and, along with the original training samples, form the augmented training set. This method, while increasing data diversity against over-fitting, also incurs a considerable level of noise. In the effort to alleviate the impact of noise, the label smooth regularization (LSR) is adopted. The vanilla version of our method (without LSR) performs reasonably well on few-camera systems in which over-fitting often occurs. With LSR, we demonstrate consistent improvement in all systems regardless of the extent of over-fitting. We also report competitive accuracy compared with the state of the art. Code is available at: https://github.com/zhunzhong07/CamStyle <s> BIB007 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> F. Person re-ID <s> We study the problem of unsupervised domain adaptive re-identification (re-ID) which is an active topic in computer vision but lacks a theoretical foundation.
We first extend existing unsupervised domain adaptive classification theories to re-ID tasks. Concretely, we introduce some assumptions on the extracted feature space and then derive several loss functions guided by these assumptions. To optimize them, a novel self-training scheme for unsupervised domain adaptive re-ID tasks is proposed. It iteratively makes guesses for unlabeled target data based on an encoder and trains the encoder based on the guessed labels. Extensive experiments on unsupervised domain adaptive person re-ID and vehicle re-ID tasks with comparisons to the state of the art confirm the effectiveness of the proposed theories and self-training framework. Our code is available on https://github.com/LcDog/DomainAdaptiveReID . <s> BIB008 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> F. Person re-ID <s> Recent deep learning based person re-identification approaches have steadily improved the performance on benchmarks; however, they often fail to generalize well from one domain to another. In this work, we propose a novel adaptive transfer network (ATNet) for effective cross-domain person re-identification. ATNet looks into the essential causes of the domain gap and addresses it following the principle of "divide-and-conquer". It decomposes the complicated cross-domain transfer into a set of factor-wise sub-transfers, each of which concentrates on style transfer with respect to a certain imaging factor, e.g., illumination, resolution and camera view etc. An adaptive ensemble strategy is proposed to fuse factor-wise transfers by perceiving the effect magnitudes of various factors on images. Such a "decomposition-and-ensemble" strategy gives ATNet the capability of precise style transfer at factor level and eventually effective transfer across domains.
In particular, ATNet consists of a transfer network composed of multiple factor-wise CycleGANs and an ensemble CycleGAN as well as a selection network that infers the effects of different factors on transferring each image. Extensive experimental results on three widely-used datasets, i.e., Market-1501, DukeMTMC-reID and PRID2011 have demonstrated the effectiveness of the proposed ATNet with significant performance improvements over state-of-the-art methods. <s> BIB009 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> F. Person re-ID <s> Video person re-identification (re-ID) plays an important role in surveillance video analysis. However, the performance of video re-ID degenerates severely under partial occlusion. In this paper, we propose a novel network, called Spatio-Temporal Completion network (STCnet), to explicitly handle the partial occlusion problem. Different from most previous works that discard the occluded frames, STCnet can recover the appearance of the occluded parts. For one thing, the spatial structure of a pedestrian frame can be used to predict the occluded body parts from the unoccluded body parts of this frame. For another, the temporal patterns of a pedestrian sequence provide important clues to generate the contents of occluded parts. With the Spatio-temporal information, STCnet can recover the appearance for the occluded parts, which could be leveraged with those unoccluded parts for more accurate video re-ID. By combining a re-ID network with STCnet, a video re-ID framework robust to partial occlusion (VRSTC) is proposed. Experiments on three challenging video re-ID databases demonstrate that the proposed approach outperforms the state-of-the-art. <s> BIB010 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> F.
Person re-ID <s> Variations in visual factors such as viewpoint, pose, illumination and background, are usually viewed as important challenges in person re-identification (re-ID). In spite of acknowledging these factors to be influential, quantitative studies on how they affect a re-ID system are still lacking. To derive insights in this scientific campaign, this paper makes an early attempt in studying a particular factor, viewpoint. We narrow the viewpoint problem down to the pedestrian rotation angle to obtain focused conclusions. In this regard, this paper makes two contributions to the community. First, we introduce a large-scale synthetic data engine, PersonX. Composed of hand-crafted 3D person models, the salient characteristic of this engine is “controllable”. That is, we are able to synthesize pedestrians by setting the visual variables to arbitrary values. Second, on the 3D data engine, we quantitatively analyze the influence of pedestrian rotation angle on re-ID accuracy. Comprehensively, the person rotation angles are precisely customized from 0 to 360, allowing us to investigate its effect on the training, query, and gallery sets. Extensive experiment helps us have a deeper understanding of the fundamental problems in person re-ID. Our research also provides useful insights for dataset building and future practical usage, e.g., a person of a side view makes a better query. <s> BIB011
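The selection operation of PUL BIB005 described above keeps, for fine-tuning, only the samples that lie close to their assigned cluster centroid. A minimal numpy sketch of that step (the features, centroids and distance threshold are toy values):

```python
import numpy as np

def select_reliable(features, centroids, labels, threshold):
    """Keep samples whose distance to their assigned centroid is below threshold."""
    d = np.linalg.norm(features - centroids[labels], axis=1)
    return d < threshold

feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [0.9, 1.0]])
cents = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 0, 1, 1])
mask = select_reliable(feats, cents, labels, threshold=0.5)
# the outlier at (5, 5) is excluded from this round of fine-tuning
```

In PUL the threshold is loosened over iterations, so more samples are admitted as the model grows stronger, which is the self-paced aspect of the method.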
Fig. 6: UAV navigation via a DRL algorithm for indoor flying, which is entirely trained in a simulated 3D CAD model and generalized to a real indoor flight environment .
In recent years, pedestrian detection has been widely studied in autonomous systems, especially for autonomous driving and robot movement BIB001 , BIB002 . A similar but more difficult task than pedestrian detection, person re-identification (re-ID) requires matching pedestrians across disjoint camera views. At present, there are several learning-based methods focusing on person re-ID BIB003 , BIB004 , BIB005 . However, these methods have poor transferability, that is, person re-ID models trained on one domain usually fail to generalize well to another domain. Considering that CycleGAN shows great transferability using unpaired images, Deng et al. BIB006 introduced the similarity preserving cycle-consistent generative adversarial network (SPGAN), an unsupervised domain adaptation approach that generates samples which not only have the target-domain style but also preserve the underlying ID information. This method showed that applying domain adaptation to person re-ID can achieve competitive accuracy. Considering data augmentation across different cameras, Zhong et al. BIB007 introduced camera style (CamStyle) adaptation. CamStyle smooths camera style disparities by transferring labeled training images to the style of each camera to augment the training set. CamStyle helps to learn camera-invariant pedestrian descriptors, which improves re-ID accuracy. The above approaches, like SPGAN BIB006 and CamStyle BIB007 , treated the domain gap as a black box and attempted to bridge it with a single style transformer. Liu et al. BIB009 proposed a novel adaptive transfer network (ATNet), which investigates the root causes of the domain gap.
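SPGAN and ATNet both build on CycleGAN-style translation, whose core constraint is cycle consistency: translating an image to the other domain and back should recover it. A minimal numpy sketch, with invertible linear maps standing in for the two learned generators:

```python
import numpy as np

def cycle_consistency_loss(x, g_s2r, g_r2s):
    """L1 cycle loss: translating source -> target -> source should recover x."""
    return float(np.mean(np.abs(g_r2s(g_s2r(x)) - x)))

# Toy stand-in "generators": an invertible linear map and its inverse
# (the real generators are CNNs trained jointly with discriminators).
g_s2r = lambda x: 2.0 * x + 1.0
g_r2s = lambda x: (x - 1.0) / 2.0

x = np.arange(16.0).reshape(4, 4) / 16.0
loss = cycle_consistency_loss(x, g_s2r, g_s2r)   # wrong pair: large cycle error
good = cycle_consistency_loss(x, g_s2r, g_r2s)   # near 0 for an inverse pair
```

SPGAN adds its similarity-preserving constraints on top of this loss, and ATNet trains one such cycle per imaging factor before fusing them.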
ATNet realizes domain transfer for person re-ID by decomposing complicated cross-domain transfers and transferring features through separate sub-GANs. Recently, Song et al. BIB008 theoretically analyzed unsupervised domain adaptive re-ID, bridging the gap between unsupervised domain adaptation theory and the re-ID task. Recent developments in person re-ID also involve considering occluded parts BIB010 and different visual factors such as viewpoint, pose, illumination, and background BIB011 .
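The self-training scheme of BIB008 alternates between guessing pseudo-labels for unlabeled target data and retraining on those guesses. A minimal numpy sketch, with nearest-centroid assignment and centroid refitting standing in for the encoder update:

```python
import numpy as np

def self_train(features, centroids, iters=5):
    """Alternate label guessing (nearest centroid) and model update (refit centroids)."""
    labels = None
    for _ in range(iters):
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)                 # guess pseudo-labels
        for k in range(len(centroids)):           # "retrain" on the guessed labels
            if np.any(labels == k):
                centroids[k] = features[labels == k].mean(axis=0)
    return labels, centroids

feats = np.array([[0.0], [0.2], [4.0], [4.2]])
labels, cents = self_train(feats, centroids=np.array([[1.0], [3.0]]))
```

In the actual framework a deep encoder is trained on the guessed labels instead of refitting centroids, but the alternation between label guessing and model update is the same.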
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. Robot Navigation <s> The ability to act in multiple environments and transfer previous knowledge to new situations can be considered a critical aspect of any intelligent agent. Towards this goal, we define a novel method of multitask and transfer learning that enables an autonomous agent to learn how to behave in multiple tasks simultaneously, and then generalize its knowledge to new domains. This method, termed "Actor-Mimic", exploits the use of deep reinforcement learning and model compression techniques to train a single policy network that learns how to act in a set of distinct tasks by using the guidance of several expert teachers. We then show that the representations learnt by the deep policy network are capable of generalizing to new tasks with no prior expert guidance, speeding up learning in novel environments. Although our method can in general be applied to a wide range of problems, we use Atari games as a testing environment to demonstrate these methods. <s> BIB001 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. Robot Navigation <s> In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters. <s> BIB002 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. 
Robot Navigation <s> Pedestrians follow different trajectories to avoid obstacles and accommodate fellow pedestrians. Any autonomous vehicle navigating such a scene should be able to foresee the future positions of pedestrians and accordingly adjust its path to avoid collisions. This problem of trajectory prediction can be viewed as a sequence generation task, where we are interested in predicting the future trajectory of people based on their past positions. Following the recent success of Recurrent Neural Network (RNN) models for sequence prediction tasks, we propose an LSTM model which can learn general human movement and predict their future trajectories. This is in contrast to traditional approaches which use hand-crafted functions such as Social forces. We demonstrate the performance of our method on several public datasets. Our model outperforms state-of-the-art methods on some of these datasets. We also analyze the trajectories predicted by our model to demonstrate the motion behaviour learned by our model. <s> BIB003 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. Robot Navigation <s> Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. 
Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880% expert human performance, and a challenging suite of first-person, three-dimensional Labyrinth tasks leading to a mean speedup in learning of 10x and averaging 87% expert human performance on Labyrinth. <s> BIB004 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. Robot Navigation <s> Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks to bootstrap learning. In particular we consider jointly learning the goal-driven reinforcement learning problem with an unsupervised depth prediction task and a self-supervised loop closure classification task. Using this approach we can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, that show that the agent implicitly learns key navigation abilities, with only sparse rewards and without direct supervision. <s> BIB005 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. Robot Navigation <s> In this paper we consider the problem of robot navigation in simple maze-like environments where the robot has to rely on its onboard sensors to perform the navigation task. In particular, we are interested in solutions to this problem that do not require localization, mapping or planning. Additionally, we require that our solution can quickly adapt to new situations (e.g., changing navigation goals and environments).
To meet these criteria we frame this problem as a sequence of related reinforcement learning tasks. We propose a successor feature based deep reinforcement learning algorithm that can learn to transfer knowledge from previously mastered navigation tasks to new problem instances. Our algorithm substantially decreases the required learning time after the first task instance has been solved, which makes it easily adaptable to changing environments. We validate our method in both simulated and real robot experiments with a Robotino and compare it to a set of baseline methods including classical planning-based navigation. <s> BIB006 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. Robot Navigation <s> Landing an unmanned aerial vehicle (UAV) on a ground marker is an open problem despite the effort of the research community. Previous attempts mostly focused on the analysis of hand-crafted geometric features and the use of external sensors in order to allow the vehicle to approach the land-pad. In this article, we propose a method based on deep reinforcement learning that only requires low-resolution images taken from a down-looking camera in order to identify the position of the marker and land the UAV on it. The proposed approach is based on a hierarchy of Deep Q-Networks (DQNs) used as high-level control policy for the navigation toward the marker. We implemented different technical solutions, such as the combination of vanilla and double DQNs, and a partitioned buffer replay. Using domain randomization we trained the vehicle on uniform textures and we tested it on a large variety of simulated and real-world environments. The overall performance is comparable with a state-of-the-art algorithm and human pilots. <s> BIB007 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. 
Robot Navigation <s> We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time. <s> BIB008 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. Robot Navigation <s> Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new target goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to the task of target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows better generalization. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently.
We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and across scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment. The supplementary video can be accessed at the following link: this https URL <s> BIB009 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. Robot Navigation <s> We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies. <s> BIB010 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. Robot Navigation <s> Exploration is a fundamental challenge in reinforcement learning (RL). Many of the current exploration methods for deep RL use task-agnostic objectives, such as information gain or bonuses based on state visitation.
However, many practical applications of RL involve learning more than a single task, and prior tasks can be used to inform how exploration should be performed in new tasks. In this work, we explore how prior tasks can inform an agent about how to explore effectively in new situations. We introduce a novel gradient-based fast adaptation algorithm -- model agnostic exploration with structured noise (MAESN) -- to learn exploration strategies from prior experience. The prior experience is used both to initialize a policy and to acquire a latent exploration space that can inject structured stochasticity into a policy, producing exploration strategies that are informed by prior knowledge and are more effective than random action-space noise. We show that MAESN is more effective at learning exploration strategies when compared to prior meta-RL methods, RL without learned exploration strategies, and task-agnostic exploration methods. We evaluate our method on a variety of simulated tasks: locomotion with a wheeled robot, locomotion with a quadrupedal walker, and object manipulation. <s> BIB011 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. Robot Navigation <s> Mobile robot navigation in complex and dynamic environments is a challenging but important problem. Reinforcement learning approaches fail to solve these tasks efficiently due to reward sparsities, temporal complexities and high-dimensionality of sensorimotor spaces which are inherent in such problems. We present a novel approach to train action policies to acquire navigation skills for wheel-legged robots using deep reinforcement learning. The policy maps height-map image observations to motor commands to navigate to a target position while avoiding obstacles. We propose to acquire the multifaceted navigation skill by learning and exploiting a number of manageable navigation behaviors. 
We also introduce a domain randomization technique to improve the versatility of the training samples. We demonstrate experimentally a significant improvement in terms of data-efficiency, success rate, robustness against irrelevant sensory data, and also the quality of the maneuver skills. <s> BIB012 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. Robot Navigation <s> Enabling robots to autonomously navigate complex environments is essential for real-world deployment. Prior methods approach this problem by having the robot maintain an internal map of the world, and then use a localization and planning method to navigate through the internal map. However, these approaches often include a variety of assumptions, are computationally intensive, and do not learn from failures. In contrast, learning-based methods improve as the robot acts in the environment, but are difficult to deploy in the real-world due to their high sample complexity. To address the need to learn complex policies with few samples, we propose a generalized computation graph that subsumes value-based model-free methods and model-based methods, with specific instantiations interpolating between model-free and model-based. We then instantiate this graph to form a navigation model that learns from raw images and is sample efficient. Our simulated car experiments explore the design decisions of our navigation model, and show our approach outperforms single-step and N-step double Q-learning. We also evaluate our approach on a real-world RC car and show it can learn to navigate through a complex indoor environment with a few hours of fully autonomous, self-supervised training. Videos of the experiments and code can be found at github.com/gkahn13/gcg <s> BIB013 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G.
Robot Navigation <s> Navigating through unstructured environments is a basic capability of intelligent creatures, and thus is of fundamental interest in the study and development of artificial intelligence. Long-range navigation is a complex cognitive task that relies on developing an internal representation of space, grounded by recognisable landmarks and robust visual processing, that can simultaneously support continuous self-localisation ("I am here") and a representation of the goal ("I am going there"). Building upon recent research that applies deep reinforcement learning to maze navigation problems, we present an end-to-end deep reinforcement learning approach that can be applied on a city scale. Recognising that successful navigation relies on integration of general policies with locale-specific knowledge, we propose a dual pathway architecture that allows locale-specific features to be encapsulated, while still enabling transfer to multiple cities. We present an interactive navigation environment that uses Google StreetView for its photographic content and worldwide coverage, and demonstrate that our learning method allows agents to learn to navigate multiple cities and to traverse to target destinations that may be kilometres away. The project webpage http://streetlearn.cc contains a video summarising our research and showing the trained agent in diverse city environments and on the transfer task, the form to request the StreetLearn dataset and links to further resources. The StreetLearn environment code is available at https://github.com/deepmind/streetlearn <s> BIB014 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. Robot Navigation <s> We propose a new perspective on representation learning in reinforcement learning based on geometric properties of the space of value functions. We leverage this perspective to provide formal evidence regarding the usefulness of value functions as auxiliary tasks. 
Our formulation considers adapting the representation to minimize the (linear) approximation of the value function of all stationary policies for a given environment. We show that this optimization reduces to making accurate predictions regarding a special class of value functions which we call adversarial value functions (AVFs). We demonstrate that using value functions as auxiliary tasks corresponds to an expected-error relaxation of our formulation, with AVFs a natural candidate, and identify a close relationship with proto-value functions (Mahadevan, 2005). We highlight characteristics of AVFs and their usefulness as auxiliary tasks in a series of experiments on the four-room domain. <s> BIB015 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. Robot Navigation <s> Learning is an inherently continuous phenomenon. When humans learn a new task there is no explicit distinction between training and inference. As we learn a task, we keep learning about it while performing the task. What we learn and how we learn it varies during different stages of learning. Learning how to learn and adapt is a key property that enables us to generalize effortlessly to new settings. This is in contrast with conventional settings in machine learning where a trained model is frozen during inference. In this paper we study the problem of learning to learn at both training and test time in the context of visual navigation. A fundamental challenge in navigation is generalization to unseen scenes. In this paper we propose a self-adaptive visual navigation method (SAVN) which learns to adapt to new environments without any explicit supervision. Our solution is a meta-reinforcement learning approach where an agent learns a self-supervised interaction loss that encourages effective navigation. Our experiments, performed in the AI2-THOR framework, show major improvements in both success rate and SPL for visual navigation in novel scenes. 
Our code and data are available at: https://github.com/allenai/savn. <s> BIB016 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> G. Robot Navigation <s> While robot learning has demonstrated promising results for enabling robots to automatically acquire new skills, a critical challenge in deploying learning-based systems is scale: acquiring enough data for the robot to effectively generalize broadly. Imitation learning, in particular, has remained a stable and powerful approach for robot learning, but critically relies on expert operators for data collection. In this work, we target this challenge, aiming to build an imitation learning system that can continuously improve through autonomous data collection, while simultaneously avoiding the explicit use of reinforcement learning, to maintain the stability, simplicity, and scalability of supervised imitation. To accomplish this, we cast the problem of imitation with autonomous improvement into a multi-task setting. We utilize the insight that, in a multi-task setting, a failed attempt at one task might represent a successful attempt at another task. This allows us to leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted. Using an initial dataset of multi-task demonstration data, the robot autonomously collects trials which are only sparsely labeled with a binary indication of whether the trial accomplished any useful task or not. We then embed the trials into a learned latent space of tasks, trained using only the initial demonstration dataset, to draw similarities between various trials, enabling the robot to achieve one-shot generalization to new tasks. 
In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement, and in contrast to reinforcement learning algorithms, our method can effectively improve from sparse, task-agnostic reward signals. <s> BIB017
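The latent task-embedding idea in the abstract above — embedding trials into a learned latent space and drawing similarities between them so a failed attempt at one task can serve as a demonstration for another — can be sketched as a nearest-neighbor lookup in an embedding space. This is a minimal illustrative sketch: the random-projection embedding and all names (`embed`, `nearest_task`, `W`) are our own stand-ins, not the learned network from the cited work.

```python
import numpy as np

def embed(trial, W):
    # Stand-in embedding: a fixed linear projection followed by L2
    # normalisation (the real system learns this from demonstrations).
    v = W @ trial
    return v / np.linalg.norm(v)

def nearest_task(trial, demo_trials, demo_labels, W):
    # Compare a new trial to demonstration trials by cosine similarity
    # in the embedding space and return the label of the closest one.
    e = embed(trial, W)
    sims = [e @ embed(d, W) for d in demo_trials]
    return demo_labels[int(np.argmax(sims))]
```

With such a lookup, a sparsely labeled trial can be relabeled as a demonstration for whichever prior task it most resembles.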
Robot navigation has recently become a key topic in autonomous systems, especially with respect to whether a trained model can accurately learn task features or successfully transfer previously acquired information to new tasks. A variety of RL and meta-learning methods, such as DQN BIB006 , LSTM structures BIB003 and MAML BIB011 , can accurately or transferably handle changes arising from the environment or the task when reusing a previously trained model. As shown in Table III , for robot and UAV navigation, RL methods tend to focus on transferring tasks from simulation to the real world, while meta-learning methods usually achieve satisfactory transferability on complex tasks by extracting or memorizing previous training data in simulation. RL-based robot navigation. One way to improve training efficiency and accuracy is to divide a single task into several subtasks and train them separately. Polvara et al. BIB007 proposed two distinct DQNs, called double DQNs, to train two sub-tasks: landmark detection and vertical landing, respectively. Because each sub-task is trained separately at the same time, training efficiency and accuracy are improved to an extent. Moreover, training the model with auxiliary tasks, such as pixel control BIB015 , reward prediction BIB004 and value function replay BIB005 , also helps the robot adapt to the target faster and more accurately. To equip the model with better transferability in new situations, task features BIB001 , BIB006 and training policies BIB012 can be transferred to novel tasks in the same domain or across domains. Parisotto et al. BIB001 and Rusu et al. [53] transferred useful features among different ATARI games, and the corresponding features were then used to train a new ATARI game in the same domain.
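The DQN-style training underlying these approaches centers on a Bellman target computation, which can be sketched as follows. This is a generic illustrative sketch, not any cited system's implementation; the function name `dqn_targets` and the toy array shapes are our own.

```python
import numpy as np

def dqn_targets(q_next, rewards, dones, gamma=0.99):
    # Bellman targets for a batch of transitions:
    #   r + gamma * max_a' Q(s', a'), with the bootstrap term
    #   zeroed out at terminal states (dones == 1).
    return rewards + gamma * (1.0 - dones) * q_next.max(axis=1)

# Toy batch of two transitions: the second one is terminal.
q_next = np.array([[1.0, 2.0],
                   [3.0, 0.0]])
targets = dqn_targets(q_next, rewards=np.array([0.5, 1.0]),
                      dones=np.array([0.0, 1.0]), gamma=0.9)
```

A DQN then regresses its Q-values toward these targets; when features are transferred across tasks, it is the network producing `q_next` whose lower layers are reused.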
In addition, for tasks whose real-world trials are time-consuming or expensive, task characteristics can be transferred across domains effectively. Zhang et al. BIB006 put forward a DQN shared between tasks to learn informative task features, which can be transferred from simulation to the real world. Similarly, as shown in Fig. 6 , Sadeghi et al. proposed a novel realistic translation network, which transforms virtual image inputs into real images with a similar scene structure. Moreover, policies can be transferred from simulation to simulation. Similar to BIB007 , the primary training policy can be divided into several secondary policies, each of which acquires a certain behavior. These behaviors are then combined to train the primary policy, which makes the primary policy more transferable across domains. Chen et al. BIB012 used AC networks to train both the secondary policies and the primary policy. In navigation, the primary behavior learned by a high-degree-of-freedom robot is to navigate straight to the target in a sample environment. Chen et al. then randomized the nonessential aspects of every secondary behavior, such as the appearance, positions and number of obstacles in the scene, to improve the generalization ability of the final policy. Owing to the sampling constraints of model-free RL methods and the transfer limitations of model-based RL methods mentioned in Section II, it is difficult to equip a model with good transferability and sampling efficiency at the same time. A straightforward way to handle this contradiction is to combine model-free and model-based methods. Kahn et al. BIB013 used a generalized computation graph to learn navigation policies from scratch, with specific instantiations interpolating between model-free and model-based ones. As a result, the algorithm not only learns high-dimensional tasks but also has promising sampling efficiency. Meta-learning-based robot navigation.
RL-based methods tend to need sufficient training data to acquire transferability. When a new task has insufficient data during training and testing, meta-learning methods can also make the model more transferable across domains.

TABLE III: Summary of traditional RL/meta-learning methods for scenario-transfer tasks. (We classify the meta-learning methods into several classes: "A" represents recurrent network; "B" represents metric network; "C" represents MAML; "D" represents meta-imitation learning; "E" represents meta-RL. Similarly, we classify the RL methods into several classes: "F" represents Fitted Q-iteration; "G" represents soft Q-learning; "H" represents DQN; "I" represents DDPG; "J" represents soft AC; "K" represents A3C; "L" represents GPS; "M" represents asynchronous NAF (normalized advantage function) ; "N" represents PPO (proximal policy optimization) BIB008 ; "O" represents TRPO (trust region policy optimization) BIB002 ; "P" represents DPP.) [Table row excerpt: Singh et al. BIB017 | Robotic manipulation | D | Bullet physics engine | None]

Firstly, recurrent models, such as the LSTM structure, alleviate the long-term dependency problem of sequential data, and can act as an optimizer that learns an optimization method for gradient-descent models. Mirowski et al. BIB014 proposed a multi-city navigation network with an LSTM structure. The LSTM structure was used to encode and encapsulate region-specific features and structures so that multiple paths could be added in each city. After training in multiple cities, the network was shown to be sufficiently versatile. Moreover, metric learning can be utilized to extract image information and generalize task-specific information, which is helpful in navigation. Zhu et al. BIB009 combined siamese networks with AC networks to navigate the robot to the target using only 3D images. A siamese network captures and compares the distinctive characteristics of the observation image and the target image.
Then, the joint representation of the images is kept in scene-specific layers. The AC networks use the features in the scene-specific layers to generate policy and value outputs for navigation. In summary, the deep siamese AC networks share parameters across different tasks and domains so that the model generalizes across targets and scenes. Even though the models trained by the two meta-learning methods above achieve both accuracy and transferability, when they encounter new cross-domain tasks they still need plenty of data to be retrained. To fine-tune a new model with little data, MAML is a promising approach. In BIB010 , it was verified that MAML performs well in 2D navigation and locomotion simulation compared with traditional policy gradient algorithms. It was shown that MAML can learn a model that adapts much more quickly with a single gradient update, while continuing to improve with additional updates without overfitting. When the training process is unsupervised, MAML is not directly applicable and needs to be adjusted, for example by constructing a reward function during the meta-training process or by labeling data using clustering methods, etc. In BIB016 , Wortsman et al. proposed a self-adaptive visual navigation (SAVN) method derived from MAML to learn to adapt to new environments without any supervision. Specifically, SAVN optimizes two objective functions: a self-supervised interaction loss and a navigation loss. During training, the interaction and navigation gradients are back-propagated through the network, and the parameters of the self-supervised loss are updated at the end of each episode using navigation gradients, following the MAML scheme. During testing, the parameters of the interaction loss remain fixed, while the rest of the network is updated using interaction gradients. Therefore, the model retains the good transferability of MAML methods in an unsupervised setting.
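The MAML adaptation scheme discussed above can be sketched, under simplifying assumptions, as an inner/outer loop on a toy linear-regression family of tasks. This is a first-order sketch (the second-order term is dropped), and the linear model and all names (`loss_grad`, `maml_step`) are illustrative stand-ins rather than the cited implementations.

```python
import numpy as np

def loss_grad(w, X, y):
    # Gradient of the mean squared error for a linear model y_hat = X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def maml_step(w, tasks, inner_lr=0.05, outer_lr=0.1):
    # Inner loop: adapt the shared parameters to each task with one
    # gradient step. Outer loop: update the meta-parameters with the
    # average post-adaptation gradient (first-order MAML).
    meta_grad = np.zeros_like(w)
    for X, y in tasks:
        w_adapted = w - inner_lr * loss_grad(w, X, y)
        meta_grad += loss_grad(w_adapted, X, y)
    return w - outer_lr * meta_grad / len(tasks)

# Two toy tasks: fit lines with slopes 1.0 and 2.0 from shared inputs.
X = np.linspace(-1.0, 1.0, 8).reshape(-1, 1)
tasks = [(X, 1.0 * X[:, 0]), (X, 2.0 * X[:, 0])]
w = np.array([0.0])
for _ in range(200):
    w = maml_step(w, tasks)
```

After meta-training, `w` sits between the task optima, so a single inner gradient step moves it close to either task's solution — the "fast adaptation" property the survey describes.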
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available. <s> BIB001 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> From exploring planets to cleaning homes, the reach and versatility of robotics is vast. The integration of actuation, sensing and control makes robotics systems powerful, but complicates their simulation. This paper introduces a versatile, scalable, yet powerful general-purpose robot simulation framework called V-REP. The paper discusses the utility of a portable and flexible simulation framework that allows for direct incorporation of various control techniques. 
This renders simulations and simulation models more accessible to a general-public, by reducing the simulation model deployment complexity. It also increases productivity by offering built-in and ready-to-use functionalities, as well as a multitude of programming approaches. This allows for a multitude of applications including rapid algorithm development, system verification, rapid prototyping, and deployment for cases such as safety/remote monitoring, training and education, hardware control, and factory automation simulation. <s> BIB002 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> This paper introduces a machine learning based system for controlling a robotic manipulator with visual perception only. The capability to autonomously learn robot controllers solely from raw-pixel images and without any prior knowledge of configuration is shown for the first time. We build upon the success of recent deep reinforcement learning and develop a system for learning target reaching with a three-joint robot manipulator using external visual observation. A Deep Q Network (DQN) was demonstrated to perform target reaching after training in simulation. Transferring the network to real hardware and real observation in a naive approach failed, but experiments show that the network works when replacing camera images with synthetic images. <s> BIB003 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> While deep learning has had significant successes in computer vision thanks to the abundance of visual data, collecting sufficiently large real-world datasets for robot learning can be costly. To increase the practicality of these techniques on real robots, we propose a modular deep reinforcement learning method capable of transferring models trained in simulation to a real-world robotic task. 
We introduce a bottleneck between perception and control, enabling the networks to be trained independently, but then merged and fine-tuned in an end-to-end manner to further improve hand-eye coordination. On a canonical, planar visually-guided robot reaching task a fine-tuned accuracy of 1.6 pixels is achieved, a significant improvement over naive transfer (17.5 pixels), showing the potential for more complicated and broader applications. Our method provides a technique for more efficient learning and transfer of visuo-motor policies for real robotic systems without relying entirely on large real-world robot datasets. <s> BIB004 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution, during the early phases of training and thus providing a unified solution to the missing modes problem. <s> BIB005 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. 
In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth. <s> BIB006 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> Abstract In the absence of external guidance, how can a robot learn to map the many raw pixels of high-dimensional visual inputs to useful action sequences? We propose here Continual Curiosity driven Skill Acquisition (CCSA). CCSA makes robots intrinsically motivated to acquire, store and reuse skills. Previous curiosity-based agents acquired skills by associating intrinsic rewards with world model improvements, and used reinforcement learning to learn how to get these intrinsic rewards. CCSA also does this, but unlike previous implementations, the world model is a set of compact low-dimensional representations of the streams of high-dimensional visual information, which are learned through incremental slow feature analysis. These representations augment the robot's state space with new information about the environment. We show how this information can have a higher-level (compared to pixels) and useful interpretation, for example, if the robot has grasped a cup in its field of view or not. 
After learning a representation, large intrinsic rewards are given to the robot for performing actions that greatly change the feature output, which has the tendency otherwise to change slowly in time. We show empirically what these actions are (e.g., grasping the cup) and how they can be useful as skills. An acquired skill includes both the learned actions and the learned slow feature representation. Skills are stored and reused to generate new observations, enabling continual acquisition of complex skills. We present results of experiments with an iCub humanoid robot that uses CCSA to incrementally acquire skills to topple, grasp and pick-place a cup, driven by its intrinsic motivation from raw pixel vision. <s> BIB007 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. ::: We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task. <s> BIB008 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. 
Robotic Manipulation <s> In order for a robot to be a generalist that can perform a wide range of jobs, it must be able to acquire a wide variety of skills quickly and efficiently in complex unstructured environments. High-capacity models such as deep neural networks can enable a robot to represent complex skills, but learning each skill from scratch then becomes infeasible. In this work, we present a meta-imitation learning method that enables a robot to learn how to learn more efficiently, allowing it to acquire new skills from just a single demonstration. Unlike prior methods for one-shot imitation, our method can scale to raw pixel inputs and requires data from significantly fewer prior tasks for effective learning of new skills. Our experiments on both simulated and real robot platforms demonstrate the ability to learn new tasks, end-to-end, from a single visual demonstration. <s> BIB009 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered policy representations and human-supplied demonstrations. Deep reinforcement learning alleviates this limitation by training general-purpose neural network policies, but applications of direct deep reinforcement learning algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparent high sample complexity. 
In this paper, we demonstrate that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots. We demonstrate that the training times can be further reduced by parallelizing the algorithm across multiple robots which pool their policy updates asynchronously. Our experimental evaluation shows that our method can learn a variety of 3D manipulation skills in simulation and a complex door opening skill on real robots without any prior demonstrations or manually designed representations. <s> BIB010 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> Imitation learning has been commonly applied to solve different tasks in isolation. This usually requires either careful feature engineering, or a significant number of samples. This is far from what we desire: ideally, robots should be able to learn from very few demonstrations of any given task, and instantly generalize to new situations of the same task, without requiring task-specific engineering. In this paper, we propose a meta-learning framework for achieving such capability, which we call one-shot imitation learning. ::: Specifically, we consider the setting where there is a very large set of tasks, and each task has many instantiations. For example, a task could be to stack all blocks on a table into a single tower, another task could be to place all blocks on a table into two-block towers, etc. In each case, different instances of the task would consist of different sets of blocks with different initial states. At training time, our algorithm is presented with pairs of demonstrations for a subset of all tasks. 
A neural net is trained that takes as input one demonstration and the current state (which initially is the initial state of the other demonstration of the pair), and outputs an action with the goal that the resulting sequence of states and actions matches as closely as possible with the second demonstration. At test time, a demonstration of a single instance of a new task is presented, and the neural net is expected to perform well on new instances of this new task. The use of soft attention allows the model to generalize to conditions and tasks unseen in the training data. We anticipate that by training this model on a much greater variety of tasks and settings, we will obtain a general system that can turn any demonstrations into robust policies that can accomplish an overwhelming variety of tasks. ::: Videos available at this https URL . <s> BIB011 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains, or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. 
Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training. <s> BIB012 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn. Then we discuss various applications of RL, including games, in particular, AlphaGo, robotics, natural language processing, including dialogue systems, machine translation, and text generation, computer vision, neural architecture design, business management, finance, healthcare, Industry 4.0, smart grid, intelligent transportation systems, and computer systems. We mention topics not reviewed yet, and list a collection of RL resources. After presenting a brief summary, we close with discussions. ::: Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant update. <s> BIB013 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new target goals, and (2) data inefficiency i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to the task of target-driven visual navigation. 
To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows to better generalize. To address the second issue, we propose AI2-THOR framework, which provides an environment with high-quality 3D scenes and physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. ::: We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and across scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment. ::: The supplementary video can be accessed at the following link: this https URL <s> BIB014 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> In this paper we consider the problem of robot navigation in simple maze-like environments where the robot has to rely on its onboard sensors to perform the navigation task. In particular, we are interested in solutions to this problem that do not require localization, mapping or planning. Additionally, we require that our solution can quickly adapt to new situations (e.g., changing navigation goals and environments). To meet these criteria we frame this problem as a sequence of related reinforcement learning tasks. We propose a successor feature based deep reinforcement learning algorithm that can learn to transfer knowledge from previously mastered navigation tasks to new problem instances. Our algorithm substantially decreases the required learning time after the first task instance has been solved, which makes it easily adaptable to changing environments. 
We validate our method in both simulated and real robot experiments with a Robotino and compare it to a set of baseline methods including classical planning-based navigation. <s> BIB015 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> We consider the problem of learning multi-stage vision-based tasks on a real robot from a single video of a human performing the task, while leveraging demonstration data of subtasks with other objects. This problem presents a number of major challenges. Video demonstrations without teleoperation are easy for humans to provide, but do not provide any direct supervision. Learning policies from raw pixels enables full generality but calls for large function approximators with many parameters to be learned. Finally, compound tasks can require impractical amounts of demonstration data, when treated as a monolithic skill. To address these challenges, we propose a method that learns both how to learn primitive behaviors from video demonstrations and how to dynamically compose these behaviors to perform multi-stage tasks by"watching"a human demonstrator. Our results on a simulated Sawyer robot and real PR2 robot illustrate our method for learning a variety of order fulfillment and kitchen serving tasks with novel objects and raw pixel inputs. <s> BIB016 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> Skilled robotic manipulation benefits from complex synergies between non-prehensile (e.g. pushing) and prehensile (e.g. grasping) actions: pushing can help rearrange cluttered objects to make space for arms and fingers; likewise, grasping can help displace objects to make pushing movements more precise and collision-free. In this work, we demonstrate that it is possible to discover and learn these synergies from scratch through model-free deep reinforcement learning. 
Our method involves training two fully convolutional networks that map from visual observations to actions: one infers the utility of pushes for a dense pixel-wise sampling of end effector orientations and locations, while the other does the same for grasping. Both networks are trained jointly in a Q-learning framework and are entirely self-supervised by trial and error, where rewards are provided from successful grasps. In this way, our policy learns pushing motions that enable future grasps, while learning grasps that can leverage past pushes. During picking experiments in both simulation and real-world scenarios, we find that our system quickly learns complex behaviors amid challenging cases of clutter, and achieves better grasping success rates and picking efficiencies than baseline alternatives after only a few hours of training. We further demonstrate that our method is capable of generalizing to novel objects. Qualitative results (videos), code, pre-trained models, and simulation environments are available at http://vpg.cs.princeton.edu <s> BIB017 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> In this paper, we explore deep reinforcement learning algorithms for vision-based robotic grasping. Model-free deep reinforcement learning (RL) has been successfully applied to a range of challenging environments, but the proliferation of algorithms makes it difficult to discern which particular approach would be best suited for a rich, diverse task like grasping. To answer this question, we propose a simulated benchmark for robotic grasping that emphasizes off-policy learning and generalization to unseen objects. Off-policy learning enables utilization of grasping data over a wide variety of objects, and diversity is important to enable the method to generalize to new objects that were not seen during training. 
We evaluate the benchmark tasks against a variety of Q-function estimation methods, a method previously proposed for robotic grasping with deep neural network models, and a novel approach based on a combination of Monte Carlo return estimation and an off-policy correction. Our results indicate that several simple methods provide a surprisingly strong competitor to popular algorithms such as double Q-learning, and our analysis of stability sheds light on the relative tradeoffs between the algorithms. <s> BIB018 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> In this paper, we study the problem of learning vision-based dynamic manipulation skills using a scalable reinforcement learning approach. We study this problem in the context of grasping, a longstanding challenge in robotic manipulation. In contrast to static learning behaviors that choose a grasp point and then execute the desired grasp, our method enables closed-loop vision-based control, whereby the robot continuously updates its grasp strategy based on the most recent observations to optimize long-horizon grasp success. To that end, we introduce QT-Opt, a scalable self-supervised vision-based reinforcement learning framework that can leverage over 580k real-world grasp attempts to train a deep neural network Q-function with over 1.2M parameters to perform closed-loop, real-world grasping that generalizes to 96% grasp success on unseen objects. Aside from attaining a very high success rate, our method exhibits behaviors that are quite distinct from more standard grasping systems: using only RGB vision-based perception from an over-the-shoulder camera, our method automatically learns regrasping strategies, probes objects to find the most effective grasps, learns to reposition objects and perform other non-prehensile pre-grasp manipulations, and responds dynamically to disturbances and perturbations. 
<s> BIB019 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> We propose MAD-GAN, an intuitive generalization to the Generative Adversarial Networks (GANs) and its conditional variants to address the well known problem of mode collapse. First, MAD-GAN is a multi-agent GAN architecture incorporating multiple generators and one discriminator. Second, to enforce that different generators capture diverse high probability modes, the discriminator of MAD-GAN is designed such that along with finding the real and fake samples, it is also required to identify the generator that generated the given fake sample. Intuitively, to succeed in this task, the discriminator must learn to push different generators towards different identifiable modes. We perform extensive experiments on synthetic and real datasets and compare MAD-GAN with different variants of GAN. We show high quality diverse sample generations for challenging tasks such as image-to-image translation and face generation. In addition, we also show that MAD-GAN is able to disentangle different modalities when trained using highly challenging diverse-class dataset (e.g. dataset with images of forests, icebergs, and bedrooms). In the end, we show its efficacy on the unsupervised feature representation task. In Appendix, we introduce a similarity based competing objective (MAD-GAN-Sim) which encourages different generators to generate diverse samples based on a user defined similarity metric. We show its performance on the image-to-image translation, and also show its effectiveness on the unsupervised feature representation task. <s> BIB020 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> Monocular depth estimation using learning-based approaches has become promising in recent years. 
However, most monocular depth estimators either need to rely on large quantities of ground truth depth data, which is extremely expensive and difficult to obtain, or predict disparity as an intermediary step using a secondary supervisory signal leading to blurring and other artefacts. Training a depth estimation model using pixel-perfect synthetic data can resolve most of these issues but introduces the problem of domain bias. This is the inability to apply a model trained on synthetic data to real-world scenarios. With advances in image style transfer and its connections with domain adaptation (Maximum Mean Discrepancy), we take advantage of style transfer and adversarial training to predict pixel perfect depth from a single real-world color image based on training over a large corpus of synthetic environment data. Experimental results indicate the efficacy of our approach compared to contemporary state-of-the-art techniques. <s> BIB021 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. 
To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator -- a large-scale reinforcement learning environment based on real imagery. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings -- the Room-to-Room (R2R) dataset. <s> BIB022 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> Vision-language navigation (VLN) is the task of navigating an embodied agent to carry out natural language instructions inside real 3D environments. In this paper, we study how to address three critical challenges for this task: the cross-modal grounding, the ill-posed feedback, and the generalization problems. First, we propose a novel Reinforced Cross-Modal Matching (RCM) approach that enforces cross-modal grounding both locally and globally via reinforcement learning (RL). Particularly, a matching critic is used to provide an intrinsic reward to encourage global matching between instructions and trajectories, and a reasoning navigator is employed to perform cross-modal grounding in the local visual scene. Evaluation on a VLN benchmark dataset shows that our RCM model significantly outperforms previous methods by 10% on SPL and achieves the new state-of-the-art performance. To improve the generalizability of the learned policy, we further introduce a Self-Supervised Imitation Learning (SIL) method to explore unseen environments by imitating its own past, good decisions. We demonstrate that SIL can approximate a better and more efficient policy, which tremendously minimizes the success rate performance gap between seen and unseen environments (from 30.7% to 11.7%). 
<s> BIB023 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> We propose a new perspective on representation learning in reinforcement learning based on geometric properties of the space of value functions. We leverage this perspective to provide formal evidence regarding the usefulness of value functions as auxiliary tasks. Our formulation considers adapting the representation to minimize the (linear) approximation of the value function of all stationary policies for a given environment. We show that this optimization reduces to making accurate predictions regarding a special class of value functions which we call adversarial value functions (AVFs). We demonstrate that using value functions as auxiliary tasks corresponds to an expected-error relaxation of our formulation, with AVFs a natural candidate, and identify a close relationship with proto-value functions (Mahadevan, 2005). We highlight characteristics of AVFs and their usefulness as auxiliary tasks in a series of experiments on the four-room domain. <s> BIB024 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> Learning is an inherently continuous phenomenon. When humans learn a new task there is no explicit distinction between training and inference. As we learn a task, we keep learning about it while performing the task. What we learn and how we learn it varies during different stages of learning. Learning how to learn and adapt is a key property that enables us to generalize effortlessly to new settings. This is in contrast with conventional settings in machine learning where a trained model is frozen during inference. In this paper we study the problem of learning to learn at both training and test time in the context of visual navigation. A fundamental challenge in navigation is generalization to unseen scenes. 
In this paper we propose a self-adaptive visual navigation method (SAVN) which learns to adapt to new environments without any explicit supervision. Our solution is a meta-reinforcement learning approach where an agent learns a self-supervised interaction loss that encourages effective navigation. Our experiments, performed in the AI2-THOR framework, show major improvements in both success rate and SPL for visual navigation in novel scenes. Our code and data are available at: https://github.com/allenai/savn. <s> BIB025 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> Autopilot systems are typically composed of an “inner loop” providing stability and control, whereas an “outer loop” is responsible for mission-level objectives, such as way-point navigation. Autopilot systems for unmanned aerial vehicles are predominately implemented using Proportional-Integral-Derivative (PID) control systems, which have demonstrated exceptional performance in stable environments. However, more sophisticated control is required to operate in unpredictable and harsh environments. Intelligent flight control systems is an active area of research addressing limitations of PID control most recently through the use of reinforcement learning (RL), which has had success in other applications, such as robotics. Yet previous work has focused primarily on using RL at the mission-level controller. In this work, we investigate the performance and accuracy of the inner control loop providing attitude control when using intelligent flight control systems trained with state-of-the-art RL algorithms—Deep Deterministic Policy Gradient, Trust Region Policy Optimization, and Proximal Policy Optimization. To investigate these unknowns, we first developed an open source high-fidelity simulation environment to train a flight controller attitude control of a quadrotor through RL. 
We then used our environment to compare their performance to that of a PID controller to identify if using RL is appropriate in high-precision, time-critical flight control. <s> BIB026 </s> When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey <s> H. Robotic Manipulation <s> While robot learning has demonstrated promising results for enabling robots to automatically acquire new skills, a critical challenge in deploying learning-based systems is scale: acquiring enough data for the robot to effectively generalize broadly. Imitation learning, in particular, has remained a stable and powerful approach for robot learning, but critically relies on expert operators for data collection. In this work, we target this challenge, aiming to build an imitation learning system that can continuously improve through autonomous data collection, while simultaneously avoiding the explicit use of reinforcement learning, to maintain the stability, simplicity, and scalability of supervised imitation. To accomplish this, we cast the problem of imitation with autonomous improvement into a multi-task setting. We utilize the insight that, in a multi-task setting, a failed attempt at one task might represent a successful attempt at another task. This allows us to leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted. Using an initial dataset of multi-task demonstration data, the robot autonomously collects trials which are only sparsely labeled with a binary indication of whether the trial accomplished any useful task or not. We then embed the trials into a learned latent space of tasks, trained using only the initial demonstration dataset, to draw similarities between various trials, enabling the robot to achieve one-shot generalization to new tasks. 
In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement, and in contrast to reinforcement learning algorithms, our method can effectively improve from sparse, task-agnostic reward signals. <s> BIB027
In this section, we focus on transferability in robotic manipulation, given the practical importance of robotics. RL methods enable robots to transfer across different environments and tasks by means of special inputs BIB007 and reformed training networks BIB008 , etc. Moreover, meta-learning and imitation learning can be used to handle difficult tasks with few or even a single demonstration during the meta-testing process, in the same domain or across domains BIB009 , BIB016 , in order to speed up learning and transfer features from previous tasks. In Table III , we summarize RL methods and meta-learning methods for handling domain-transfer robotic manipulation problems. As for experiment platforms, the MuJoCo (Multi-Joint dynamics with Contact) BIB001 simulation platform is popular because of its rich and realistic environmental information. Moreover, compared with RL methods, meta-learning can train with less data and adapt to new tasks faster, thereby acquiring model transferability. RL-based robotic manipulation. When considering how to improve the transferability of robotic systems, synthetic data as input BIB003 and separate networks in training BIB004 are possible RL-based solutions. Synthetic inputs help transfer experience learned from different simulated settings to the real world. Zhang et al. BIB003 were the first to learn to control a three-joint robot arm via DQN purely from raw-pixel images, without any prior knowledge. The robot arm successfully reaches the target in the real world only when it takes as input synthetic images generated by the 2D simulator from real-time joint angles. The synthetic-image input thus offsets the gap between simulation and the real world, improving transferability. Moreover, when data is limited and cannot be synthesized, DQN can be divided into perception and control modules, which are trained separately.
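The raw-pixel DQN control described above can be illustrated with a minimal, self-contained sketch. Everything here is an assumption for illustration: a linear Q-function over a flattened synthetic image stands in for the convolutional network, and the discrete joint-command action set and reward are hypothetical, not the actual setup of BIB003 :

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a DQN: a linear Q-function over a flattened
# (synthetic) image; actions are discrete joint commands.
n_pixels, n_actions = 64, 6           # e.g. +/- velocity for 3 joints
W = rng.normal(scale=0.01, size=(n_actions, n_pixels))

def q_values(image):
    return W @ image                  # one Q-value per joint command

def td_update(image, action, reward, next_image, lr=0.1, gamma=0.9):
    """Standard Q-learning target: r + gamma * max_a' Q(s', a')."""
    target = reward + gamma * q_values(next_image).max()
    td_error = target - q_values(image)[action]
    W[action] += lr * td_error * image   # gradient step for a linear Q
    return td_error

image = rng.random(n_pixels)          # a rendered "synthetic" frame
action = int(np.argmax(q_values(image)))
q_before = q_values(image)[action]
td_update(image, action, reward=1.0, next_image=rng.random(n_pixels))
q_after = q_values(image)[action]
print(q_after > q_before)             # positive reward raised the estimate
```

The same TD update applies unchanged whether the input frame is rendered by a simulator or captured in the real world, which is what makes synthetic inputs a bridge between the two.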
Then, the perception skills and the controller learned in simulation become transferable BIB004 . Similarly, DQN can also train several networks and combine the experience they learn. Zeng et al. BIB017 used DQN to jointly train two fully convolutional networks mapping from visual observations to actions. Experience transfers between the robot's pushing and grasping processes, and these synergies are thus learned. To compare popular RL methods with respect to generalization ability in robotic manipulation, Quillen et al. BIB018 evaluated simulated benchmark tasks in which robot arms grasp random targets, comparing several DRL algorithms such as double Q-learning (DQL), DDPG, path consistency learning (PCL), and Monte Carlo (MC) policy evaluation. In the experiment, the trained robot arms had to grasp unseen targets. The results revealed that DQL performs better than the other algorithms in low-data regimes and is relatively more robust to the choice of hyperparameters. When data becomes plentiful, MC policy evaluation achieves slightly better performance. MAML-based robotic manipulation. However, in robotic manipulation, traditional RL methods tend to need a large amount of training data, and even when they can transfer to new tasks or domains, their generalization ability is often poor BIB010 , BIB019 . MAML combined with imitation learning is able to exploit past experience across different tasks or domains, learning a new skill from a very small number of demonstrations in various application fields. Duan et al. BIB011 let the robot arm demonstrate to itself in simulation; that is, the input and output samples were collected by the robot arm itself. The inputs of the model are the positions of each block rather than images or videos. They first sampled a demonstration from one of the training tasks.
Then, they sampled another observation-action pair from a second demonstration of the same task. Given both the first demonstration and the second observation, the network was trained to output the corresponding action. In the manipulation network, the soft attention structure allows the model to generalize to conditions and tasks that are unseen in the training data. Then, Finn et al. BIB009 used visual inputs from raw pixels as demonstrations. The model requires significantly fewer prior demonstrations in training, and merely one demonstration in testing, to learn new skills effectively. Moreover, it not only performs well in simulation but also works on a real robotic system. MAML is modified into a two-head architecture, meaning the algorithm can flexibly both adapt the policy parameters and learn from the expert demonstration. The number of demonstrations needed for an individual task is therefore reduced by sharing data across tasks. Taking robot-arm pushing as an example, during training the robot arm sees various pushing demonstrations containing different objects, and each object may have a different mass, friction, and so on. During testing, the robot arm needs to push an object it has never seen in training; it must learn which object to push and how to push it from merely one demonstration. As shown in Fig. 7, compared with [108] , Yu et al. BIB016 increased the difficulty of imitation learning by using only a single video demonstration from a human as input, while the robot arm must accomplish the same work as in BIB009 through domain adaptation. The authors put forward a domain-adaptive meta-learning method that transfers demonstration data from human demonstrations to robot-arm demonstrations. MAML was utilized to deal with the setting of learning from video demonstrations of humans.
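The MAML pattern these one-shot imitation methods share, meta-learning an initialization such that a single gradient step on one demonstration adapts the policy, can be sketched in toy form. The 1-D linear "policy", the imitation loss, and the finite-difference meta-gradient below are illustrative simplifications, not the actual vision-based architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def task_loss(theta, a, x):
    # Imitation-style loss: squared error between the policy's action
    # (theta * x) and the demonstrated action (a * x).
    return float(np.mean((theta * x - a * x) ** 2))

def adapt(theta, a, x, inner_lr=0.05):
    # One inner gradient step on a single demonstration (MAML inner loop);
    # the gradient of this linear model is analytic.
    grad = float(np.mean(2 * (theta * x - a * x) * x))
    return theta - inner_lr * grad

def maml_train(theta, steps=200, outer_lr=0.05, eps=1e-4):
    # Outer loop: finite-difference meta-gradient of the post-adaptation loss.
    for _ in range(steps):
        a = rng.uniform(-2.0, 2.0)      # sample a task (demonstrator gain)
        x = rng.normal(size=20)         # states from one demonstration
        post = lambda t: task_loss(adapt(t, a, x), a, x)
        meta_grad = (post(theta + eps) - post(theta - eps)) / (2 * eps)
        theta -= outer_lr * meta_grad
    return theta

theta0 = 5.0                            # deliberately poor initialization
theta = maml_train(theta0)

# One gradient step on a single new demonstration, from each initialization.
a_new, x_new = 1.5, rng.normal(size=20)
loss_naive = task_loss(adapt(theta0, a_new, x_new), a_new, x_new)
loss_meta = task_loss(adapt(theta, a_new, x_new), a_new, x_new)
print(loss_meta < loss_naive)
```

After meta-training, one inner step on a single new demonstration reduces the imitation loss far more than the same step taken from a naive initialization, which is the effect the two-head and domain-adaptive variants exploit.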
Because behavior is cloned across domains, the loss function also needs to be reconstructed, and a TCN is used to construct the loss network in the MAML structure in the robotic domain. Specifically, the robot arm first learns a set of initial parameters in the video domain; then, after one or a few steps of gradient descent on merely one human demonstration, the robot arm is able to perform the new task effectively. Recently, building on BIB009 , BIB027 improved the one-shot imitation model by using additional autonomously collected data instead of manually collected data. Notably, they put forward an embedding network to determine whether two demonstration embeddings are close to each other. Using metric learning, they compute the Euclidean distance between the embeddings of two videos. If the embeddings are close, the demonstrations are regarded as belonging to the same task. Therefore, demonstrations from the same task serve as autonomously collected data that can be used for training on different tasks. IV. DISCUSSION This review shows the powerful effects of traditional DL, adversarial learning, RL, and meta-learning on complex visual and control tasks in autonomous systems. In particular, some traditional DL-based and RL-based methods may not guarantee accuracy when transferred to another domain; however, adversarial learning and meta-learning handle both accuracy and transferability well. Adversarial learning methods such as GANs produce better, clearer, and more transferable results than other traditional DL-based methods, while meta-learning methods, alone or combined with RL and imitation learning, tend to offer efficiency, transferability, or both. There are still important challenges and future directions worth our attention. In this section, we summarize some trends and challenges for autonomous systems. • GANs with good stability, quick convergence and controllable modes.
GANs employ gradient descent to iterate the generator and discriminator in order to solve a minimax game. In this game, the interplay between the generator and discriminator may make model training unstable and difficult to converge, and may even cause mode collapse. Although there are preliminary studies aiming to improve these deficiencies of GANs BIB005 , BIB020 , there is still much room for improvement in terms of mode diversity and real-time performance. In addition, controlling the mode of data enhancement is still an open question. How to make the generated data modes controllable through additional conditions while keeping the model stable, and thereby achieve purposeful data enhancement, in particular for the computer vision tasks of autonomous systems, is an interesting direction for the future. • GANs for complex multi-task settings. Although GANs have achieved great results on some typical computer vision tasks of autonomous systems, developing more complex multi-task systems remains difficult. Since some visual tasks are closely related to each other, it may be possible to seamlessly reuse supervision between related tasks or to solve different tasks in one system without adding complexity . For example, it is promising to train a general-purpose network that can perform multi-task image restoration in bad weather conditions with only fine-tuning, covering rain removal, snow removal, dehazing, seasonal change, light adjustment, and so on. In addition, in severe rain and fog, performing image SR while simultaneously removing rain or haze is challenging. In short, the use of GANs for more complex multi-task settings remains an open question worth exploring. • GANs for more challenging domain adaptation. In autonomous systems, transferability is important for computer vision tasks.
Although some works introduce GANs into domain adaptation to improve domain transfer BIB021 , BIB012 , there is still much room for development. When considering more diverse domains and more differentiated cross-domain and cross-style settings, such as road scenarios in different countries, existing methods often cannot guarantee good transferability among these domains. However, GANs, which have shown unprecedented effectiveness in domain transfer, are promising for developing more diverse domain adaptation. It is interesting to study the further use of GANs for more differentiated cross-domain transferability. • RL for multi-modal, multi-task and multi-agent settings. Most RL methods in applications focus primarily on visual input alone. However, when considering information from multiple modalities, such as voice, text, and video, agents can better understand scenes, and the experimental performance becomes more accurate and satisfactory BIB022 , BIB023 . Moreover, in multi-task RL models, the agent is simultaneously trained on both auxiliary tasks and target tasks BIB024 , BIB006 , so that the agent can transfer experience between tasks. Furthermore, thanks to the distributed nature of multi-agent systems, multi-agent RL can gain learning efficiency from shared experience, such as communication, teaching, and imitation . • Meta-learning for unsupervised tasks. Traditional meta-learning assumes supervised learning during both training and testing, in which both the training data and testing data are labelled. However, if we use unlabeled training data, that is, no reward is generated during training, how can we still achieve good results on specific tasks during testing? Leveraging unsupervised embeddings to automatically construct tasks or losses for unsupervised meta-learning is one solution , BIB025 , . After that, the training tasks for meta-learning are constructed.
Therefore, meta-learning can be extended to a much wider range of unsupervised applications. It is interesting to apply unsupervised meta-learning methods to more realistic task distributions, so that the agent can explore and adapt to new tasks more intelligently and the model can solve real-world tasks more effectively. • The application performance of RL and meta-learning. To deal with the differences between simulation environments and real scenes, tasks or networks can be transferred successfully using RL or meta-learning. However, many existing algorithms that perform well in simulation cannot perform as well in the real world BIB013 , which limits the applicability of models trained in simulation. Therefore, content-rich and stable simulation frameworks, including physics engines such as AI2-THOR (The House of Interactions) BIB014 , MuJoCo BIB001 , and GymFC BIB026 , synthetic data sets such as SUNCG BIB015 , and robot operating platforms such as V-REP (Virtual Robot Experimentation Platform) BIB002 , will help keep the learned information detailed and stable, so that performance can remain good when models are transferred to the real world BIB009 , BIB016 . In the future, more informative simulation environments and more stable real-world platforms will narrow the gap between simulation and the real world, making models more transferable and accurate. Then, owing to the high similarity between simulation and real-world platforms, various high-complexity applications trained in simulation could be put into practice directly, such as unmanned vehicles and high-degree-of-freedom robots. V. CONCLUSION In this review, we aim to contribute to the evolution of autonomous systems by exploring the impact of accuracy and transferability on complex computer vision tasks and decision-making problems.
To this end, we mainly focus on challenging basic perception and decision-making tasks in autonomous systems, such as image SR, image deblurring/dehazing/rain removal, semantic segmentation, depth estimation, person re-ID, robot navigation, and robotic manipulation. We introduce basic concepts and methods of transfer learning and its special case, domain adaptation. Then, we briefly discuss three typical generative adversarial networks: GAN, cGAN, and CycleGAN. We also present basic concepts of RL, explain the idea of meta-learning, and discuss the relationship among adversarial learning, RL, and meta-learning. Additionally, we analyze typical DL methods, focusing on the powerful performance of GANs in computer vision tasks, and discuss RL and meta-learning methods for robot control tasks in both simulation and the real world. Moreover, we provide summary tables of learning-based methods for different tasks in autonomous systems, covering the supervision method, model loss functions, and experiment platforms for visual and robot control tasks. Finally, we discuss the main challenges and future work for the perception and decision-making of autonomous systems, considering both accuracy and transferability.
An overview of decoding techniques for large vocabulary continuous speech recognition <s> Introduction <s> This paper proposes an efficient method for large-vocabulary continuous-speech recognition, using a compact data structure and an efficient search algorithm. We introduce a very compact data structure DAWG as a lexicon to reduce the search space. We also propose a search algorithm to obtain the N-best hypotheses using the DAWG structure. This search algorithm is composed of two phases: "forward search" and "traceback". Forward search, which basically uses the time-synchronous Viterbi algorithm, merges candidates and stores the information about them in DAWG structures to create phoneme graphs. Traceback traces the phoneme graphs to obtain the N-best hypotheses. An evaluation of this method's performance using a speech-recognition-based telephone-directory-assistance system having a 4000-word vocabulary confirmed that our strategy improves speech recognition in terms of time and recognition rate. <s> BIB001 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Introduction <s> We combine our earlier approach to context-dependent network representation with our algorithm for determining weighted networks to build optimized networks for large-vocabulary speech recognition combining an n-gram language model, a pronunciation dictionary and context-dependency modeling. While fully-expanded networks have been used before in restrictive settings (medium vocabulary or no cross-word contexts), we demonstrate that our network determination method makes it practical to use fully-expanded networks also in large-vocabulary recognition with full cross-word context modeling. For the DARPA North American Business News task (NAB), we give network sizes and recognition speeds and accuracies using bigram and trigram grammars with vocabulary sizes ranging from 10000 to 160000 words. 
With our construction, the fully-expanded NAB context-dependent networks contain only about twice as many arcs as the corresponding language models. Interestingly, we also find that, with these networks, real-time word accuracy is improved by increasing the vocabulary size and n-gram order. <s> BIB002
The focus of this paper is on the representations of the search space used in a number of decoding strategies for large vocabulary continuous speech recognition (LVCSR). Indeed, the specific way of handling the underlying search network constitutes one central feature of any decoder, and reveals some of the common elements and real differences among various decoding schemes. In particular, different solutions for structuring the search space can be compared with respect to the integration of linguistic and acoustic constraints, as implied by m-gram language models (LM) and cross-word (CW) phonetic contexts. This study has been motivated by recent advances in large vocabulary decoding, concerning both the achieved level of practical performance and the emergence of a new method for building a large vocabulary decoder. Near real-time capabilities are now quite common using low-cost (500 MHz) personal computers, even for difficult tasks like broadcast news transcription. Interestingly, similar levels of performance are achieved using quite different decoding strategies and architectures. In addition, a full expansion of the search network has been shown to be feasible using the weighted finite state transducer (WFST) framework developed at AT&T BIB002 . This is quite a significant departure from the former belief that dynamic expansion could be the only viable approach to LVCSR with long-range LMs because of the huge potential search space, and this in turn deserves our attention to understand what made this evolution possible. Before going further, the scope of the present study has to be made clear: • The emphasis is on LVCSR using long-span LMs like trigrams. Applications dealing with very large item lists (for example, names or addresses related to directory assistance) are not considered here (see, for example, BIB001 ). • References are by no means exhaustive and were chosen to illustrate some "prototypical" cases.
• Little attention is given to multiple-pass decoding and to the use of word-graphs. These topics, while clearly important in developing a recognizer, deserve a more specific study than can be given here. • Likelihood computations will not be considered here, though they often represent an important part of the overall decoding cost, especially with mixtures of continuous distributions. A number of methods can be applied to drastically reduce the complexity of the mixture density calculations (see, for example, ). Hence, we focus on the "pure" search aspects and on the influence of basic design choices upon the overall complexity and efficiency of a one-pass CW m-gram decoder. This study has been structured along two main axes, namely, • static vs. dynamic expansion of the search space; • time-synchronous vs. asynchronous decoding. As will be shown in the following, the decoder's architecture is deeply shaped by the interactions between these two main lines. The paper is organized as follows. The general decoding problem is first formulated in the Bayesian probabilistic framework for hidden Markov models (HMM), and the concept of early recombination is introduced because of its key role in the efficient search for the "best" state sequence. The main actions that have to be carried out by any LVCSR decoder are described. The usual knowledge sources involved in a large vocabulary CW m-gram decoder are reviewed in Section 3, including the use of a phonetic prefix tree. This leads to the representation of the whole search space in terms of a finite re-entrant network. A convenient coordinate system is also introduced. Section 4 presents a broad classification of decoding methods in a tree-like diagram, based on the main axes of network expansion and search strategy.
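As standard background (this is the textbook noisy-channel formulation, not a formula quoted from this paper), the Bayesian decision rule referred to above reads: given the acoustic observations X, find the word sequence W with maximum posterior probability,

```latex
\hat{W} \;=\; \operatorname*{arg\,max}_{W}\; P(W \mid X)
        \;=\; \operatorname*{arg\,max}_{W}\; P(W)\, P(X \mid W),
```

where P(W) is supplied by the LM and P(X | W) by the acoustic (HMM) models. Early recombination then exploits the Markov assumptions: partial hypotheses reaching the same state with the same LM context can be compared immediately, and all but the best-scoring one discarded without loss of optimality.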
The following two sections are devoted to a review of the principal decoding approaches: Section 5 gives a short presentation of methods that lead to a full static expansion of the search network by exploiting the inherent sparsity and redundancies of the knowledge sources, and Section 6 is devoted to dynamic network expansion techniques. Two basic ways of dynamically exploring a tree-structured m-gram network can be distinguished. These are the re-entrant lexical tree method and the start-synchronous tree method, emphasizing, respectively, the role of the word linguistic contexts and of the word start times. These two search avenues are further explained in the framework of either a time-synchronous dynamic programming (DP) search or an asynchronous stack decoder. In Section 7, two heuristic methods are briefly described for further reducing the size of the search space beyond standard beam pruning capabilities, either by constraining the word start times or by looking ahead into the acoustic content of the signal. Section 8 addresses the methodology suitable for evaluating a "real" decoder and presents some experimental evidence drawn from recent DARPA evaluation results for broadcast news transcription. It appears that there is no clear dominant method so far and that the success of any decoder lies in the optimized integration of several components. Finally, a number of pros and cons of the main search approaches are proposed in the conclusion and several directions are considered regarding what could be promising avenues for further improvements in large vocabulary decoding.
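As a concrete reference point for the time-synchronous DP search and beam pruning discussed in the following sections, the core recombination step can be sketched as a toy time-synchronous Viterbi search. This is an illustrative sketch only (the state space, scores and beam width below are invented); a real decoder operates over the HMM states of a large search network and tracks word-level backpointers.

```python
import math

def viterbi_beam(obs_loglik, trans, init, beam=10.0):
    """Toy time-synchronous Viterbi search with beam pruning.

    obs_loglik: list over frames of {state: acoustic log-likelihood}
    trans:      {state: [(next_state, transition log-prob), ...]}
    init:       {state: initial log-prob}
    beam:       states scoring below (frame best - beam) are pruned
    """
    # active hypotheses: state -> (accumulated score, best path into state)
    active = {s: (lp + obs_loglik[0].get(s, -math.inf), [s])
              for s, lp in init.items()}
    for frame in obs_loglik[1:]:
        new = {}
        for s, (score, path) in active.items():
            for nxt, tlp in trans.get(s, []):
                cand = score + tlp + frame.get(nxt, -math.inf)
                # early recombination: keep only the best path per state
                if nxt not in new or cand > new[nxt][0]:
                    new[nxt] = (cand, path + [nxt])
        best = max(v[0] for v in new.values())
        # beam pruning: drop states far below the frame-best score
        active = {s: v for s, v in new.items() if v[0] >= best - beam}
    end = max(active, key=lambda s: active[s][0])
    return active[end][1], active[end][0]
```

Early recombination appears in the inner loop (only the best-scoring path into each state survives), while the beam discards every state whose score falls more than `beam` below the frame-best one.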
An overview of decoding techniques for large vocabulary continuous speech recognition <s> Prefix-tree organization of the lexicon <s> The need for ever more efficient search organizations persists as the size and complexity of the knowledge sources used in continuous speech recognition (CSR) tasks continues to increase. We address efficiency issues associated with a search organization based on pronunciation prefix trees (PPTs). In particular we present (1) a mechanism that eliminates redundant computations in non-reentrant trees, (2) a comparison of two methods for distributing language model probabilities in PPTs, and (3) report results on two look ahead pruning strategies. Using the 1994 DARPA 20 k NAB word bigram for the male segment of si dev5m 92 (the 5k speaker independent development test set for the WSJ), the error rate was 12.2% with a real-time factor of 1.0 on a 120 MHz Pentium. <s> BIB001 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Prefix-tree organization of the lexicon <s> Presents an efficient look-ahead technique which incorporates the language model knowledge at the earliest possible stage during the search process. This so-called language model look-ahead is built into the time-synchronous beam search algorithm using a tree-organized pronunciation lexicon for a bigram language model. The language model look-ahead technique exploits the full knowledge of the bigram language model by distributing the language model probabilities over the nodes of the lexical tree for each predecessor word. We present a method for handling the resulting memory requirements. The recognition experiments performed on the 20,000-word North American Business task (Nov. 1996) demonstrate that, in comparison with the unigram look-ahead, a reduction by a factor of 5 in the acoustic search effort can be achieved without loss in recognition accuracy. 
<s> BIB002 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Prefix-tree organization of the lexicon <s> To cope with the prohibitive growth of lexical tree based search-graphs when using cross-word context dependent (CD) phone models, an efficient novel search-topology was developed. The lexicon is stored as a compact static network with no language model (LM) information attached to it. The static representation avoids the cost of dynamic tree expansion, facilitates the integration of additional pronunciation information (e.g. assimilation rules) and is easier to integrate in existing search engines. Moreover, the network representation also results in a compact structure when words have alternative pronunciations, and due to its construction, it offers partial LM forwarding at no extra cost. Next, all knowledge sources (pronunciation information, language model and acoustic models) are combined by a slightly modified token-passing algorithm, resulting in a one pass time-synchronous recognition system. <s> BIB003 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Prefix-tree organization of the lexicon <s> This paper proposes an efficient method for large-vocabulary continuous-speech recognition, using a compact data structure and an efficient search algorithm. We introduce a very compact data structure DAWG as a lexicon to reduce the search space. We also propose a search algorithm to obtain the N-best hypotheses using the DAWG structure. This search algorithm is composed of two phases: "forward search" and "traceback". Forward search, which basically uses the time-synchronous Viterbi algorithm, merges candidates and stores the information about them in DAWG structures to create phoneme graphs. Traceback traces the phoneme graphs to obtain the N-best hypotheses. 
An evaluation of this method's performance using a speech-recognition-based telephone-directory-assistance system having a 4000-word vocabulary confirmed that our strategy improves speech recognition in terms of time and recognition rate. <s> BIB004 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Prefix-tree organization of the lexicon <s> The paper describes the improved handling of multiple pronunciations achieved in the Philips research decoder by (1) incorporating some prior information about their distributions and (2) combining the acoustic contributions of concurrent alternate word hypotheses. Starting from a baseline system where multiple pronunciations are treated as word copies without priors, an extension of the usual Viterbi decoding is presented which integrates unigram priors in a weighted sum of acoustic probabilities. Several approximations are discussed leading to new decoding aspects. Experimental results are presented for US broadcast news recordings. It is shown that the use of unigram priors has a clear positive impact on both error rate and decoding cost while the sum over multiple pronunciation contributions brings another small improvement. An overall 4% reduction of the error rate is achieved on the HUB-4 evaluation sets of 97 and 98. <s> BIB005
The lexicon defines the list of words with their phonetic transcriptions in terms of a small number of context-independent phoneme symbols. Some word entries may have multiple pronunciations, possibly with prior (unigram) probabilities associated with their occurrence frequencies BIB005 . Structuring the lexicon as a phonetic prefix tree is widely applied since it provides a compact representation with a reduced number of arcs, especially at the word beginnings where most of the search effort occurs. Indeed, when using a stochastic m-gram LM, each word of the lexicon is a possible successor to every hypothesized word end. Hence, sharing the common word stems results in a dramatic reduction of the number of phone arcs needed to generate the next word startup hypotheses. The prefix tree can be built from context-independent phoneme transcriptions or expanded with context-dependent phones (like triphones), the number of arcs in the first generation then increasing from a few tens to several hundred. A problem inherent in the use of a prefix tree is that word identities are only defined at the leaves, which delays the integration of the LM probabilities (see Fig. 1 ). The solution consists of factorizing the word probabilities across the phonetic arcs, a technique called LM smearing or forwarding BIB001 , BIB002 . This achieves the double effect of (1) anticipating the word probabilities and (2) smoothing the score distributions across the phonetic arcs, both factors being highly beneficial for the pruning efficacy. Another interesting side effect of LM smearing concerns the handling of the so-called "linear tails" in the prefix tree. These phonetic arc sequences appear when a given word no longer shares any arc with another word, such that the factorized LM probabilities are equal to one. These tails can be merged across all linguistic contexts, and this can happen at any point of the lexical tree from which a word identity is uniquely determined.
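The factorization of LM probabilities over the tree arcs can be sketched as follows. This is an illustrative toy (the function name, data layout and example words are invented), assuming a plain prefix tree in which no two words share an identical full phone sequence: the anticipated score of a node is the best LM probability of any word below it, and each arc carries the ratio of the anticipated scores of its two end nodes.

```python
def smear_lm(lexicon, lm_prob):
    """LM smearing (probability factorization) on a phonetic prefix tree.

    lexicon:  {word: tuple of phones}
    lm_prob:  {word: LM probability of the word in the current context}
    Returns {(source_prefix, phone): factor}; the product of the factors
    along the path to a word's leaf equals lm_prob[word].
    Assumes no two words share the exact same full phone sequence.
    """
    # Anticipated score of a node = best LM probability of any word below it.
    # The root gets 1.0 so that the accumulated product at a leaf is the
    # full (not relative) word probability.
    best = {(): 1.0}
    for word, phones in lexicon.items():
        for i in range(1, len(phones) + 1):
            prefix = phones[:i]
            best[prefix] = max(best.get(prefix, 0.0), lm_prob[word])
    arcs = {}
    for word, phones in lexicon.items():
        for i in range(len(phones)):
            src, dst = phones[:i], phones[:i + 1]
            # Each arc carries the ratio of its end nodes' anticipated
            # scores, so probability mass is committed as early as possible.
            arcs[(src, phones[i])] = best[dst] / best[src]
    return arcs
```

In this scheme, every arc beyond the node where a word becomes the only remaining candidate gets a factor of exactly one, which is precisely the "linear tail" situation described above.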
More general structures than a straightforward prefix tree can lead to larger reductions of the lexical redundancies, for example, by merging both the identical word beginnings and endings into a phonetic network. However, the construction of such a network is more complex and the decoding algorithm has to be modified, since word identities are no longer uniquely defined at the leaves as happens in a prefix tree, which also has some influence on the smearing of LM scores (see BIB003 , BIB004 ).
An overview of decoding techniques for large vocabulary continuous speech recognition <s> Central role of the m-gram LM in the search network <s> From the Publisher: With this text, you gain an understanding of the fundamental concepts of algorithms, the very heart of computer science. It introduces the basic data structures and programming techniques often used in efficient algorithms. Covers use of lists, push-down stacks, queues, trees, and graphs. Later chapters go into sorting, searching and graphing algorithms, the string-matching algorithms, and the Schonhage-Strassen integer-multiplication algorithm. Provides numerous graded exercises at the end of each chapter. <s> BIB001 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Central role of the m-gram LM in the search network <s> The authors consider the estimation of powerful statistical language models using a technique that scales from very small to very large amounts of domain-dependent data. They begin with improved modeling of the grammar statistics, based on a combination of the backing-off technique and zero-frequency techniques. These are extended to be more amenable to the particular system considered here. The resulting technique is greatly simplified, more robust, and gives improved recognition performance over either of the previous techniques. The authors also consider the problem of robustness of a model based on a small training corpus by grouping words into obvious semantic classes. This significantly improves the robustness of the resulting statistical grammar. A technique that allows the estimation of a high-order model on modest computation resources is also presented. This makes it possible to run a 4-gram statistical model of a 50-million word corpus on a workstation of only modest capability and cost.
Finally, the authors discuss results from applying a 2-gram statistical language model integrated in the HMM (hidden Markov model) search, obtaining a list of the N-best recognition results, and rescoring this list with a higher-order statistical model. <s> BIB002 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Central role of the m-gram LM in the search network <s> This paper presents an efficient way of representing a bigram language model for a beam-search based, continuous speech, large vocabulary HMM recognizer. The tree-based topology considered takes advantage of a factorization of the bigram probability derived from the bigram interpolation scheme, and of a tree organization of all the words that can follow a given one. Moreover, an optimization algorithm is used to considerably reduce the space requirements of the language model. Experimental results are provided for two 10,000-word dictation tasks: radiological reporting (perplexity 27) and newspaper dictation (perplexity 120). In the former domain 93% word accuracy is achieved with real-time response and 23 Mb process space. In the newspaper dictation domain, 88.1% word accuracy is achieved with 1.41 real-time response and 38 Mb process space. All recognition tests were performed on an HP-735 workstation. <s> BIB003 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Central role of the m-gram LM in the search network <s> Reports on experiments of porting the ITC-irst Italian broadcast news recognition system to two spontaneous dialogue domains. The trade-off between performance and the required amount of task specific data was investigated. Porting was experimented by applying supervised adaptation methods to acoustic and language models. By using two hours of manually transcribed speech, word error rates of 26.0% and 28.4% were achieved by the adapted systems.
Two reference systems, developed on a larger training corpus, achieved word error rates of 22.6% and 21.2%, respectively. <s> BIB004
Coming back to the use of an m-gram LM, the probability of a word in an unseen context is generally obtained with an interpolation scheme involving shorter history counts, for example, bigram and unigram counts for a trigram. Along this line, back-off "null" nodes BIB002 have long been used in several systems to take advantage of the small fraction of observed bi- or trigrams. Figure 6 explains the use of a null node in an interpolated backing-off bigram model, where λ(U) is the backing-off normalization factor and P(V) the unigram prior. When a word pair has not been "seen" in the training corpus, the bigram probability is factorized into two terms, P(V) × λ(U), without conditional dependency on the predecessor word U. Consequently, for these unseen word pairs, recombination can be done similarly to the unigram case, on the null node at each word ending, without the need for so-called word copies. This backing-off property has been exploited in BIB003 for carrying out a static tree-based representation of a bigram network. The prefix tree of the whole lexicon appears only once, at the "null" node, while the other predecessor nodes are each connected to a much smaller subtree constructed from the words that belong to the bigrams seen in this linguistic context BIB003 . As indicated in Fig. 7 below, the average number of phonetic arcs in the subtrees is very small, being reduced by a factor of 600 with respect to the whole tree, due to the characteristics of this radiology task, presumably of low perplexity. Another advantage of static tree-based networks is the ease of factorizing the true m-gram probabilities, which can be smeared across phone sequences from leaf to root while building the search network. This results in many linear arc sequences appearing at word endings with an incremental probability of one, as a consequence of the factorization process. These linear tails (see Fig. 1 ) are redundant paths that can be merged and treated in common for all linguistic contexts of that word BIB003 . Other less trivial redundancies are further reduced by applying general optimization techniques developed in the framework of finite state automata BIB001 , leading to another compaction of the network by a factor of two to three. Interestingly, the main impact of these additional redundancy removals is memory saving rather than decoding speed-up BIB003 , since most of the search effort is spent in the first two tree generations due to the focusing capabilities of beam pruning. This technique has recently been extended to a trigram LM with multiple backing-off nodes, as illustrated in BIB004 .
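The null-node factorization described above can be sketched numerically. This is a toy with made-up probabilities (the function name and data layout are invented); a real model would use discounted bigram estimates, properly normalized back-off weights, and log-domain arithmetic.

```python
def bigram_prob(u, v, seen_bigrams, unigram, backoff):
    """Interpolated backing-off bigram P(v | u), factorized via a null node.

    seen_bigrams: {(u, v): discounted bigram probability} for observed pairs
    unigram:      {v: unigram prior P(v)}
    backoff:      {u: back-off normalization factor lambda(u)}
    """
    if (u, v) in seen_bigrams:
        # Observed pair: use the genuine (discounted) bigram estimate.
        return seen_bigrams[(u, v)]
    # Unseen pair: P(v) * lambda(u). The only dependency on the predecessor
    # u is the scalar lambda(u), applied once at the word ending, so all
    # such hypotheses can recombine on a single null node.
    return unigram[v] * backoff[u]
```

Because the unseen-pair score depends on U only through the scalar λ(U), every hypothesis taking the back-off route can recombine on one shared null node before entering the single copy of the prefix tree, which is what makes the static tree-based bigram network compact.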
An overview of decoding techniques for large vocabulary continuous speech recognition <s> Weighted finite state transducer method (WFST) <s> We present the concepts of weighted language, transduction and automaton from algebraic automata theory as a general framework for describing and implementing decoding cascades in speech and language processing. This generality allows us to represent uniformly such information sources as pronunciation dictionaries, language models and lattices, and to use uniform algorithms for building decoding stages and for optimizing and combining them. In particular, a single automata join algorithm can be used either to combine information sources such as a pronunciation dictionary and a context-dependency model during the construction of a decoder, or dynamically during the operation of the decoder. Applications to speech recognition and to Chinese text segmentation will be discussed. <s> BIB001 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Weighted finite state transducer method (WFST) <s> To cope with the prohibitive growth of lexical tree based search-graphs when using cross-word context dependent (CD) phone models, an efficient novel search-topology was developed. The lexicon is stored as a compact static network with no language model (LM) information attached to it. The static representation avoids the cost of dynamic tree expansion, facilitates the integration of additional pronunciation information (e.g. assimilation rules) and is easier to integrate in existing search engines. Moreover, the network representation also results in a compact structure when words have alternative pronunciations, and due to its construction, it offers partial LM forwarding at no extra cost. Next, all knowledge sources (pronunciation information, language model and acoustic models) are combined by a slightly modified token-passing algorithm, resulting in a one pass time-synchronous recognition system. 
<s> BIB002 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Weighted finite state transducer method (WFST) <s> We combine our earlier approach to context-dependent network representation with our algorithm for determining weighted networks to build optimized networks for large-vocabulary speech recognition combining an n-gram language model, a pronunciation dictionary and context-dependency modeling. While fully-expanded networks have been used before in restrictive settings (medium vocabulary or no cross-word contexts), we demonstrate that our network determination method makes it practical to use fully-expanded networks also in large-vocabulary recognition with full cross-word context modeling. For the DARPA North American Business News task (NAB), we give network sizes and recognition speeds and accuracies using bigram and trigram grammars with vocabulary sizes ranging from 10000 to 160000 words. With our construction, the fully-expanded NAB context-dependent networks contain only about twice as many arcs as the corresponding language models. Interestingly, we also find that, with these networks, real-time word accuracy is improved by increasing the vocabulary size and n-gram order. <s> BIB003 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Weighted finite state transducer method (WFST) <s> The redundancy and the size of networks in large-vocabulary speech recognition systems can have a critical effect on their overall performance. We describe the use of two new algorithms: weighted determinization and minimization (Mohri, 1997a). These algorithms transform recognition labeled networks into equivalent ones that require much less time and space in large-vocabulary speech recognition. 
They are both optimal: weighted determinization eliminates the number of alternatives at each state to the minimum, and weighted minimization reduces the size of deterministic networks to the smallest possible number of states and transitions. These algorithms generalize classical automata determinization and minimization to deal properly with the probabilities of alternative hypotheses and with the relationships between units (distributions, phones, words) at different levels in the recognition system. We illustrate their use in several applications, and report the results of our experiments. <s> BIB004 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Weighted finite state transducer method (WFST) <s> Although finite-state transducers have been widely used in linguistics, their application to speech recognition has begun only recently (M. Mohri, 1997). We describe our implementation of French large vocabulary recognition based on transducers, and how we take advantage of this approach to integrate automatic pronunciation rules and cross-word phenomena such as French "liaison". We also show that a simple, single-level Viterbi algorithm can efficiently decode speech recognition transducers and handle cross-word context models and cross-word phonological rules. In our experiments we compared network size, error rate and decoding speed of our transducer based recognizer against a baseline HTK recognizer, on a large vocabulary French dictation task. Transducers reduced search time by a factor of 25 compared to our HTK recognizer. We also examined the effect of automated pronunciation rules, and their combination with crossword phonological rules that control "liaison". We obtained a 23% relative reduction in the word error rate on a 5000 word task. 
<s> BIB005 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Weighted finite state transducer method (WFST) <s> This paper explores the use of the phone and syllable as primary units of representation in the first stage of a two-stage recognizer. A finite-state transducer speech recognizer is utilized to configure the recognition as a two-stage process, where either phone or syllable graphs are computed in the first stage, and passed to the second stage to determine the most likely word hypotheses. Preliminary experiments in a weather information speech understanding domain show that a syllable representation with either bigram or trigram language models provides more constraint than a phonetic representation with a higher-order n-gram language model (up to a 6-gram), and approaches the performance of a more conventional single-stage word-based configuration. <s> BIB006
This approach is the outcome of several years of research at AT&T and has recently reached the point of becoming an attractive alternative for building a large vocabulary decoder (see, among others, BIB001 BIB003 BIB004 BIB005 BIB006 , and elsewhere in this special issue). It offers an elegant unified framework for representing the knowledge sources and producing a search network optimized down to the HMM state level. Along this line, it integrates and extends the main ideas exposed in the previous section devoted to the central role of the LM in the search network. The WFST approach is very briefly sketched as follows: (1) transducers are finite state networks associating input and output symbols on each arc, possibly weighted with a log probability value. They can be used for representing all knowledge sources involved in LVCSR, like a lexicon with pronunciation variants, stochastic m-grams or deterministic phone-in-context expansion rules; (2) transducers can be combined using the composition operator, leading to the integration of the underlying modeling levels into one input-output relation. For example, using the symbol "o" for the composition operator, {C o L o G} would provide a mapping from context-dependent phones up to word sequences, the transducers G, L and C representing, respectively, the grammar model, the context-independent lexicon and the context-dependent phones; (3) the network is further optimized by weighted determinization followed by minimization, two techniques borrowed from finite state automata theory. An optional step that comes after the network has been determinized consists of "pushing" the weights towards the initial state, much like the LM smearing technique already described BIB004 . The order in which the individual transducers are composed and optimized might also play a role in obtaining the most compact search network. The third point deserves some comments concerning the criteria that are pursued for optimizing the network structure.
Determinization aims, ideally, at getting a network where any input sequence is matched by at most one path, thus reducing the computer time and space for decoding (Mohri, Pereira, Riley & AT&T Labs Research, 2000). In practice, this is a complex task implying, among other things, the elimination of all empty arcs, such that the total number of arcs might even be increased in the determinized network. When applying the WFST method, this pre-processing step is the one that requires the largest computational resources, especially in terms of memory needs. Note that WFSTs can also be used "on the fly" during decoding and not only for obtaining a static network expansion. However, this prevents a global optimization of the network and makes the decoder more complex. Some of the main achievements of WFSTs can be summarized as follows: • the knowledge sources are handled in a highly flexible way, independently of the decoder specifics, for example, about the contextual scope of the linguistic or phonetic constraints; • the final optimized network is typically a few times larger than the original LM in terms of number of arcs; • CW context expansion increases the network by just a few percent with respect to the optimized context-independent network BIB003 . This last point is quite remarkable and results from postponing the context expansion until after having taken advantage of the m-gram sparsities and lexical redundancies, such that, presumably, relatively few fan-out expansions are still necessary. Many issues remain open and are currently under study. One direction of work points to the network pre-processing stages and aims at reducing the memory requirements, which appear to be a bottleneck when very large (language) models are considered. A second direction concerns the best way to handle the final network: should it be loaded in central memory, or could it be left on disk and efficiently accessed on demand?
Another related topic concerns the possibility of statically expanding and optimizing some of the knowledge sources while handling the others dynamically. In BIB002 , such a "hybrid" approach has been presented, where the lexical and phonetic constraints are optimized statically, the LM being decoupled from the other knowledge sources. We will come back to this point in the final conclusion.
Dynamic search network expansion
Integrating the network expansion in the decoding process has received considerable attention, partly from necessity because of the potential search space size, but also motivated by the self-focusing property of beam search. Applying the best knowledge sources from the outset, to obtain the strongest constraints on the search space, has been the key idea leading to one-pass decoders based on dynamically built networks BIB001 BIB002 BIB003 BIB004 . An important aspect has consisted, so far, of assuming a regular network structure to deal with the "CW m-gram" constraints, mainly for pragmatic reasons related to algorithmic complexity. Along this line, a phonetic prefix-tree organization of the lexicon has established itself as a generic building block of the network, since it offers a good tradeoff between simplicity and compactness at word starts. It must be understood that this generic tree structure is stored only once and that the search network is constructed partially and dynamically, using virtual nodes and temporary structures containing only the information necessary to process the expanded hypotheses. Another key feature concerns the point of view adopted for structuring the m-gram search space, where the emphasis can be placed either on the linguistic context or on the start time of a word. Hence, two basic approaches can be identified for dynamically exploring an m-gram tree-structured network and generating word sequences of increasing length: • the re-entrant tree, where a virtual tree copy is explored for each active linguistic context. This information remains "attached" to each path 6 and recombination is performed at virtual root nodes that depend on the history taken into account by the LM. This method is also known as the "word-conditioned search" and is illustrated in Fig. 8 ; • the start-synchronous tree, where a virtual tree copy is entered at each time requesting successor word hypotheses. All paths having reached a word-end hypothesis at a given time are thus extended by exploring the same virtual tree copy associated with this starting time. The terminology has been suggested in BIB005 , but the method has also been described as the "time-conditioned search" BIB006 . Figure 9 explains this second strategy. Both methods have been applied in the framework of time-synchronous dynamic programming search, while the second has also been investigated in multi-stack asynchronous decoders.
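The shared prefix-tree organization underlying both approaches can be sketched as follows. The toy lexicon and phone symbols are invented for illustration; a real pronunciation lexicon would be far larger and would also carry arc identities and tied HMM labels.

```python
# Minimal sketch of a phonetic prefix tree (lexical trie): pronunciations
# sharing a prefix share arcs, and word identities sit at word-end nodes,
# which may be interior nodes (e.g. "see" inside "seed").

class TreeNode:
    def __init__(self):
        self.children = {}       # phone -> TreeNode
        self.words = []          # words ending at this node

def build_prefix_tree(lexicon):
    root = TreeNode()
    for word, phones in lexicon.items():
        node = root
        for p in phones:
            node = node.children.setdefault(p, TreeNode())
        node.words.append(word)
    return root

def count_arcs(node):
    return sum(1 + count_arcs(c) for c in node.children.values())

lexicon = {"see":  ["s", "iy"],
           "seed": ["s", "iy", "d"],
           "sit":  ["s", "ih", "t"]}
tree = build_prefix_tree(lexicon)
# The 8 phones of the flat lexicon collapse into 5 shared arcs:
assert count_arcs(tree) == 5
# "see" ends at an interior node, so word-end hypotheses must be
# generated inside the tree, not only at leaves:
assert tree.children["s"].children["iy"].words == ["see"]
```

This single structure is what gets explored repeatedly, either once per linguistic context (re-entrant tree) or once per start time (start-synchronous tree).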
The word history conditioned DP organization
The hypotheses active at the "current" time are recorded in lists structured on a three-level hierarchy, as shown in Fig. 10 . The leading variable is the word history, which is coded as an m-gram state index. This means that the three dependent coordinates of the search space are spanned in the following order: LM-State → Arc-Id → State-Id, the emphasis being on the common predecessor word history shared by the next words being expanded (a related organization uses a per-state stack BIB001 ). An additional structure keeps track of the "back-pointers" needed to retrieve the preceding words up to the sentence start. This algorithm has been introduced in and further extended in and BIB002 .
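A minimal sketch of this bookkeeping, with invented identifiers and scores, could look as follows; real decoders use packed arrays rather than nested dictionaries, but the three-level indexing and the back-pointer table are the essential ideas.

```python
# Sketch of the three-level hypothesis bookkeeping:
# LM-State -> Arc-Id -> State-Id -> (score, back-pointer).
# The back-pointer table lets the best word sequence be traced
# back to the sentence start.

hyps = {}            # lm_state -> arc_id -> hmm_state -> (score, bp)
backpointers = []    # bp index -> (word, end_time, previous bp)

def activate(lm_state, arc_id, hmm_state, score, bp):
    """Keep only the best-scoring hypothesis per search-space cell."""
    cell = hyps.setdefault(lm_state, {}).setdefault(arc_id, {})
    if hmm_state not in cell or score > cell[hmm_state][0]:
        cell[hmm_state] = (score, bp)

def word_end(word, end_time, bp):
    """Record a finished word; return the new back-pointer index."""
    backpointers.append((word, end_time, bp))
    return len(backpointers) - 1

def trace(bp):
    words = []
    while bp is not None:
        word, _, bp = backpointers[bp]
        words.append(word)
    return list(reversed(words))

# Toy run: "hello" ends at t=42, its history then expands new arcs.
bp0 = word_end("hello", 42, None)
activate(("hello",), arc_id=3, hmm_state=1, score=-11.5, bp=bp0)
bp1 = word_end("world", 80, bp0)
assert trace(bp1) == ["hello", "world"]
```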
Integration of CW contexts
In a time-synchronous dynamic search, CW phone modelling implies that multiple contexts have to be considered at word endings to anticipate the next successor words. This leads to the concept of fan-out expansion BIB001 , where the last phonetic arc of a word is given several instances, each with a distinct right conditioning context. This step can be made more efficient by (1) taking advantage of the redundancy among tied phone HMMs to reduce the fan-out size BIB003 and (2) applying a special language-model look-ahead pruning scheme BIB002 . Indeed, the identity of the right context restricts the set of successor words, such that the most promising fan-out arcs can be selected by taking the coming m-gram scores into account. The m-gram recombination at word ends has to take this successor constraint into account BIB002 and can possibly be performed before expanding the fan-out. Finally, when re-entering the tree, the phonetic arcs have to be selected according to the fan-out right context, and the left context of these first-generation arcs is specified by the last phone(s) of the previous word. This implies either an "on the fly" instantiation of the corresponding context-dependent HMM or the use of multiple instances of the first-generation arcs. Figure 12 illustrates the main lines of CW transitions in the framework of a re-entrant prefix tree, where two cases of optional pauses are considered between consecutive words: the first one concerns a "short" silence compatible with across-word coarticulation, while the second case is a "long" pause that can be followed by any word of the lexicon, as indicated by the wild-card symbol * BIB002 .
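The fan-out reduction of point (1) can be sketched directly: each right context would normally receive its own instance of the word's last phone, but contexts mapped to the same tied triphone HMM can share one fan-out arc. The tying function below is invented for illustration (real systems derive it from decision-tree state tying).

```python
# Sketch of fan-out reduction via HMM tying.

def fanout_arcs(last_phone, left, right_contexts, tied_model):
    """Return one fan-out arc per distinct tied HMM, with the set of
    right contexts each arc stands for (used by look-ahead pruning)."""
    arcs = {}
    for r in right_contexts:
        model = tied_model(left, last_phone, r)
        arcs.setdefault(model, set()).add(r)
    return arcs

# Toy tying rule: all vowel right contexts map to one generalized model.
VOWELS = {"aa", "iy", "ih", "eh"}
def tied_model(l, c, r):
    return (l, c, "V" if r in VOWELS else r)

arcs = fanout_arcs("t", "ih", ["aa", "iy", "s", "eh"], tied_model)
# Four right contexts collapse into two fan-out arcs:
assert len(arcs) == 2
```

The per-arc context sets returned here are exactly what the look-ahead pruning of point (2) needs: the best m-gram score over an arc's successor words gives the arc's pruning score.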
Time-synchronous search based on start-synchronous trees
The main idea here is to share a single one-word extension step among paths ending at the same "current" time, however with different m-gram histories. In this way, the DP time-alignment process is done at most once per word model and per start time, avoiding the need for so-called word copies. The search space is thus structured on the start time of the successor words, which are hypothesized by propagating the DP recurrence across one prefix tree started at that time. As a consequence, the word-boundary optimization is no longer performed implicitly as for the re-entrant tree, and has to be carried out in a separate step at each word ending. This is beneficial for generating dense word graphs BIB001 BIB003 but makes the word-end recombination step quite expensive BIB002 . This is, however, compensated by the size of the activated search space, which appears almost independent of the LM complexity. A remarkable feature is that the average number of active tree hypotheses per time frame is about the average word duration BIB002 . On the other hand, this architecture appears less favourable for CW context expansions and for m-gram look-ahead pruning schemes involving more than just unigram probabilities. This is a direct consequence of sharing the prefix-tree expansion among simultaneously ending words bearing different linguistic and phonetic (left) contexts (see Fig. 9 ), as opposed to the re-entrant tree, where paths with distinct histories are grouped accordingly. To the best of my knowledge, this method has so far only been tested with unigram language smearing and with non-cross-word models.
Figure 13. Search expansion in a multi-stack decoder BIB001 .
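The explicit word-boundary optimization performed at word ends in the time-conditioned search can be sketched as follows. The tree started at each time supplies (start_time, acoustic_score) candidates for a word; all names, scores and times below are invented for illustration, and a real decoder would of course interleave this with the time-synchronous DP over the trees.

```python
# Sketch of word-end recombination with explicit boundary optimization.

NEG_INF = float("-inf")

# histories[lm_state][t] = best score of a path with that history ending at t
histories = {("<s>",): {0: 0.0}}

def recombine(lm_prev, lm_state, lm_score, end_time, candidates):
    """Pick the start time maximizing predecessor score + acoustic score
    (the word-boundary optimization), then extend the predecessor
    history by one word and recombine per LM state."""
    best = NEG_INF
    for t0, ac in candidates:
        prev = histories.get(lm_prev, {}).get(t0, NEG_INF)
        best = max(best, prev + ac)
    if best > NEG_INF:
        slot = histories.setdefault(lm_state, {})
        slot[end_time] = max(best + lm_score, slot.get(end_time, NEG_INF))

# "hello" hypothesized by the tree started at t = 0; two boundary options,
# but only t0 = 0 connects to an existing predecessor history:
recombine(lm_prev=("<s>",), lm_state=("<s>", "hello"), lm_score=-2.0,
          end_time=40, candidates=[(0, -12.0), (1, -12.5)])
assert histories[("<s>", "hello")][40] == -14.0
```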
Asynchronous stack decoding
This approach stems from sequential decoding methods developed some decades ago in communication theory BIB001 . In LVCSR, a stack decoder implements a best-first tree search which proceeds by extending, word by word, one or several selected hypotheses without the constraint that they all end at the same time. Running hypotheses are handled using a stack which is a priority queue sorted on likelihood scores and possibly on time. Depending on the implementation, there may be one single stack BIB002 or several stacks , each one grouping the theories associated to the same end-time. Distinguishing between uni-and multi-stack techniques will not be further considered here, being outside the scope of this overview. Compared to time-synchronous beam search, there are three specific problems to be solved in a stack decoder: • which theory(ies) should be selected for extension? • how to efficiently compute one-word continuations? • how to get "reference" score values for pruning? The first point relates to the use of heuristics (known as A * principle), and essentially depends on which information is available regarding the not yet decoded part of the sentence. In a multi-pass strategy, a first decoding can provide an estimation of the probability for the "remaining" part. A good example of such a situation is the computation of N -best sentence hypotheses in a scored word graph where a first backward pass can provide at each node the score of the best path up to the sentence end. For a one-pass decoder, however, an estimation of the future path likelihood can only be obtained by a look-ahead technique. An alternative that does not need looking-ahead in the signal has been presented in BIB002 and relies on least upper bounds taken on the path scores that have been expanded so far. In practice, this leads to a "shortest best path" choice BIB003 BIB004 with the consequence that the search space expansion becomes quasi-synchronous i.e. 
without large end-time differences between active theories. The one-word extensions can be computed either with the start-synchronous tree method of the previous section or using a fast-match algorithm to first get a short list of word candidates that are then processed sequentially for continuing one or more theories BIB003 BIB006 . This fast-match component typically relies on a lexical prefix tree and on simplified acoustic models to achieve the highest efficiency. Concerning the start-synchronous lexical tree exploration, it is worth pointing out that this step has been achieved either with a standard time-synchronous DP scheme or with a "time-first" asynchronous method BIB005 , the latter requiring less memory storage. Pruning is non-trivial due to the difficulty of comparing the scores of paths having different lengths. The solution consists of progressively updating the best likelihood scores that can be achieved along the time axis by a path having complete word extensions. This requires temporarily storing the score sequences of the respective paths. Broadly speaking, this leads to the concept of an envelope, defined as the lowest upper bound of the individual score "profiles" of the paths expanded so far BIB003 . Based on the current score envelope, a path may be labelled as active or not, and this decision may be reconsidered in the course of the decoding process. Lastly, integrating CW phonetic contexts is easily achieved in two stages BIB003 BIB007 by considering left-only conditioned contexts first and, after the one-word extension has been accomplished, proceeding to a re-scoring with the now available right context. The main difference and relative advantage with respect to a time-synchronous search is that CW contexts can be applied on individual word strings, with great ease in incorporating longer context ranges (at least to the left), without the need for fan-out expansions.
Long-range LM constraints are likewise easily integrated, the recombination step being subject to the already known dominance principle.
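The envelope-based pruning described above can be sketched in a few lines. This is a minimal illustrative sketch, not the implementation of BIB003: the function names, the dictionary-based score-profile representation, and the beam width are assumptions made here for clarity.

```python
import math

def update_envelope(envelope, profile):
    """Merge one path's score profile (frame -> log-likelihood) into the
    envelope, keeping the best score seen so far at each frame. The
    envelope is thus the lowest upper bound of all merged profiles."""
    for t, score in profile.items():
        if score > envelope.get(t, -math.inf):
            envelope[t] = score
    return envelope

def is_active(envelope, t, score, beam=10.0):
    """A path ending at frame t stays active if its score is within
    `beam` of the envelope value at that frame; the decision can be
    reconsidered whenever the envelope is updated."""
    return score >= envelope.get(t, -math.inf) - beam

# toy example: two expanded paths with different end times
envelope = {}
update_envelope(envelope, {10: -120.0, 20: -250.0})
update_envelope(envelope, {10: -130.0, 25: -280.0})

print(envelope[10])                     # -120.0: best score at frame 10
print(is_active(envelope, 10, -125.0))  # True: within the beam
print(is_active(envelope, 10, -140.0))  # False: pruned
```

Note that, unlike a fixed time-synchronous beam, the envelope compares paths with different end times on a per-frame basis, which is exactly what makes pruning meaningful for hypotheses of different lengths.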
An overview of decoding techniques for large vocabulary continuous speech recognition <s> Delayed LM incorporation with heuristic boundary optimization <s> The authors introduce a new, more efficient algorithm, the word-dependent N-best algorithm, for finding multiple sentence hypotheses. The proposed algorithm is based on the assumption that the beginning time of a word depends only on the preceding word. The authors compare this algorithm with two other algorithms for finding the N-best hypotheses: the exact sentence-dependent method and a computationally efficient lattice N-best method. Although the word-dependent algorithm is computationally much less expensive than the exact algorithm, it appears to result in the same accuracy. The lattice method, which is still more efficient, has a significantly higher error rate. It is demonstrated that algorithms that use Viterbi scoring have significantly higher error rates than those that use total likelihood scoring. <s> BIB001 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Delayed LM incorporation with heuristic boundary optimization <s> We address the problem of using word graphs (or lattices) for the integration of complex knowledge sources like long span language models or acoustic cross-word models, in large vocabulary continuous speech recognition. A method for efficiently constructing a word graph is reviewed and two ways of exploiting it are presented. By assuming the word pair approximation, a phrase level search is possible while in the other case a general graph decoder is set up. We show that the predecessor-word identity provided by a first bigram decoding might be used to constrain the word graph without impairing the next pass. This procedure has been applied to 64 k-word trigram decoding in conjunction with an incremental unsupervised speaker adaptation scheme.
Experimental results are given for the North American Business corpus used in the November '94 evaluation. <s> BIB002 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Delayed LM incorporation with heuristic boundary optimization <s> We describe a new search algorithm for speech recognition which applies the monotone graph search procedure to the problem of building a word graph. A first backward pass provides a method for estimating the word boundary times and phone segment boundary times needed to build the word graph using either the 1-phone or 2-phone lookahead assumptions. It also provides a heuristic for the search which satisfies the monotonicity condition. A second backward pass applies forward–backward pruning to the word graph. We show how the search can be made to run very quickly if the 1-phone lookahead assumption holds. We present the results of experiments performed on the 5000-word speaker-independent Wall Street Journal task under both the 1-phone and 2-phone lookahead assumptions. These results show that the 1-phone lookahead assumption leads to unacceptably large error rates for speaker-independent recognition using current acoustic phonetic modelling techniques. Finally, we give an account of the methods we have developed to process speech data in successive blocks so as to address the real-time issue and to control the memory requirements of the search. <s> BIB003 </s> An overview of decoding techniques for large vocabulary continuous speech recognition <s> Delayed LM incorporation with heuristic boundary optimization <s> High computational effort hinders wide-spread deployment of large-vocabulary continuous-speech recognition (LVCSR), for example in home or mobile devices. To this end, we developed a novel approach to LVCSR Viterbi decoding with significantly reduced effort.
By a novel search-space organization called virtual hypothesis copies, we eliminate search-space copies that are approximately redundant: 1) Word-lattice generation and (M+1)-gram lattice rescoring are integrated into a single-pass time-synchronous beam search. Hypothesis copying becomes independent from the language-model order. 2) The word-pair approximation is replaced by the novel phone-history approximation (PHA). Tree copies are shared among multiple linguistic histories that end in the same phone(s). 3) Copies of individual tree arcs are shared by recombining within-word hypotheses at phone boundaries according to the PHA. At no loss of accuracy, we achieve a search-space reduction of 60-80% for Mandarin LVCSR, and of 40-50% for English (NAB 64 K). The method is exact under certain model assumptions. A formal specification is derived. In addition, we propose an extremely effective syllable lookahead for Mandarin. Together with the methods above, search space was reduced 12-15 times and state likelihood evaluations 4-9 times without significant error increase. <s> BIB004
This search technique appears somewhat hybrid in the sense that it borrows from both the re-entrant tree and start-synchronous tree methods to reach the highest efficiency. The general idea is to assume that the word boundary depends on a narrower context (<m words), possibly phonetically motivated, such that the word expansion and boundary time τ obtained by extending the best theory(ies) can be shared among other m-gram word histories. It is understood that for these alternative phrases the LM will be applied after the word expansion has been completed. This fits with the concept of delayed LM incorporation, which has been shown to significantly reduce the active search space by eliminating redundant m-gram word copies. An example of such a strategy is given by the word pair approximation BIB001 BIB002 , which assumes that the word boundary depends only on the current and immediate predecessor words, i.e. τ = f(v, w_t; t). LM re-scoring can be subsequently performed with a higher m-gram order (m > 2) at very little cost, provided the word hypotheses have been stored in a lattice-like structure based on the word-pair optimized boundaries. As described in and BIB004 , this can actually be achieved in one single decoding pass, thus taking advantage of the m-gram LM to get a more focused search. In practice, the word-pair approximation has been shown to work quite well except for some very short words like one-phoneme function words BIB002 . This leads to the idea that the word boundary should actually be made dependent on phoneme histories rather than on linguistic word sequences. In BIB003 , the estimation of word and phone boundary times has been investigated in the context of a monotone graph search strategy, using either a one-phone or a two-phone look-ahead assumption.
Based on U.S.-English experiments, the authors conclude that triphone modelling for speaker-independent recognition can be supported by the two-phone approximation, the one-phone case leading to unacceptable inaccuracies. A similar approach has been recently pursued in BIB004 in the framework of a time-synchronous re-entrant tree decoder, where a phone-history approximation is introduced, assuming that the optimal word boundary only depends on the last p phones of a given active path. Experiments have been carried out for Mandarin recognition using half-syllable units and for U.S.-English as well, based on non-cross-word triphones. For LVCSR of Mandarin Chinese, a relative reduction of the search space (in terms of active state hypotheses) in the range of 60-80% has been achieved at no loss of accuracy, which is partly attributed to the structural properties of Chinese as a mono-syllabic language. For U.S.-English on the North American English test-sets, the reduction appears to be in the range of 40-50%, clearly showing the potential offered by appropriate word boundary heuristics to speed up an m-gram search process.
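The sharing enabled by the word-pair approximation τ = f(v, w_t; t) can be illustrated with a small caching sketch: the expensive boundary optimization is performed once per (predecessor, word, end-time) triple and then reused by every longer m-gram history ending in that pair. All names here (`word_boundary`, `fake_optimize`) and the placeholder boundary computation are hypothetical, not the cited systems' code.

```python
# Hypothetical sketch of the word-pair approximation: the optimal boundary
# time between two words is assumed independent of the earlier history,
# so it is computed once per (predecessor, word) pair and shared by all
# m-gram histories during delayed LM rescoring.

boundary_cache = {}

def word_boundary(v, w, t, optimize):
    """Return the boundary time for word w following predecessor v and
    ending at frame t; `optimize` stands in for the (expensive)
    acoustic-level boundary optimization, invoked only on a cache miss."""
    key = (v, w, t)
    if key not in boundary_cache:
        boundary_cache[key] = optimize(v, w, t)
    return boundary_cache[key]

# toy stand-in for the acoustic optimization, counting its invocations
calls = []
def fake_optimize(v, w, t):
    calls.append((v, w, t))
    return t - len(w)  # placeholder boundary, not a real acoustic result

# two different trigram histories ending in the same word pair share one call
tau1 = word_boundary("the", "cat", 50, fake_optimize)  # history "saw the cat"
tau2 = word_boundary("the", "cat", 50, fake_optimize)  # history "fed the cat"
assert tau1 == tau2 and len(calls) == 1
```

The phone-history approximation of BIB004 follows the same caching idea, except that the key would be the last p phones of the active path rather than the predecessor word.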
Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> I. INTRODUCTION <s> Spectrum sharing is a powerful alternative to deal with the exponential increase on the wireless communication capacity demand. In this context, the coexistence of two of the most prominent wireless technologies today, Long Term Evolution (LTE) and Wi-Fi, is an important research topic. In the most common Wi-Fi network operation, the Distributed Coordination Function (DCF), communication nodes access the channel only if the interference level is below a certain threshold. Then, Wi-Fi operation is severely affected when in coexistence with LTE. This paper proposes the use of LTE uplink (UL) power control to improve LTE/Wi-Fi coexistence. With the introduction of an additional factor to the conventional LTE UL power control, a controlled decrease of LTE UL transmit powers is carried out according to interference measurements, giving opportunity to Wi-Fi transmissions. The proposed LTE UL power control with interference aware power operating point is a flexible tool to deal with the trade-off between LTE and Wi-Fi performances in coexistence, since it is able to set different LTE/Wi-Fi coexistence configurations with the choice of a single parameter. Simulation results show that the proposed approach can provide similar or better performance for both LTE and Wi-Fi networks than a previously proposed interference avoidance mechanism. <s> BIB001 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> I. INTRODUCTION <s> Innovations enabling efficient spectrum utilization is a key element to optimize user experience with growing data demand. This paper discusses the approach of extending enhancements in cellular technology like LTE to unlicensed band for higher spectral efficiency and better user experience. A key challenge for such extension is the coexistence with legacy technology such as Wi-Fi. 
The description herein highlights techniques for effective coexistence. The results include evaluation and lab data that demonstrate how the technology provides benefits to surrounding Wi-Fi deployments and contributes towards enhancing the spectral efficiency of the unlicensed band. <s> BIB002 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> I. INTRODUCTION <s> The phenomenal growth of mobile data demand has brought about increasing scarcity in available radio spectrum. Meanwhile, mobile customers pay more attention to their own experience, especially in communication reliability and service continuity on the move. To address these issues, LTE-Unlicensed, or LTE-U, is considered one of the latest groundbreaking innovations to provide high performance and seamless user experience under a unified radio technology by extending LTE to the readily available unlicensed spectrum. In this article, we offer a comprehensive overview of the LTE-U technology from both operator and user perspectives, and examine its impact on the incumbent unlicensed systems. Specifically, we first introduce the implementation regulations, principles, and typical deployment scenarios of LTE-U. Potential benefits for both operators and users are then discussed. We further identify three key challenges in bringing LTE-U into reality together with related research directions. In particular, the most critical issue of LTE-U is coexistence with other unlicensed systems, such as widely deployed WiFi. The LTE/WiFi coexistence mechanisms are elaborated in time, frequency, and power aspects, respectively. Simulation results demonstrate that LTE-U can provide better user experience to LTE users while well protecting the incumbent WiFi users’ performance compared to two existing advanced technologies: cellular/WiFi interworking and licensed-only heterogeneous networks (Het-Nets). <s> BIB003
AS THE rapid progress and pleasant experience of Internet-based services continue, there is an increasing demand for high data rates in wireless communication systems, such that the growth of mobile traffic over the next decade is expected to be over one thousand times . However, since the usable licensed spectrum is of limited physical extent, new licensed frequency bands are becoming rare and expensive. To respond to the increased demand for wireless communication capacity, innovation focusing on techniques that enable better use of different types of spectrum for traffic offload, including unlicensed bands, is urgently needed BIB002 . It is assumed that up to thirty percent of broadband access in cellular networks can be offloaded to unlicensed bands, primarily Wi-Fi networks until now . The extension of LTE-LAA over the 5 GHz UNII band and the requirement to provide fair coexistence of LTE-LAA with other technologies operating on 5 GHz are two major observations of the ongoing discussion in the 3rd Generation Partnership Project (3GPP) BIB003 , . While considering the coexistence of Wi-Fi and LTE-LAA in the 5 GHz UNII spectrum, designers should ensure that LTE-LAA can coexist with Wi-Fi fairly and amicably in the unlicensed band by complying with the regulatory requirements of the local government in a region. In some markets, like the U.S., South Korea and China, there is no Listen-Before-Talk (LBT) requirement. Without changing the LTE air interface protocol, coexistence with Wi-Fi in those scenarios can be realized for LTE Releases 10-12 by using specific techniques such as Carrier Sense Adaptive Transmission (CSAT). In markets like Europe and Japan where LBT is required, however, the LTE air interface would need changes with the introduction of the LBT feature, potentially in 3GPP Release 13 . To the best of our knowledge, current research mainly aims at mechanisms capable of enabling the coexistence of LTE-LAA and Wi-Fi.
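As a rough illustration of the LBT principle, a clear-channel-assessment loop senses the medium and transmits only after observing enough idle slots. This is a toy sketch, not the actual ETSI or 3GPP procedure (which prescribes specific CCA slot durations, energy-detection thresholds, and contention-window rules); the threshold value, slot handling, and function name are assumptions made here.

```python
import random

def listen_before_talk(sense_energy_dbm, threshold_dbm=-72.0,
                       max_backoff_slots=15):
    """Toy LBT clear-channel assessment: draw a random backoff, count it
    down only in slots where the sensed energy is below the threshold
    (channel idle), and transmit once the counter reaches zero.
    `sense_energy_dbm` is a callable returning the energy measured in
    the current slot; the return value is the number of slots waited."""
    backoff = random.randint(0, max_backoff_slots)
    slots_waited = 0
    while backoff > 0:
        slots_waited += 1
        if sense_energy_dbm() < threshold_dbm:  # channel idle this slot
            backoff -= 1                         # busy slots freeze the counter
    return slots_waited

# example: on a channel that is always idle, the wait equals the drawn backoff
waited = listen_before_talk(lambda: -90.0)
print(waited)
```

By contrast, a CSAT-style scheme would not sense before each transmission but adapt an on/off duty cycle based on longer-term channel utilization measurements, which is why it fits markets without an LBT requirement.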
It should be noticed that the coexistence performance of LTE-LAA and Wi-Fi in the 5 GHz UNII spectrum varies considerably across deployment scenarios. Taking the early coexistence results in and for example, the ratio of the downlink (DL) throughput gain of LTE-LAA to that of Wi-Fi would be different if the simulation scenario changes from outdoor to indoor. The throughput also differs when an operator chooses to place Picocells uniformly or in a hotspot region . Both LTE-LAA and Wi-Fi have their own benefits and cannot be replaced by each other at the moment BIB001 . The performance of either LTE-LAA or Wi-Fi should be maintained and not be affected by the other while deployed in the 5 GHz spectrum together. Focusing on these important issues, this paper surveys the coexistence of LTE-LAA and Wi-Fi on 5 GHz with corresponding deployment scenarios, and introduces a scenario-oriented decision-making method for coexistence. The rest of the paper is organized in the following manner. In Section II, we provide a comparative study of existing LTE surveys and this paper. In Section III, relevant features of Wi-Fi and LTE-LAA are overviewed. In Section IV, we first overview research related to coexistence mechanisms. Then we review LTE-LAA and Wi-Fi coexistence testing and results to present a picture of the research stage in the community. 1553-877X © 2016 IEEE. We also investigate the current coexistence mechanisms in markets where LBT is required or not, so as to evaluate their influence on wireless service. In addition, we provide lessons learnt from the comparison of different coexistence mechanisms and from Cognitive Radio (CR), as well as propose recommendations and guidelines for ensuring fairness.
In Section V, we analyze eight key influential factors for the classification of SC scenarios, demonstrate several representative scenarios, and dissect an example deployment scenario to highlight the concept of scenario-oriented coexistence for different access applications. We further recommend performance evaluation scenarios and metrics. In Section VI, we discuss future research trends. Finally, we conclude in Section VII. For convenience, please refer to Table I for all acronyms used in the paper.
Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> II. COMPARATIVE STUDY OF EXISTING SURVEYS ON LTE AND THIS PAPER <s> This paper provides a holistic overview of multi-cell scheduling strategies in emerging wireless systems. Towards this objective, the evolution of interference management techniques is thoroughly investigated from simple inter-cell interference coordination (ICIC) techniques towards more advanced coordinated multipoint transmissions (CoMP), while comparing and contrasting their common features and differences. Finally CoMP is explored in detail as an advanced and challenging mechanism to fully cooperate between adjacent cells in order to have an efficient resource allocation and inter-cell interference mitigation in multi-cell environments. <s> BIB001 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> II. COMPARATIVE STUDY OF EXISTING SURVEYS ON LTE AND THIS PAPER <s> Future generation cellular networks are expected to provide ubiquitous broadband access to a continuously growing number of mobile users. In this context, LTE systems represent an important milestone towards the so called 4G cellular networks. A key feature of LTE is the adoption of advanced Radio Resource Management procedures in order to increase the system performance up to the Shannon limit. Packet scheduling mechanisms, in particular, play a fundamental role, because they are responsible for choosing, with fine time and frequency resolutions, how to distribute radio resources among different stations, taking into account channel condition and QoS requirements. This goal should be accomplished by providing, at the same time, an optimal trade-off between spectral efficiency and fairness. In this context, this paper provides an overview on the key issues that arise in the design of a resource allocation algorithm for LTE networks. 
It is intended for a wide range of readers as it covers the topic from basics to advanced aspects. The downlink channel under frequency division duplex configuration is considered as object of our study, but most of the considerations are valid for other configurations as well. Moreover, a survey on the most recent techniques is reported, including a classification of the different approaches presented in literature. Performance comparisons of the most well-known schemes, with particular focus on QoS provisioning capabilities, are also provided for complementing the described concepts. Thus, this survey would be useful for readers interested in learning the basic concepts before going into the details of a particular scheduling strategy, as well as for researchers aiming at deepening more specific aspects. <s> BIB002 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> II. COMPARATIVE STUDY OF EXISTING SURVEYS ON LTE AND THIS PAPER <s> As heterogeneous networks (HetNets) emerge as one of the most promising developments toward realizing the target specifications of Long Term Evolution (LTE) and LTE-Advanced (LTE-A) networks, radio resource management (RRM) research for such networks has, in recent times, been intensively pursued. Clearly, recent research mainly concentrates on the aspect of interference mitigation. Other RRM aspects, such as radio resource utilization, fairness, complexity, and QoS, have not been given much attention. In this paper, we aim to provide an overview of the key challenges arising from HetNets and highlight their importance. Subsequently, we present a comprehensive survey of the RRM schemes that have been studied in recent years for LTE/LTE-A HetNets, with a particular focus on those for femtocells and relay nodes. Furthermore, we classify these RRM schemes according to their underlying approaches. In addition, these RRM schemes are qualitatively analyzed and compared to each other. 
We also identify a number of potential research directions for future RRM development. Finally, we discuss the lack of current RRM research and the importance of multi-objective RRM studies. <s> BIB003 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> II. COMPARATIVE STUDY OF EXISTING SURVEYS ON LTE AND THIS PAPER <s> In order to satisfy the requirements of future IMT-Advanced mobile systems, the concept of spectrum aggregation is introduced by 3GPP in its new LTE-Advanced (LTE Rel. 10) standards. While spectrum aggregation allows aggregation of carrier components (CCs) dispersed within and across different bands (intra/inter-band) as well as combination of CCs having different bandwidths, spectrum aggregation is expected to provide a powerful boost to the user throughput in LTE-Advanced (LTE-A). However, introduction of spectrum aggregation or carrier aggregation (CA) as referred to in LTE Rel. 10, has required some changes from the baseline LTE Rel. 8 although each CC in LTE-A remains backward compatible with LTE Rel. 8. This article provides a review of spectrum aggregation techniques, followed by requirements on radio resource management (RRM) functionality in support of CA. On-going research on the different RRM aspects and algorithms to support CA in LTE-Advanced are surveyed. Technical challenges for future research on aggregation in LTE-Advanced systems are also outlined. <s> BIB004 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> II. COMPARATIVE STUDY OF EXISTING SURVEYS ON LTE AND THIS PAPER <s> The choice of OFDM-based multi-carrier access techniques for LTE marked a fundamental and farsighted parting from preceding 3GPP networks. With OFDMA in the downlink and SC-FDMA in the uplink, LTE possesses a robust and adaptive multiple access scheme that facilitates many physical layer enhancements. 
Despite this flexibility, scheduling in LTE is a challenging functionality to design, especially in the uplink. Resource allocation in LTE is made complex, especially when considering its target packet-based services and mobility profiles, both current and emerging, in addition to the use of several physical layer enhancements. In this paper, we offer a tutorial on scheduling in LTE and its successor LTE-Advanced. We also survey representative schemes in the literature that have addressed the scheduling problem, and offer an evaluation methodology to be used as a basis for comparison between scheduling proposals in the literature. <s> BIB005 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> II. COMPARATIVE STUDY OF EXISTING SURVEYS ON LTE AND THIS PAPER <s> This paper provides an overview of multi-cell coordinated scheduling and multiple-input multiple-output (MIMO) techniques in LTE. We first discuss single-user MIMO (SU-MIMO) and multi-user MIMO (MU-MIMO) for LTE and identify potential techniques for further improving the performance of LTE, such as enhanced channel state information feedback and adaptive switching between SU-MIMO and MU-MIMO. Then, we address inter-cell interference mitigation techniques for downlink and uplink, respectively. Finally, potential research challenges and topics in the area are discussed. <s> BIB006 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> II. COMPARATIVE STUDY OF EXISTING SURVEYS ON LTE AND THIS PAPER <s> High demands for broadband mobile wireless communications and the emergence of new wireless multimedia applications constitute the motivation to the development of broadband wireless access technologies in recent years. 
The Long Term Evolution/System Architecture Evolution (LTE/SAE) system has been specified by the Third Generation Partnership Project (3GPP) on the way towards fourth-generation (4G) mobile to ensure that 3GPP keeps its dominance in cellular communication technologies. Through the design and optimization of new radio access techniques and a further evolution of the LTE systems, the 3GPP is developing the future LTE-Advanced (LTE-A) wireless networks as the 4G standard of the 3GPP. Since the 3GPP LTE and LTE-A architecture are designed to support flat Internet Protocol (IP) connectivity and full interworking with heterogeneous wireless access networks, the new unique features bring some new challenges in the design of the security mechanisms. This paper makes a number of contributions to the security aspects of the LTE and LTE-A networks. First, we present an overview of the security functionality of the LTE and LTE-A networks. Second, the security vulnerabilities existing in the architecture and the design of the LTE and LTE-A networks are explored. Third, the existing solutions to these problems are classically reviewed. Finally, we show the potential research issues for the future research works. <s> BIB007 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> II. COMPARATIVE STUDY OF EXISTING SURVEYS ON LTE AND THIS PAPER <s> Support of femtocells is an integral part of the Long Term Evolution - Advanced (LTE-A) system and a key enabler for its wide adoption on a broad scale. Femtocells are short-range, low-power and low-cost cellular stations which are installed by the consumers in an unplanned manner. Even though current literature includes various studies towards understanding the main challenges of interference management in the presence of femtocells, little light has been shed on the open issues of mobility management (MM) in the two-tier macrocell-femtocell network.
In this paper, we provide a comprehensive discussion on the key aspects and research challenges of MM support in the presence of femtocells, with the emphasis given on the phases of a) cell identification, b) access control, c) cell search, d) cell selection/reselection, e) handover (HO) decision, and f) HO execution. A detailed overview of the respective MM procedures in the LTE-A system is also provided to better comprehend the solutions and open issues posed in real-life systems. Based on the discussion for the HO decision phase, we subsequently survey and classify existing HO decision algorithms for the two-tier macrocell-femtocell network, depending on the primary HO decision criterion used. For each class, we overview up to three representative algorithms and provide detailed flowcharts to describe their fundamental operation. A comparative summary of the main decision parameters and key features of selected HO decision algorithms concludes this work, providing insights for future algorithmic design and standardization activities. <s> BIB008 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> II. COMPARATIVE STUDY OF EXISTING SURVEYS ON LTE AND THIS PAPER <s> The 3GPP has raised the need to revisit the design of next generations of cellular networks in order to make them capable and efficient to provide M2M services. One of the key challenges that has been identified is the need to enhance the operation of the random access channel of LTE and LTE-A. The current mechanism to request access to the system is known to suffer from congestion and overloading in the presence of a huge number of devices. For this reason, different research groups around the globe are working towards the design of more efficient ways of managing the access to these networks in such circumstances. 
Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey

II. COMPARATIVE STUDY OF EXISTING SURVEYS ON LTE AND THIS PAPER
In order to provide a broader perspective, as well as to give readers directions about the key contributions of this survey, we present a comparative study of existing surveys on LTE and this paper. Here we investigate several representative surveys reviewing LTE-related technologies from different aspects. An earlier survey reviews the evolution of LTE physical (PHY) layer control channels. Moreover, in BIB003 - BIB004 , authors focus on radio resource management (RRM) for LTE and LTE Advanced (LTE-A) from different angles. More specifically, authors in BIB003 examine RRM for Heterogeneous Networks (HetNets), with a particular focus on femtocells and relay nodes, while authors in BIB004 study RRM for spectrum aggregation. Resource allocation and link adaptation are reviewed in BIB010 . Furthermore, in BIB013 - BIB001 , authors review Uplink (UL) or Downlink (DL) scheduling from different angles. In particular, authors in BIB013 classify LTE UL scheduling from the perspective of Machine-to-Machine (M2M) communications. In BIB011 , authors examine cooperative UL transmissions beyond the LTE-A system. In BIB005 , authors summarize UL scheduling in LTE and LTE-A. Authors in BIB002 review DL packet scheduling in LTE cellular networks. Multi-cell coordinated scheduling, particularly inter-cell interference mitigation techniques for DL and UL, is reviewed in BIB006 . As a supplement to BIB006 , multi-cell scheduling strategies in LTE and LTE-A are also reviewed in BIB001 . In addition, some surveys discuss techniques enabling communications in LTE networks. In [20] , authors review M2M communications in the context of LTE and LTE-A. Authors in BIB012 review Device-to-Device (D2D) communications in LTE networks. Security aspects of LTE and LTE-A networks are reviewed in BIB007 . In BIB008 , authors also review the mobility management support in LTE-A networks.
Authors in BIB009 discuss alternatives to improve the operation of the random access channel of LTE and LTE-A. Unlike these surveys, which target only a single Radio Access Network (RAN), i.e., LTE, this paper focuses on the coexistence of LTE-LAA and Wi-Fi in 5 GHz, including coexistence-related features, coexistence considerations, deployment scenarios for the coexistence, and scenario-oriented decision making. Table II gives a brief summary of the related survey papers on LTE and this article. For a better understanding of coexistence mechanisms between LTE-LAA and Wi-Fi, brief summaries of several coexistence-related features of the two technologies are reviewed in this section.
1) 5 GHz Unlicensed Spectrum Under Consideration:
For the sake of clearer channel conditions, wider spectrum, and easier implementation, the unlicensed frequency band of common interest in 3GPP is the 5 GHz UNII band, currently used mainly by IEEE 802.11-based Wireless Local Area Networks (WLANs), i.e., Wi-Fi BIB004 . With regard to the availability of 5 GHz spectrum, different countries impose their own regional requirements on the 5 GHz UNII band in the form of regulations or rules, as shown in Fig. 1 .

2) LTE-LAA and LTE-U: It should be taken into account that transmission relying only on unlicensed spectrum is unstable, since the unlicensed nature of the band makes it hard to provide guaranteed Quality of Service (QoS) BIB004 . Therefore, it seems unreasonable to ignore the licensed spectrum when extending LTE spectrum access. To allow users to access both licensed and unlicensed spectrum, and to study the use of the unlicensed band under a unified LTE network infrastructure, LTE-LAA was initiated as part of 3GPP LTE Release 13 , BIB005 . By design, LTE-LAA in unlicensed spectrum is an extension of the LTE carrier aggregation protocol - . LTE-LAA on the unlicensed band is always combined with licensed-band LTE and is replacing the earlier terminology of LTE-U BIB004 , a natural extension of LTE carrier aggregation to the unlicensed band as part of the secondary carriers. Besides using unlicensed spectrum, at present targeting the 5 GHz UNII band, LTE-LAA aims to include every kind of technology that would augment licensed-spectrum operation . As it requires fewer modifications to licensed LTE than LTE-LAA does, LTE-U will be the first version of LTE unlicensed to be available in commercial deployments. However, because it does not implement LBT mechanisms, LTE-U can only be used in markets where regulation does not require LBT, such as China, South Korea, India, and the USA. LTE-LAA, on the other hand, is the version of LTE in the unlicensed band that 3GPP standardized in Release 13.
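The market distinction above can be captured in a few lines. The helper below is a hypothetical illustration (not from the survey or any 3GPP specification); the market list is the one the text names as not mandating LBT.

```python
# Illustrative sketch: choosing between LTE-U and LTE-LAA based on whether a
# market's regulator mandates listen-before-talk (LBT) on 5 GHz, as described
# in the text. The function name and market set are assumptions for this sketch.

# Markets the survey names where regulation does not require LBT.
NO_LBT_MARKETS = {"China", "South Korea", "India", "USA"}

def unlicensed_lte_options(market: str) -> list[str]:
    """Return which LTE-unlicensed variants are deployable in a market."""
    options = ["LTE-LAA"]  # LAA implements LBT, so it targets all markets
    if market in NO_LBT_MARKETS:
        options.append("LTE-U")  # lacks LBT, usable only where LBT is optional
    return options

print(unlicensed_lte_options("USA"))      # ['LTE-LAA', 'LTE-U']
print(unlicensed_lte_options("Germany"))  # ['LTE-LAA']
```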
It supports LBT in addition to carrier aggregation. LTE-LAA is set to become a global standard as it strives to meet regulatory requirements worldwide. Nevertheless, because the standardization work was not completed until March 2016, commercialization will take longer than for LTE-U. For details on the LBT mechanisms, see Section III-D. For more details about LTE carrier aggregation, refer to - BIB006 , [20] , and BIB001 .

3) Integration of LTE Licensed and LTE Unlicensed: As stated above, if there is additional capacity demand, carrier aggregation may be employed to manage the different component carriers, with one carrier serving as the Primary Cell (PCell) and the others serving as Secondary Cells (SCells) BIB002 , BIB003 . The unlicensed spectrum may be employed by cellular systems in different ways, distinguished by the supplementary and control channel configurations shown in Fig. 2 . In some systems, the aggregation is based on what is supported in 3GPP Release 12 [38] . In this case, the second carrier is either a Time-Division-Duplexed (TDD) carrier or Supplemental DL (SDL) only. In the SDL mode, the unlicensed band carries data traffic that would otherwise remain in the licensed spectrum, while the UL and the control channel stay in the licensed spectrum. In the TDD carrier aggregation mode, the unlicensed band carries data traffic in both UL and DL directions while the control channel remains in the licensed spectrum. In other systems, the unlicensed spectrum may be employed in a standalone configuration, with all carriers operating exclusively in the unlicensed spectrum. A representative LTE-based technology for unlicensed spectrum without a licensed anchor channel is MuLTEfire, promoted by the MuLTEfire alliance formed by Nokia and Qualcomm . It may be attractive to cable operators, wireless Internet service providers, or hotspot network operators who lack licensed spectrum. This mode has not yet been discussed in 3GPP.
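The three configurations described above differ only in which links ride on the unlicensed band and whether a licensed anchor carries control signaling. A minimal sketch (the dataclass and names are illustrative, not a 3GPP data model) makes the comparison explicit:

```python
# Sketch of the three unlicensed-spectrum configurations described in the text:
# Supplemental DL (SDL), TDD carrier aggregation, and standalone (MuLTEfire-style).
from dataclasses import dataclass

@dataclass
class UnlicensedConfig:
    mode: str
    dl_on_unlicensed: bool      # unlicensed band carries DL data
    ul_on_unlicensed: bool      # unlicensed band carries UL data
    control_on_licensed: bool   # control channel stays on a licensed anchor

CONFIGS = {
    # SDL: unlicensed carries DL data only; UL and control stay licensed.
    "SDL": UnlicensedConfig("SDL", True, False, True),
    # TDD CA: unlicensed carries data in both directions; control stays licensed.
    "TDD-CA": UnlicensedConfig("TDD-CA", True, True, True),
    # Standalone (e.g. MuLTEfire): all carriers on unlicensed, no licensed anchor.
    "standalone": UnlicensedConfig("standalone", True, True, False),
}

def needs_licensed_anchor(mode: str) -> bool:
    return CONFIGS[mode].control_on_licensed

print([m for m in CONFIGS if needs_licensed_anchor(m)])  # ['SDL', 'TDD-CA']
```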
1) LTE MAC Protocol:
The key enabling technology of LTE systems is Orthogonal Frequency Division Multiple Access (OFDMA). For better QoS control, transmission spectral efficiency, and inter-cell coordination, transmission in LTE follows a continuous stream of a deterministic frame structure, the Radio Frame (RF). An LTE RF consists of ten 1 ms subframes, each of which is further divided into two 0.5 ms slots , BIB002 , BIB011 , BIB007 . For further details about the LTE frame structure, refer to BIB008 , BIB009 , BIB003 , [20] , and . The LTE system adopts a centralized MAC protocol, which includes a dynamic resource scheduler that allocates physical resources on the PHY DL Shared Channel (PDSCH) for data traffic. The scheduler takes into account the traffic volume, the QoS requirements, and the radio channel conditions when sharing the physical resources among mobile devices. For DL data transmissions, the evolved NodeB (eNB) transmits the PDSCH resource assignments and their Modulation and Coding Scheme (MCS) on the PHY DL Control Channel (PDCCH), and the data packet on the PDSCH accordingly. The mobile device monitors its PDCCH in the control region to discover its grant. Once its PDCCH is detected, the mobile device decodes the PDSCH on the allocated resources using the MCS provided. For more details about the LTE MAC protocol and radio resource management, refer to BIB002 , BIB004 , and BIB011 .

2) Wi-Fi MAC Protocol: A Wi-Fi node, on the contrary, needs no centralized controller: it first senses the channel whenever it has a pending transmission. The MAC layer of Wi-Fi is based on the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) mechanism , so Wi-Fi systems do not require the centralized controller that LTE systems need BIB005 . The basic idea of CSMA/CA is to sense the channel to determine whether the wireless medium is busy. Only if the channel is sensed to be idle is a Wi-Fi station (STA) permitted to transmit.
The CSMA/CA mechanism used in the IEEE 802.11 MAC is also known as the Distributed Coordination Function (DCF), which enables multiple Wi-Fi STAs to access the channel according to the order in which they start sensing it BIB006 , BIB001 . DCF is very effective when the medium is not heavily loaded, since it allows STAs to transmit with minimum delay. However, there is always a chance of collision, i.e., several STAs transmitting at the same time because they all sense the medium free and decide to transmit at once. To overcome this problem, Wi-Fi uses a collision avoidance mechanism: if the medium is free for a specified time, the distributed inter-frame space, the STA is allowed to transmit; the receiving STA then checks the cyclic redundancy check of the received packet and sends an Acknowledgement Packet (ACK). Receipt of the ACK means that no collision occurred. Besides this mechanism, the IEEE 802.11 Wi-Fi standard also defines a virtual carrier sense mechanism. When an STA wishes to transmit a packet, it first transmits a short control packet called Request to Send (RTS). As a response to the RTS, the destination STA sends back a Clear to Send (CTS). All STAs receiving the RTS or CTS set their virtual carrier sense indicator and use this information together with the physical carrier sense when sensing the medium. For more details about DCF and the related collision avoidance mechanisms, refer to . The fundamental difference between the LTE and Wi-Fi MAC layers has caused issues in the coexistence of the two systems BIB010 . We will focus on coexistence challenges and enablers, as well as the choice between LTE-LAA and Wi-Fi, in Sections III-D and III-E, respectively.
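The contention behavior of DCF described above can be illustrated with a toy simulation. This is a sketch of the random-backoff idea only, not the full IEEE 802.11 state machine (it omits DIFS timing, RTS/CTS, and slot-level freezing); the contention-window doubling on collision is the standard binary exponential backoff of 802.11.

```python
# Toy DCF contention sketch: each STA draws a random backoff from its
# contention window (CW); the unique minimum wins the round, equal minima
# collide, and colliding STAs double their CW (binary exponential backoff).
import random

CW_MIN, CW_MAX = 15, 1023

def contend(cws, rng):
    """One contention round. Returns (winner index or None on collision, new CWs)."""
    backoffs = [rng.randint(0, cw) for cw in cws]
    lowest = min(backoffs)
    contenders = [i for i, b in enumerate(backoffs) if b == lowest]
    if len(contenders) == 1:
        winner = contenders[0]
        cws = list(cws)
        cws[winner] = CW_MIN  # a successful STA resets its window
        return winner, cws
    # Collision: every colliding STA doubles its contention window.
    cws = [min(2 * cw + 1, CW_MAX) if i in contenders else cw
           for i, cw in enumerate(cws)]
    return None, cws

rng = random.Random(1)
cws = [CW_MIN] * 4  # four contending STAs
wins = [0] * 4
for _ in range(1000):
    winner, cws = contend(cws, rng)
    if winner is not None:
        wins[winner] += 1
print(wins)  # roughly even channel shares across the four STAs
```

Running this shows the distributed fairness the text attributes to DCF: over many rounds, symmetric STAs obtain comparable shares of the channel without any centralized scheduler.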
D. Coexistence Challenges and Enablers
We first discuss single-user MIMO (SU-MIMO) and multi-user MIMO (MU-MIMO) for LTE and identify potential techniques for further improving the performance of LTE, such as enhanced channel state information feedback and adaptive switching between SU-MIMO and MU-MIMO. Then, we address inter-cell interference mitigation techniques for downlink and uplink, respectively. Finally, potential research challenges and topics in the area are discussed. <s> BIB006 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> D. Coexistence Challenges and Enablers <s> High demands for broadband mobile wireless communications and the emergence of new wireless multimedia applications constitute the motivation to the development of broadband wireless access technologies in recent years. The Long Term Evolution/System Architecture Evolution (LTE/SAE) system has been specified by the Third Generation Partnership Project (3GPP) on the way towards fourth-generation (4G) mobile to ensure 3GPP keeping the dominance of the cellular communication technologies. Through the design and optimization of new radio access techniques and a further evolution of the LTE systems, the 3GPP is developing the future LTE-Advanced (LTE-A) wireless networks as the 4G standard of the 3GPP. Since the 3GPP LTE and LTE-A architecture are designed to support flat Internet Protocol (IP) connectivity and full interworking with heterogeneous wireless access networks, the new unique features bring some new challenges in the design of the security mechanisms. This paper makes a number of contributions to the security aspects of the LTE and LTE-A networks. First, we present an overview of the security functionality of the LTE and LTE-A networks. Second, the security vulnerabilities existing in the architecture and the design of the LTE and LTE-A networks are explored. Third, the existing solutions to these problems are classically reviewed. 
Finally, we show the potential research issues for the future research works. <s> BIB007 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> D. Coexistence Challenges and Enablers <s> The 3GPP has raised the need to revisit the design of next generations of cellular networks in order to make them capable and efficient to provide M2M services. One of the key challenges that has been identified is the need to enhance the operation of the random access channel of LTE and LTE-A. The current mechanism to request access to the system is known to suffer from congestion and overloading in the presence of a huge number of devices. For this reason, different research groups around the globe are working towards the design of more efficient ways of managing the access to these networks in such circumstances. This paper aims to provide a survey of the alternatives that have been proposed over the last years to improve the operation of the random access channel of LTE and LTE-A. A comprehensive discussion of the different alternatives is provided, identifying strengths and weaknesses of each one of them, while drawing future trends to steer the efforts over the same shooting line. In addition, while existing literature has been focused on the performance in terms of delay, the energy efficiency of the access mechanism of LTE will play a key role in the deployment of M2M networks. For this reason, a comprehensive performance evaluation of the energy efficiency of the random access mechanism of LTE is provided in this paper. The aim of this computer-based simulation study is to set a baseline performance upon which new and more energy-efficient mechanisms can be designed in the near future. <s> BIB008 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> D. Coexistence Challenges and Enablers <s> It is forecasted that mobile data traffic will increase 1000-fold from 2010 to 2020. 
As a result, the network must evolve to meet the large and rapidly growing data demand. An effective approach for increasing network capacity is to use additional spectrum. In Rel-13, LTE deployment in unlicensed spectrum is being considered. This feature is called Licensed-Assisted Access (LAA) using LTE. Unlicensed band is attractive due to the large amount of available spectrum. In this paper, we provide an overview of LAA including standardization progress and potential changes. A fairness mechanism for LAA coexistence is also presented. Downlink system performance is analyzed for multi-operator LAA deployment. Three representative scenarios are considered — indoor hotspot, indoor office, and outdoor small cell. It is shown that LAA can offer significant additional capacity to the network. Analysis shows that a co-existence mechanism such as listen-before-talk is needed in co-channel multi-operator deployment to ensure that all users can receive service. With listen-before-talk, co-existence is not an issue even in dense deployment. <s> BIB009 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> D. Coexistence Challenges and Enablers <s> Deployment of LTE in unlicensed bands is being considered in Rel-13 of LTE. This feature is called Licensed-Assisted Access (LAA) using LTE. Unlicensed band is attractive due to the large amount of available spectrum. However, in shared spectrum the coexistence between LAA and WiFi systems becomes a primary challenge. This paper presents an analytical framework to investigate the downlink coexistence performance between two systems with a simple listen-before-talk (LBT) mechanism enforced on LAA. Using this framework, theoretical models based on Markov chains are established for both systems and downlink throughput can be calculated. Numerical results from the models show that the simple listen-before-talk scheme is very effective in LAA and LAA coexistence scenario (i.e. 
two LAA systems sharing the same spectrum). In LAA and WiFi coexistence scenario, it can improve WiFi performance substantially. <s> BIB010 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> D. Coexistence Challenges and Enablers <s> In this paper, a medium access protocol suitably adapted for the operation of Long term evolution (LTE) in unlicensed band (LTE-U) is proposed. The MAC design considers the impact of LTE-U on the devices operating in unlicensed band, such as WiFi and legacy LTE. The LTE-U design differences from the legacy LTE system, an inherently a synchronized, tightly controlled system are explored. The performance of LTE-U is studied via a detailed system level simulator implementation and 3GPP simulation test scenarios. We show that our proposed MAC design enables high throughput in unlicensed band and achieves co-existence with the incumbent systems. Moreover, the use of channel reservation mechanisms is shown to improve LTE-U cell edge performance. <s> BIB011 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> D. Coexistence Challenges and Enablers <s> Fifth-generation (5G) network developers need to identify the necessary requirements toward additional capacity and spectrally efficient wireless technologies. Therefore, the significant amount of underutilized spectrum in the Wi-Fi band is motivating operators to combine long-term evolution (LTE) with Wi-Fi technologies. This new LTE in unlicensed band (LTE-U) has the physical layer topology to access Wi-Fi spectrum, specifically the 5-GHz band. Nevertheless, the evolution of LTE-U affects the Wi-Fi operations due to the absence of any regularity for LTE-U transmissions in unlicensed band. In this article, we address the challenges for Wi-Fi to maintain transmissions under the umbrella of LTE-U as Wi-Fi is pushed offline because of the listen-before-talk (LBT) feature. 
Therefore, we derive a new adaptive LBT mechanism and a virtualized core network for the best practices in both Wi-Fi and LTE-U technologies. The proposed solutions include noncoordinated and coordinated network managements to enable coexistence between both technologies using tradeoff performance for fair spectrum sharing. We concentrate on the initial coexistent technique and discuss how it maps to higher-layer improvements. This article shows new approaches to achieve the Third-Generation Partnership Project (3GPP) Release 13. <s> BIB012
The main coexistence challenge is that when LTE-LAA operates in the same band as Wi-Fi, the performance of the Wi-Fi system is significantly degraded, while the performance of LTE is nearly unchanged, since Wi-Fi falls into silence mode due to its CSMA/CA mechanism. This is because the two technologies use different channel usage and access procedures. LTE is designed under the assumption that one operator has exclusive control of a given spectrum: it transmits almost continuously with minimum time gaps even in the absence of data traffic, and it also transmits a variety of control and reference signals periodically. Wi-Fi, on the contrary, is designed to coexist with other technologies through random backoff and channel sensing. As a result, Wi-Fi users have little chance to sense a clear channel and transmit. For more details about LTE channel usage and access procedures, refer to BIB005 , BIB006 - BIB002 , and BIB007 - BIB008 . Studying the MAC implementation of the Wi-Fi system helps in understanding how the LTE and Wi-Fi systems can coexist. In fact, the LBT scheme introduced by BIB001 and BIB009 is a simplified version of DCF. In markets where LBT is mandatory, the coexistence of LTE-LAA and Wi-Fi in unlicensed bands can be enabled by enforcing LBT on LTE-LAA BIB010 . Two design options of LTE-LAA LBT, asynchronous LBT and synchronous LBT, have been proposed in BIB011 . The main difference between them lies in that asynchronous LBT is based on the current DCF protocol; in this case, the LBT scheme might use IEEE 802.11 RTS/CTS signals to ensure that the channel is idle at just that moment.
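The LBT procedure enforced on LTE-LAA can be illustrated with a minimal clear-channel-assessment sketch. The energy-detection threshold and the slot-based energy trace are assumptions made for illustration, not values from the 3GPP specification:

```python
import random

ENERGY_THRESHOLD = -72.0  # dBm, illustrative energy-detection threshold

def lbt_transmit(channel_energy_dbm, cw=15, rng=None):
    """Toy listen-before-talk: count down a random backoff, but only
    during slots whose measured energy is below the CCA threshold.
    Returns the slot index at which transmission starts, or None if
    the trace ends before the backoff completes."""
    rng = rng or random.Random(0)
    backoff = rng.randint(0, cw)
    for slot, energy in enumerate(channel_energy_dbm):
        if energy < ENERGY_THRESHOLD:   # clear channel assessment passes
            if backoff == 0:
                return slot             # medium idle and backoff done: transmit
            backoff -= 1
        # busy slot: freeze the backoff counter and keep sensing
    return None

print(lbt_transmit([-90.0] * 100))   # mostly idle channel: an early slot
print(lbt_transmit([-50.0] * 100))   # persistently busy channel: None
```

Asynchronous LBT would run such a procedure whenever data is pending, whereas synchronous LBT would additionally align the resulting transmission start with the licensed-carrier subframe boundaries.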
However, synchronous LBT can be seen as a special version of asynchronous LBT in which data subframes are synchronized with the licensed LTE carrier. This LBT approach may need fewer changes in the LTE specification, and can reuse the Inter-Cell Interference Coordination (ICIC) mechanisms already defined in LTE releases to manage the interference among LTE base stations . A simple way to explain ICIC is based on the scheme of Fig. 3 . The users are divided into two categories: Cell Center Users (CCUs), shown in Fig. 3(a) , and Cell Edge Users (CEUs), shown in Fig. 3(b) . CCUs are the users distributed in the gray region of Fig. 3 , and CEUs are the users distributed in the red, green, and blue areas. A CCU can use all frequency points to communicate with the base station, while a CEU must use the corresponding specified frequency points to ensure orthogonality between different cells. CEUs can be assigned a higher transmission power, with a frequency reuse factor greater than 1; since the edge frequency points of neighboring cells do not overlap, adjacent-cell interference is small. The frequency reuse factor of CCUs is 1, and because their path loss is small and their transmission power is low, the interference they cause to adjacent cells is not high either. More details about ICIC can be found in BIB002 , BIB003 , and BIB004 . Furthermore, LTE Advanced in unlicensed spectrum can also use a coexistence mechanism based on Carrier Sense Adaptive Transmission (CSAT), which is in spirit very similar to DCF. Moreover, enhanced ICIC (eICIC) in 3GPP Release 10 [38] , which is designed to mitigate intra-frequency interference using various measures in the power, frequency, and time domains, introduces the concept of Almost Blank Subframes (ABSs) to manage the coexistence of the two technologies BIB012 . ABSs are LTE subframes with reduced DL transmission activity or power. The eICIC in the time domain introduces a Resource Specific Cell Selection (RS-CS) method.
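The CCU/CEU frequency partition of Fig. 3 amounts to a reuse-3 fractional frequency reuse plan, which can be sketched as follows. The sub-band labels and the cell-to-band mapping are illustrative, not taken from any specification:

```python
# Sub-band layout for a reuse-3 fractional frequency reuse plan:
# the shared band is split into three edge sub-bands; each cell
# reserves one of them for its cell-edge users (CEUs), while
# cell-center users (CCUs) may use the whole band at low power.
EDGE_BANDS = {0: "A", 1: "B", 2: "C"}   # cell_id % 3 -> edge sub-band

def allowed_subbands(cell_id, is_cell_edge):
    if is_cell_edge:
        # CEUs are confined to the cell's own edge sub-band, which
        # neighbouring cells do not assign to their own CEUs.
        return {EDGE_BANDS[cell_id % 3]}
    # CCUs see low interference, so reuse factor 1 is acceptable.
    return {"A", "B", "C"}

print(allowed_subbands(0, True))    # edge user in cell 0
print(allowed_subbands(1, False))   # center user in cell 1
```

Note that the edge sub-bands of adjacent cells are disjoint, which is what keeps inter-cell interference low for CEUs even at higher transmit power.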
The concept is to have certain sub-frames during which the Macro-eNB is not allowed to transmit data, allowing the Pico cell edge users suffering high interference from the Macro-eNB to be served under better conditions. Transmissions from Macro-eNBs are periodically muted during entire sub-frames. The users associated with the Pico cell can send their data during such an ABS and avoid interference from the Macro cell. In fact, the muting is not complete, since certain control signals still need to be transmitted even in the muted sub-frames to avoid radio link failure.
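The ABS-based protection described above can be sketched as a simple sub-frame muting pattern. The 1-in-8 period below is an illustrative choice, not a pattern mandated by 3GPP:

```python
# Illustrative eICIC almost-blank-subframe (ABS) pattern: the macro
# eNB mutes its data transmissions in one sub-frame out of every
# ABS_PERIOD, and the pico cell schedules its macro-interfered edge
# users only in those protected sub-frames.
ABS_PERIOD = 8

def is_abs(subframe):
    return subframe % ABS_PERIOD == 0

def schedule_pico_victim_users(subframes):
    """Return the sub-frames in which the pico cell may serve its
    edge users free of macro data interference."""
    return [sf for sf in subframes if is_abs(sf)]

print(schedule_pico_victim_users(range(40)))  # -> [0, 8, 16, 24, 32]
```

A denser ABS pattern protects more pico-edge traffic at the cost of macro capacity, which is the basic trade-off eICIC tunes.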
Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Spectral Efficiency: <s> Future generation cellular networks are expected to provide ubiquitous broadband access to a continuously growing number of mobile users. In this context, LTE systems represent an important milestone towards the so called 4G cellular networks. A key feature of LTE is the adoption of advanced Radio Resource Management procedures in order to increase the system performance up to the Shannon limit. Packet scheduling mechanisms, in particular, play a fundamental role, because they are responsible for choosing, with fine time and frequency resolutions, how to distribute radio resources among different stations, taking into account channel condition and QoS requirements. This goal should be accomplished by providing, at the same time, an optimal trade-off between spectral efficiency and fairness. In this context, this paper provides an overview on the key issues that arise in the design of a resource allocation algorithm for LTE networks. It is intended for a wide range of readers as it covers the topic from basics to advanced aspects. The downlink channel under frequency division duplex configuration is considered as object of our study, but most of the considerations are valid for other configurations as well. Moreover, a survey on the most recent techniques is reported, including a classification of the different approaches presented in literature. Performance comparisons of the most well-known schemes, with particular focus on QoS provisioning capabilities, are also provided for complementing the described concepts. Thus, this survey would be useful for readers interested in learning the basic concepts before going into the details of a particular scheduling strategy, as well as for researchers aiming at deepening more specific aspects. 
<s> BIB001 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Spectral Efficiency: <s> This paper provides a holistic overview of multi-cell scheduling strategies in emerging wireless systems. Towards this objective, the evolution of interference management techniques is thoroughly investigated from simple inter-cell interference coordination (ICIC) techniques towards more advanced coordinated multipoint transmissions (CoMP), while comparing and contrasting their common features and differences. Finally CoMP is explored in detail as an advanced and challenging mechanism to fully cooperate between adjacent cells in order to have an efficient resource allocation and inter-cell interference mitigation in multi-cell environments. <s> BIB002 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Spectral Efficiency: <s> The choice of OFDM-based multi-carrier access techniques for LTE marked a fundamental and farsighted parting from preceding 3GPP networks. With OFDMA in the downlink and SC-FDMA in the uplink, LTE possesses a robust and adaptive multiple access scheme that facilitates many physical layer enhancements. Despite this flexibility, scheduling in LTE is a challenging functionality to design, especially in the uplink. Resource allocation in LTE is made complex, especially when considering its target packet-based services and mobility profiles, both current and emerging, in addition to the use of several physical layer enhancements. In this paper, we offer a tutorial on scheduling in LTE and its successor LTE-Advanced. We also survey representative schemes in the literature that have addressed the scheduling problem, and offer an evaluation methodology to be used as a basis for comparison between scheduling proposals in the literature. 
<s> BIB003 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Spectral Efficiency: <s> The 3GPP has raised the need to revisit the design of next generations of cellular networks in order to make them capable and efficient to provide M2M services. One of the key challenges that has been identified is the need to enhance the operation of the random access channel of LTE and LTE-A. The current mechanism to request access to the system is known to suffer from congestion and overloading in the presence of a huge number of devices. For this reason, different research groups around the globe are working towards the design of more efficient ways of managing the access to these networks in such circumstances. This paper aims to provide a survey of the alternatives that have been proposed over the last years to improve the operation of the random access channel of LTE and LTE-A. A comprehensive discussion of the different alternatives is provided, identifying strengths and weaknesses of each one of them, while drawing future trends to steer the efforts over the same shooting line. In addition, while existing literature has been focused on the performance in terms of delay, the energy efficiency of the access mechanism of LTE will play a key role in the deployment of M2M networks. For this reason, a comprehensive performance evaluation of the energy efficiency of the random access mechanism of LTE is provided in this paper. The aim of this computer-based simulation study is to set a baseline performance upon which new and more energy-efficient mechanisms can be designed in the near future. <s> BIB004 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Spectral Efficiency: <s> In order to satisfy the requirements of future IMT-Advanced mobile systems, the concept of spectrum aggregation is introduced by 3GPP in its new LTE-Advanced (LTE Rel. 10) standards. 
While spectrum aggregation allows aggregation of carrier components (CCs) dispersed within and across different bands (intra/inter-band) as well as combination of CCs having different bandwidths, spectrum aggregation is expected to provide a powerful boost to the user throughput in LTE-Advanced (LTE-A). However, introduction of spectrum aggregation or carrier aggregation (CA) as referred to in LTE Rel. 10, has required some changes from the baseline LTE Rel. 8 although each CC in LTE-A remains backward compatible with LTE Rel. 8. This article provides a review of spectrum aggregation techniques, followed by requirements on radio resource management (RRM) functionality in support of CA. On-going research on the different RRM aspects and algorithms to support CA in LTE-Advanced are surveyed. Technical challenges for future research on aggregation in LTE-Advanced systems are also outlined. <s> BIB005 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Spectral Efficiency: <s> High demands for broadband mobile wireless communications and the emergence of new wireless multimedia applications constitute the motivation to the development of broadband wireless access technologies in recent years. The Long Term Evolution/System Architecture Evolution (LTE/SAE) system has been specified by the Third Generation Partnership Project (3GPP) on the way towards fourth-generation (4G) mobile to ensure 3GPP keeping the dominance of the cellular communication technologies. Through the design and optimization of new radio access techniques and a further evolution of the LTE systems, the 3GPP is developing the future LTE-Advanced (LTE-A) wireless networks as the 4G standard of the 3GPP. 
Since the 3GPP LTE and LTE-A architecture are designed to support flat Internet Protocol (IP) connectivity and full interworking with heterogeneous wireless access networks, the new unique features bring some new challenges in the design of the security mechanisms. This paper makes a number of contributions to the security aspects of the LTE and LTE-A networks. First, we present an overview of the security functionality of the LTE and LTE-A networks. Second, the security vulnerabilities existing in the architecture and the design of the LTE and LTE-A networks are explored. Third, the existing solutions to these problems are classically reviewed. Finally, we show the potential research issues for the future research works. <s> BIB006 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Spectral Efficiency: <s> Support of femtocells is an integral part of the Long Term Evolution - Advanced (LTE-A) system and a key enabler for its wide adoption in a broad scale. Femtocells are short-range, low-power and low-cost cellular stations which are installed by the consumers in an unplanned manner. Even though current literature includes various studies towards understanding the main challenges of interference management in the presence of femtocells, little light has been shed on the open issues of mobility management (MM) in the two-tier macrocell-femtocell network. In this paper, we provide a comprehensive discussion on the key aspects and research challenges of MM support in the presence of femtocells, with the emphasis given on the phases of a) cell identification, b) access control, c) cell search, d) cell selection/reselection, e) handover (HO) decision, and f) HO execution. A detailed overview of the respective MM procedures in the LTE-A system is also provided to better comprehend the solutions and open issues posed in real-life systems. 
Based on the discussion for the HO decision phase, we subsequently survey and classify existing HO decision algorithms for the two-tier macrocell-femtocell network, depending on the primary HO decision criterion used. For each class, we overview up to three representative algorithms and provide detailed flowcharts to describe their fundamental operation. A comparative summary of the main decision parameters and key features of selected HO decision algorithms concludes this work, providing insights for future algorithmic design and standardization activities. <s> BIB007 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Spectral Efficiency: <s> As heterogeneous networks (HetNets) emerge as one of the most promising developments toward realizing the target specifications of Long Term Evolution (LTE) and LTE-Advanced (LTE-A) networks, radio resource management (RRM) research for such networks has, in recent times, been intensively pursued. Clearly, recent research mainly concentrates on the aspect of interference mitigation. Other RRM aspects, such as radio resource utilization, fairness, complexity, and QoS, have not been given much attention. In this paper, we aim to provide an overview of the key challenges arising from HetNets and highlight their importance. Subsequently, we present a comprehensive survey of the RRM schemes that have been studied in recent years for LTE/LTE-A HetNets, with a particular focus on those for femtocells and relay nodes. Furthermore, we classify these RRM schemes according to their underlying approaches. In addition, these RRM schemes are qualitatively analyzed and compared to each other. We also identify a number of potential research directions for future RRM development. Finally, we discuss the lack of current RRM research and the importance of multi-objective RRM studies. 
<s> BIB008 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Spectral Efficiency: <s> Among the LTE-A communication techniques, Device-to-Device (D2D) communication which is defined to directly route data traffic between spatially closely located mobile user equipments (UEs), holds great promise in improving energy efficiency, throughput, delay, as well as spectrum efficiency. As a combination of ad-hoc and centralized communication mechanisms, D2D communication enables researchers to merge together the long-term development achievements in previously disjoint domains of ad-hoc networking and centralized networking. To help researchers to have a systematic understanding of the emerging D2D communication, we provide in this paper a comprehensive survey of available D2D related research works ranging from technical papers to experimental prototypes to standard activities, and outline some open research problems which deserve further studies. <s> BIB009 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Spectral Efficiency: <s> In this paper, we focus our attention on the cooperative uplink transmissions of systems beyond the LTE-Advanced initiative. We commence a unified treatment of the principle of single-carrier frequency-division multiple-access (FDMA) and the similarities and dissimilarities, advantages, and weakness of the localized FDMA, the interleaved FDMA, and the orthogonal FDMA systems are compared. Furthermore, the philosophy of both user cooperation and cooperative single-carrier FDMA is reviewed. They are investigated in the context of diverse topologies, transmission modes, resource allocation, and signal processing techniques applied at the relays. Benefits of relaying in LTE-Advanced are also reviewed. 
Our discussions demonstrate that these advanced techniques optimally exploit the resources in the context of cooperative single-carrier FDMA system, which is a promising enabler for various uplink transmission scenarios. <s> BIB010 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Spectral Efficiency: <s> Resource allocation and link adaptation in Long-Term Evolution (LTE) and LTE Advanced are discussed with focus on the location and formatting of the pertinent reference and control signals, as well as the decisions they enable. In particular, after reviewing the units for resource allocation and the time-frequency resource grid, the enabled resource-allocation modes and their purposes are reviewed. A detailed description of the way the resource allocations are encoded under these different modes is also given. Similarly, the various methods of link adaptation, including power control and rate control, both through the use of adaptive modulation and coding and hybrid automatic repeat request, are reviewed. The control signaling encoding for link adaptation is provided in detail, as is the encoding of channel state feedback for the purposes of link adaptation and resource allocation. <s> BIB011 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Spectral Efficiency: <s> The phenomenal growth of mobile data demand has brought about increasing scarcity in available radio spectrum. Meanwhile, mobile customers pay more attention to their own experience, especially in communication reliability and service continuity on the move. To address these issues, LTE-Unlicensed, or LTEU, is considered one of the latest groundbreaking innovations to provide high performance and seamless user experience under a unified radio technology by extending LTE to the readily available unlicensed spectrum. 
In this article, we offer a comprehensive overview of the LTEU technology from both operator and user perspectives, and examine its impact on the incumbent unlicensed systems. Specifically, we first introduce the implementation regulations, principles, and typical deployment scenarios of LTE-U. Potential benefits for both operators and users are then discussed. We further identify three key challenges in bringing LTE-U into reality together with related research directions. In particular, the most critical issue of LTE-U is coexistence with other unlicensed systems, such as widely deployed WiFi. The LTE/WiFi coexistence mechanisms are elaborated in time, frequency, and power aspects, respectively. Simulation results demonstrate that LTE-U can provide better user experience to LTE users while well protecting the incumbent WiFi users’ performance compared to two existing advanced technologies: cellular/WiFi interworking and licensed-only heterogeneous networks (Het-Nets). <s> BIB012 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Spectral Efficiency: <s> We consider the adoption of listen-before-talk (LBT) for Long Term Evolution (LTE) in unlicensed spectrum, which is currently under discussion in the 3rd Generation Partnership Project (3GPP). The first and foremost task to be done to allow LTE to use unlicensed spectrum is to study the coexistence with incumbent system, such as the Wireless Local Area Network (WLAN). In our study, we observe that a proper coexistence mechanism is imperative especially when both LTE Evolved Node Bs (eNBs) and WLANs are located outdoor. The outdoor deployment scenario is considered by the next generation WLAN, the IEEE 802.11ax, with high importance. The implementation of LBT in LTE plays an effective role in this scenario in balancing the performance between LTE and WLAN. 
In the conventional scenario where WLANs are located indoor and LTE eNBs are located outdoor, a plain coexistence without additional mechanism could be acceptable due to high penetration loss of outdoor walls. <s> BIB013 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Spectral Efficiency: <s> The Internet of Things (IoT) aims at connecting a very large number of devices using an Internet-like architecture. Machine-to-machine (M2M) networks are considered the main component of the IoT. Long-term evolution (LTE) and LTE-advanced (LTE-A) are excellent candidates for supporting M2M communications due to their native IP connectivity and scalability for massive number of devices. Therefore, LTE schedulers should be capable of satisfying the needs of M2M devices such as timing constraints and specific quality of service (QoS) requirements. In this paper, we present a survey on uplink scheduling techniques over LTE and LTE-A from an M2M perspective. We focus on the aspects associated with M2M communications; namely, power efficiency, QoS support, multihop connectivity, and scalability for massive number of devices. <s> BIB014
The following factors are responsible for the improved spectral efficiency of LTE-LAA over Wi-Fi: a) Robust transmission schemes: As stated before, LTE is a synchronous system and uses scheduling-based channel access rather than contention-based random access. LTE-LAA adopts a centralized MAC layer to schedule multi-user transmissions based on user feedback about channel quality, achieving multi-user frequency-selective diversity gain , BIB003 , BIB009 . b) Effective interference management: Interference coordination and avoidance mechanisms, i.e., eICIC and Coordinated Multi-Point (CoMP), are adopted in LTE systems to reduce interference and improve spectrum efficiency. CoMP transmission and reception refer to a wide range of techniques that require close coordination among a number of geographically separated eNBs, which dynamically coordinate to provide joint scheduling and transmission as well as joint processing of the received signals. In this way, a user at the edge of a cell can be served by two or more eNBs to improve signal reception and transmission and to increase throughput, particularly under cell-edge conditions BIB014 , BIB010 , BIB004 , [38] . c) Carrier aggregation to manage traffic across licensed and unlicensed channels: LTE carrier aggregation, combining licensed and unlicensed bands, brings several benefits. First, higher throughput can be achieved thanks to the wider bandwidth. Second, aggregating multiple carriers not only increases spectrum but also yields trunking gains from dynamically scheduling traffic across the entire spectrum, which in turn increases cell capacity and network efficiency and improves the experience for all users. Third, carrier aggregation also leads to optimal utilization of the operator's spectrum resources. The majority of operators have fragmented spectrum covering different bands and bandwidths.
Carrier aggregation helps combine these into a more valuable spectrum resource BIB011 , BIB005 , BIB001 . d) Better mobility and coverage support: As stated in Section III-C, LTE-LAA users operate within a unified architecture, since LTE access methods can be used on both licensed and unlicensed spectrum BIB012 . First, a unified architecture means the same core network and the same integrated authentication, management, and security procedures. Second, synchronization on both spectrum types means that interference bursts can be handled better. Last but not least, PCells can always provide ubiquitous coverage for a user; only horizontal handover is needed between the SC and the macro cell BIB002 , BIB006 , BIB007 . e) HARQ versus ARQ: Regarding the difference in retransmission mechanisms between LTE and Wi-Fi, LTE can make full use of time-domain receiver diversity with the help of Hybrid Automatic Repeat Request (HARQ) at the MAC layer, which is more efficient than the single-loop ARQ with ACK used by Wi-Fi, thanks to the receiver-side combining of retransmissions and the small overhead BIB013 . With ARQ, if the received data has an error (as detected by ARQ), it is discarded and a new transmission is requested from the sender. With HARQ, if the received data has an error, the receiver buffers the data and requests a retransmission from the sender. The eNB then performs a retransmission, sending the same copy of the lost packet, and the user tries to decode the packet by combining the retransmission with the original version, sending an ACK message to the eNB upon successful decoding BIB008 , [20] , BIB004 . 2) Link Adaptation: In terms of link adaptation, Wi-Fi uses open-loop link adaptation without asking for Channel Quality Indicator (CQI) feedback, and hence cannot keep up with fast channel/interference fluctuations. On the contrary, LTE can choose resource blocks based on the received CQI .
Another impact of using dynamic link adaptation based on instantaneous CQI feedback is that, if both technologies employ the same power, the Power Spectral Density (PSD) of LTE is higher than that of Wi-Fi. PSD describes how the power of a signal or time series is distributed over frequency, as defined in [55] . This also means that, to attain the same PSD, the power consumption of LTE will be much lower than that of Wi-Fi. Power consumption here refers to the electrical energy over time supplied to operate a device. 3) Performance Stability: As stated in Section III-B, for LTE-LAA, licensed and unlicensed bands are integrated on the same SC, and only the PCell can carry the control signalling, which is granted the highest priority among the nine QoS class identifiers LTE has defined. The control channel messages are thus transmitted reliably between the base stations (BSs) and the users. These features enable LTE-LAA to better facilitate opportunistic unlicensed access. Wi-Fi systems, on the contrary, are not efficient, especially when the network is heavily loaded , BIB008 , [20] .
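The HARQ advantage described in point e) can be sketched with a small toy model (purely illustrative; the SNR units and decoding threshold are arbitrary assumptions, not LTE parameters). Under Chase combining, the receiver accumulates the energy of all received copies of a packet, whereas plain ARQ discards failed copies and each retransmission must succeed on its own:

```python
def transmissions_needed(snr_per_tx, decode_threshold, combining):
    """Count transmissions until decoding succeeds.

    combining=True  -> HARQ with Chase combining: SNRs of all copies add up.
    combining=False -> plain ARQ: each copy must succeed on its own.
    Returns the number of transmissions used, or None if decoding never succeeds.
    """
    accumulated = 0.0
    for n, snr in enumerate(snr_per_tx, start=1):
        accumulated = accumulated + snr if combining else snr
        if accumulated >= decode_threshold:
            return n
    return None  # never decoded

# Example (assumed numbers): each copy arrives at 4 linear SNR units,
# decoding needs 10 units.
copies = [4.0, 4.0, 4.0, 4.0]
harq = transmissions_needed(copies, 10.0, combining=True)   # succeeds after 3 copies
arq = transmissions_needed(copies, 10.0, combining=False)   # never succeeds alone
```

With four copies at 4 units each and a threshold of 10, HARQ decodes after three transmissions while stand-alone ARQ never succeeds, which is exactly the efficiency gap noted above.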
Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) An Overview of Coexistence Mechanisms Related <s> Mobile broadband data usage in Long Term Evolution (LTE) networks is growing exponentially and capacity constraint is becoming an issue. Heterogeneous network, WiFi offload, and acquisition of additional radio spectrum can be used to address this capacity constraint. Licensed spectrum, however, is limited and can be costly to obtain. This paper investigates deploying LTE on a license-exempt band as part of the pico-cell underlay. Coexistence mechanism and other modifications to LTE are discussed. Performance analysis shows that LTE can deliver significant capacity even while sharing the spectrum with WiFi systems. <s> BIB001 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) An Overview of Coexistence Mechanisms Related <s> Spectrum sharing is a powerful alternative to deal with the exponential increase on the wireless communication capacity demand. In this context, the coexistence of two of the most prominent wireless technologies today, Long Term Evolution (LTE) and Wi-Fi, is an important research topic. In the most common Wi-Fi network operation, the Distributed Coordination Function (DCF), communication nodes access the channel only if the interference level is below a certain threshold. Then, Wi-Fi operation is severely affected when in coexistence with LTE. This paper proposes the use of LTE uplink (UL) power control to improve LTE/Wi-Fi coexistence. With the introduction of an additional factor to the conventional LTE UL power control, a controlled decrease of LTE UL transmit powers is carried out according to interference measurements, giving opportunity to Wi-Fi transmissions. 
The proposed LTE UL power control with interference aware power operating point is a flexible tool to deal with the trade-off between LTE and Wi-Fi performances in coexistence, since it is able to set different LTE/Wi-Fi coexistence configurations with the choice of a single parameter. Simulation results show that the proposed approach can provide similar or better performance for both LTE and Wi-Fi networks than a previously proposed interference avoidance mechanism. <s> BIB002 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) An Overview of Coexistence Mechanisms Related <s> The expansion of wireless broadband access network deployments is resulting in increased scarcity of available radio spectrum. It is very likely that in the near future, cellular technologies and wireless local area networks will need to coexist in the same unlicensed bands. However, the two most prominent technologies, LTE and Wi-Fi, were designed to work in different bands and not to coexist in a shared band. In this article, we discuss the issues that arise from the concurrent operation of LTE and Wi-Fi in the same unlicensed bands from the point of view of radio resource management. We show that Wi-Fi is severely impacted by LTE transmissions; hence, the coexistence of LTE and Wi-Fi needs to be carefully investigated. We discuss some possible coexistence mechanisms and future research directions that may lead to successful joint deployment of LTE and Wi-Fi in the same unlicensed band. <s> BIB003 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) An Overview of Coexistence Mechanisms Related <s> Systems and methods for Carrier Sense Adaptive Transmission (CSAT) and related operations in unlicensed spectrum are disclosed to reduce interference between co-existing Radio Access Technologies (RATs). 
The parameters for a given CSAT communication scheme may be adapted dynamically based on received signals from a transceiver for a native RAT to be protected and an identification of how that RAT is utilizing a shared resource such as an unlicensed band. Other operations such as Discontinuous Reception (DRX) may be aligned with a CSAT Time Division Multiplexed (TDM) communication pattern by way of a DRX broadcast / multicast message. Different TDM communication patterns may be staggered in time across different frequencies. Channel selection for a co-existing RAT may also be configured to afford further protection to native RATs by preferring operation on secondary channels as opposed to primary channels. <s> BIB004 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) An Overview of Coexistence Mechanisms Related <s> The use of the unlicensed spectrum by LTE networks (LTE-U or LAA-LTE) is being considered by mobile operators in order to satisfy increasing traffic demands and to make better use of the licensed spectrum. However, coexistence issues arise when LTE-U coverage overlaps with other technologies currently operating in unlicensed bands, in particular WiFi. Since LTE uses a TDMA/OFDMA scheduled approach, coexisting WiFi networks may face starvation if the channel is fully occupied by LTE-U transmissions. In this paper we derive a novel proportional fair allocation scheme that ensures fair coexistence between LTE-U and WiFi. Importantly, we find that the proportional fair allocation is qualitatively different from previously consideredWiFi-only settings and that since the resulting allocation requires only quite limited knowledge of network parameters it is potentially easy to implement in practice, without the need for message-passing between heterogeneous networks. 
<s> BIB005 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) An Overview of Coexistence Mechanisms Related <s> We consider the adoption of listen-before-talk (LBT) for Long Term Evolution (LTE) in unlicensed spectrum, which is currently under discussion in the 3rd Generation Partnership Project (3GPP). The first and foremost task to be done to allow LTE to use unlicensed spectrum is to study the coexistence with incumbent system, such as the Wireless Local Area Network (WLAN). In our study, we observe that a proper coexistence mechanism is imperative especially when both LTE Evolved Node Bs (eNBs) and WLANs are located outdoor. The outdoor deployment scenario is considered by the next generation WLAN, the IEEE 802.11ax, with high importance. The implementation of LBT in LTE plays an effective role in this scenario in balancing the performance between LTE and WLAN. In the conventional scenario where WLANs are located indoor and LTE eNBs are located outdoor, a plain coexistence without additional mechanism could be acceptable due to high penetration loss of outdoor walls. <s> BIB006 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) An Overview of Coexistence Mechanisms Related <s> Licensed-assisted access (LAA) is a new operation mode of Long-Term Evolution (LTE) in the unlicensed spectrum currently under study in the 3GPP standardization forum. In order to coexist with Wi-Fi, some of the new functionalities required of LAA LTE include a mechanism for clear channel assessment based on listen-before-talk (LBT), discontinuous transmission on a carrier with limited maximum transmission duration, and dynamic frequency selection (DFS) for radar avoidance in certain frequency bands. 
This paper presents a detailed overview of the impact of unlicensed spectrum operation on the LTE physical layer architecture, such as downlink physical channel design, scheduling, and radio resource management. System-level simulation results are then presented for indoor and outdoor scenarios, and show that fair coexistence between LAA and Wi-Fi can be achieved and that deployment of LAA can provide a boost in Wi-Fi performance. <s> BIB007 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) An Overview of Coexistence Mechanisms Related <s> The use of Long Term Evolution (LTE) in the unlicensed 5 GHz band, referred to as LTE-U, is a promising enhancement to increase the capacity of LTE networks and meet the requirements foreseen for future systems. Nevertheless, coexistence among several LTE-U and/or Wi-Fi systems in the same band is a key technical challenge to be resolved. In this context, this paper focuses on the channel selection functionality for LTE-U enabled cells to decide the most appropriate channel to use for downlink traffic offloading in the unlicensed band. A distributed Q-learning mechanism that exploits prior experience is proposed to support this functionality, thus enabling coexistence with other systems in a smart and efficient way. The behavior of the proposed approach is illustrated in an indoor scenario with small cells from two different operators. A fully decentralized approach, where the channel selection decision-making is performed independently by each small cell in the scenario, is considered to initially assess the potentials of the Q-learning solution. Promising results are obtained revealing that the throughput achieved by the proposed approach can be between 96% and 99% of the optimum ideal achievable throughput. 
<s> BIB008 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) An Overview of Coexistence Mechanisms Related <s> With the exponential growth of wireless data traffic and the increasing scarcity of licensed spectrum, mobile network operators intend to use the unlicensed spectrum, such as the Industrial, Scientific and Medical (ISM) bands and TV white space, to offload their traffic. In the future, it is very likely that two of the most prominent wireless technologies, Long Term Evolution (LTE) and WiFi, will coexist on the same unlicensed bands. Existing simulation results show that LTE is slightly impacted by such coexistence while WiFi is severely hampered by LTE transmissions. In order to realize a harmonious LTE/WiFi coexistence, we propose two channel sensing schemes for LTE in unlicensed bands, i.e., periodic sensing and persistent sensing. Simulation results show that the proposed channel sensing schemes can provide a reasonable tradeoff between LTE and WiFi when they coexist with each other. <s> BIB009
Researches: Several existing works have studied coexistence mechanisms for LTE-LAA and Wi-Fi networks in recent years. Relevant studies are overviewed here in a logical manner. The community first analyzed the problem of LTE-LAA and Wi-Fi coexistence. For example, in BIB003 , coexistence of LTE-LAA and Wi-Fi in the TV white space band is studied. Simulation results show that in situations where LTE and Wi-Fi nodes are randomly deployed, Wi-Fi throughput can be significantly degraded by LTE interference. In , the results show that channel sharing between Wi-Fi and LTE is unfair to the Wi-Fi network to a great extent. To address these challenges, the basic idea of enabling fair coexistence of LTE-LAA and Wi-Fi networks by adjusting LTE MAC protocols was proposed. BIB001 concludes that LTE-LAA can achieve high throughput without harming Wi-Fi performance under the proposed MAC mechanisms. However, this conclusion only holds when the coexistence channel model accurately captures the interference conditions between LTE-LAA and Wi-Fi transmissions. Papers like BIB004 mathematically model how LTE would behave if a quiet period were added to it; they calculate the probability that Wi-Fi's back-off delay is less than the LTE-LAA quiet period. However, the authors consider only a purely statistical approach and neglect PHY-layer effects as well as hidden/exposed terminal problems. Papers like BIB005 suggest dividing the transmission burst time, which means that the BSs must know the exact numbers of LTE-LAA and Wi-Fi nodes; this is challenging if nodes overhear each other. Subsequently, coexistence mechanisms designed for markets with or without an LBT requirement have been proposed. Table III shows a comparative study of the coexistence schemes proposed so far. (Table IV: Progress in LTE unlicensed standardization.)
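The flavor of the statistical analysis attributed to BIB004 above can be illustrated with a simplified calculation (a hypothetical sketch, not the authors' actual model): given an LTE quiet period of length T and a Wi-Fi back-off counter drawn uniformly from the contention window, the probability that Wi-Fi seizes the channel within the quiet period follows by counting the favourable counter values:

```python
def p_backoff_within_quiet(quiet_us, cw, slot_us=9.0, difs_us=34.0):
    """P(Wi-Fi back-off finishes within an LTE quiet period of quiet_us microseconds).

    The back-off counter is uniform on {0, ..., cw-1}; total access delay is
    DIFS + counter * slot. Purely statistical toy model (no PHY effects,
    no hidden/exposed terminals), in the spirit of the approach discussed above.
    Slot and DIFS defaults follow common 802.11 OFDM timing.
    """
    favourable = sum(1 for k in range(cw) if difs_us + k * slot_us <= quiet_us)
    return favourable / cw
```

For a long quiet period the probability tends to 1, while a quiet period shorter than DIFS gives 0, matching the intuition that longer LTE silences give Wi-Fi a better chance to transmit.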
Papers like BIB006 , BIB001 , , BIB007 , and introduce coexistence algorithms by implementing contention-based channel access in LTE-LAA, i.e., LBT, and by adding collision avoidance mechanisms to LTE-LAA. Specifically, 3GPP is working on the introduction of LBT into the 3GPP standards. Progress in LTE-LAA standardization is shown in Table IV . 3GPP has also defined an LTE-LAA coexistence mechanism in TS 36.213 . An extensive study of different coexistence mechanisms has also been summarized in 3GPP TR 36.889 . However, as will be stated in Section IV-D, LBT introduces extra delay due to the contention time overhead, which can lead to inefficient channel usage. For markets with no LBT requirement, the authors in BIB008 and BIB009 propose a Channel Selection (CHS) mechanism to enable the coexistence of LTE-LAA and Wi-Fi. However, as discussed in Section IV-B, LTE-LAA has to wait until the channel becomes idle again in scenarios where no clean channel is available. As a supplement to CHS, CSAT is proposed in BIB004 . The advantages and drawbacks of CSAT as well as other duty-cycle mechanisms can be found in Section IV-D1. An approach using LTE UL power control to solve the coexistence issue of LTE-LAA and Wi-Fi networks is studied in BIB002 . Simulation results show that the proposed power control mechanism can improve the performance of both types of networks. However, power control cannot solve the coexistence problem of LTE-LAA and Wi-Fi in dense deployment scenarios. In comments to the FCC, there are different kinds of suggestions. The first kind is to leave the development of coexistence mechanisms to industry cooperation with the broader unlicensed community, e.g., IEEE 802.11 and the Wi-Fi Alliance, rather than to regulatory intervention. For instance, in , tests are conducted in an RF isolation chamber with programmable attenuators, with single Wi-Fi AP-client pairs and a single LTE-LAA eNB. Only LTE-LAA transmissions in the unlicensed bands were considered.
It concludes that the failure to coexist effectively can be attributed to two factors. One is the effect of LTE-LAA's duty-cycling mechanism on Wi-Fi operation, as will be discussed in Section IV-D. The other is the lack of effective coexistence mechanisms in scenarios where LTE-LAA and Wi-Fi devices receive signals from each other at moderate levels. It even states that LTE-LAA does not have an effective coexistence technique to handle scenarios in which LTE-LAA and Wi-Fi devices hear each other at moderate levels (below -62 dBm) and that, as a consequence, Wi-Fi can be crippled in such scenarios. Nevertheless, the accuracy of this claim has been contested by , which reflects the second kind of suggestion, namely that LTE-LAA is a better neighbor to Wi-Fi than other Wi-Fi devices are. There are also some neutral opinions. In , a series of tests and demonstrations was conducted using eight Wi-Fi routers while gradually swapping nodes between Wi-Fi and LTE-LAA, arguing that it is unfair to compare Wi-Fi's performance in an interference-free environment to its performance in the presence of LTE-LAA; a fairer comparison is to evaluate Wi-Fi's performance in the presence of other Wi-Fi nodes. One thing made clear is that different vendors will be impacted quite differently in the presence of LTE over the unlicensed band. There is furthermore a very large set of FCC filings in this area . In addition, a significant amount of LTE-U Forum testing and technical documentation can be found in . However, the most crucial details, such as the simulation models, are proprietary. The testing has shown that LTE-LAA is at least as good a neighbor to Wi-Fi as another Wi-Fi network would be, while LTE-LAA itself significantly outperforms Wi-Fi. All tests in are based on the current IEEE 802.11ac standard. In addition, in , the testing work organized by 3GPP presents the results of a study on the operation of LTE in unlicensed spectrum as an SCell.
It shows that, with proper and robust coexistence mechanisms, it is possible for LTE-LAA with an LBT scheme in 5 GHz to be a good neighbor to Wi-Fi. For example, LTE-LAA causes less adjacent-channel interference to a Wi-Fi system than another Wi-Fi system does. Based on the testing results, 3GPP also provides some recommendations for future coexistence studies. First, it is recommended that the key parameters of the LBT scheme, such as contention windows and defer periods, be configurable within limits to enable fair coexistence with other technologies operating in unlicensed spectrum. Second, it also indicates that LTE-LAA should support UL LBT at the UE. Moreover, to ensure that LTE-LAA and Wi-Fi will coexist well and to address stakeholder questions and concerns, the LTE-U Forum has been collaborating with Wi-Fi stakeholders, e.g., the Wi-Fi Alliance, CableLabs, and others in the cable industry. In particular, the Wi-Fi Alliance has posted the current test plan as well as coexistence guidance . In , tests are developed to ensure fairness to Wi-Fi; how LTE-LAA equipment passes those tests is immaterial and is not specified. Table V gives the details of each work.
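The LBT procedure discussed throughout this subsection can be sketched as follows (an illustrative simplification of category-4 LBT; the slot granularity and parameter handling are assumptions for clarity, not the TS 36.213 specification). The sender defers while the channel is sensed busy, decrements its random back-off counter only during idle slots, and transmits once the counter reaches zero:

```python
import random

def lbt_wait_slots(channel_idle, backoff):
    """Category-4-style LBT sketch.

    channel_idle : sequence of booleans, one per CCA slot (True = idle)
    backoff      : initial back-off counter (drawn uniformly from the
                   contention window in the real scheme; injectable here
                   so the behavior is deterministic)
    Returns the index of the slot in which transmission starts,
    or None if the counter never reaches zero within the trace.
    """
    counter = backoff
    for i, idle in enumerate(channel_idle):
        if idle:
            if counter == 0:
                return i  # channel idle and back-off expired: transmit
            counter -= 1  # decrement only during idle slots
    return None

def draw_backoff(cw, rng=random):
    """Uniform draw from {0, ..., cw-1}, as in exponential back-off."""
    return rng.randrange(cw)
```

Note how busy slots freeze the counter: with the trace `[False, False, True]` and a zero counter, transmission only starts at slot 2, once the channel finally goes idle.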
Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> B. Coexistence Mechanisms in Markets Without LBT Requirement <s> The recent development of regulatory policies that permit the use of TV bands spectrum on a secondary basis has motivated discussion about coexistence of primary (e.g. TV broadcasts) and secondary users (e.g. WiFi users in TV spectrum). However, much less attention has been given to coexistence of different secondary wireless technologies in the TV white spaces. Lack of coordination between secondary networks may create severe interference situations, resulting in less efficient usage of the spectrum. In this paper, we consider two of the most prominent wireless technologies available today, namely Long Term Evolution (LTE), and WiFi, and address some problems that arise from their coexistence in the same band. We perform exhaustive system simulations and observe that WiFi is hampered much more significantly than LTE in coexistence scenarios. A simple coexistence scheme that reuses the concept of almost blank subframes in LTE is proposed, and it is observed that it can improve the WiFi throughput per user up to 50 times in the studied scenarios. <s> BIB001 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> B. Coexistence Mechanisms in Markets Without LBT Requirement <s> The expansion of wireless broadband access network deployments is resulting in increased scarcity of available radio spectrum. It is very likely that in the near future, cellular technologies and wireless local area networks will need to coexist in the same unlicensed bands. However, the two most prominent technologies, LTE and Wi-Fi, were designed to work in different bands and not to coexist in a shared band. In this article, we discuss the issues that arise from the concurrent operation of LTE and Wi-Fi in the same unlicensed bands from the point of view of radio resource management. 
We show that Wi-Fi is severely impacted by LTE transmissions; hence, the coexistence of LTE and Wi-Fi needs to be carefully investigated. We discuss some possible coexistence mechanisms and future research directions that may lead to successful joint deployment of LTE and Wi-Fi in the same unlicensed band. <s> BIB002 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> B. Coexistence Mechanisms in Markets Without LBT Requirement <s> Innovations enabling efficient spectrum utilization is a key element to optimize user experience with growing data demand. This paper discusses the approach of extending enhancements in cellular technology like LTE to unlicensed band for higher spectral efficiency and better user experience. A key challenge for such extension is the coexistence with legacy technology such as Wi-Fi. The description herein highlights techniques for effective coexistence. The results include evaluation and lab data that demonstrate how the technology provide benefit to surrounding Wi-Fi deployment and contribute towards enhancing spectral efficiency of the unlicensed band. <s> BIB003
In those markets where no LBT is required, with carefully designed coexistence mechanisms, resource sharing between LTE-LAA and Wi-Fi in the unlicensed band can be managed fairly without modifying the Release 10/11 PHY/MAC standards. LTE-LAA duty cycling is proposed to release resources to the Wi-Fi network. One practical way to implement duty cycling is a coexistence mechanism centralized by CSAT BIB003 , . Another feasible methodology is assisted by ABS (Almost Blank Subframes) BIB002 , , BIB001 .
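The ABS-assisted approach can be sketched as follows (illustrative only; a real almost-blank subframe still carries reference signals, and the blanking pattern is signalled via eICIC, both of which this toy model ignores). LTE blanks a configured subset of the subframes in each 10-subframe frame, and the blanked fraction is the airtime released to Wi-Fi:

```python
def abs_pattern(blank_subframes, frame_len=10):
    """Build one LTE frame as a list of 'LTE'/'ABS' labels.

    Subframes listed in blank_subframes are left (almost) blank so that
    a coexisting Wi-Fi network can use them.
    """
    blanks = set(blank_subframes)
    return ['ABS' if i in blanks else 'LTE' for i in range(frame_len)]

def wifi_airtime_share(blank_subframes, frame_len=10):
    """Fraction of the frame released to Wi-Fi by the ABS pattern."""
    return len(set(blank_subframes)) / frame_len
```

For example, blanking subframes 1, 3, and 5 releases 30% of the frame to Wi-Fi; choosing the blanking density is exactly the throughput trade-off BIB001 explores.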
Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Coexistence Mechanism Centralized by CSAT: <s> Systems and methods for Carrier Sense Adaptive Transmission (CSAT) and related operations in unlicensed spectrum are disclosed to reduce interference between co-existing Radio Access Technologies (RATs). The parameters for a given CSAT communication scheme may be adapted dynamically based on received signals from a transceiver for a native RAT to be protected and an identification of how that RAT is utilizing a shared resource such as an unlicensed band. Other operations such as Discontinuous Reception (DRX) may be aligned with a CSAT Time Division Multiplexed (TDM) communication pattern by way of a DRX broadcast / multicast message. Different TDM communication patterns may be staggered in time across different frequencies. Channel selection for a co-existing RAT may also be configured to afford further protection to native RATs by preferring operation on secondary channels as opposed to primary channels. <s> BIB001 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Coexistence Mechanism Centralized by CSAT: <s> Rapid increases in mobile data demand and inherently limited RF spectrum motivate the use of dynamic spectrum sharing between different radio technologies such as WiFi and LTE, most notably in small cell (HetNet) scenarios. This paper provides a analytical framework for interference characterization of WiFi and LTE for dense deployment scenarios with spatially overlapping coverage. The first model developed is for single LTE and single WiFi access points separated by a specified distance. Results obtained for that model demonstrate the fact that WiFi is significantly degraded by a nearby LTE system, while LTE degradation is minimal as long as the WiFi system is within carrier sense range. 
A second model for multiple WiFi and multiple LTE systems further demonstrates the fact that LTE causes significant degradation to WiFi and that overall system throughput first increases and then decreases with growing density. Intra- and inter- system channel coordination schemes are considered as a means of improving system performance, and results are presented showing 4-5x gains in system capacity over comparable no coordination cases. <s> BIB002 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Coexistence Mechanism Centralized by CSAT: <s> The phenomenal growth of mobile data demand has brought about increasing scarcity in available radio spectrum. Meanwhile, mobile customers pay more attention to their own experience, especially in communication reliability and service continuity on the move. To address these issues, LTE-Unlicensed, or LTEU, is considered one of the latest groundbreaking innovations to provide high performance and seamless user experience under a unified radio technology by extending LTE to the readily available unlicensed spectrum. In this article, we offer a comprehensive overview of the LTEU technology from both operator and user perspectives, and examine its impact on the incumbent unlicensed systems. Specifically, we first introduce the implementation regulations, principles, and typical deployment scenarios of LTE-U. Potential benefits for both operators and users are then discussed. We further identify three key challenges in bringing LTE-U into reality together with related research directions. In particular, the most critical issue of LTE-U is coexistence with other unlicensed systems, such as widely deployed WiFi. The LTE/WiFi coexistence mechanisms are elaborated in time, frequency, and power aspects, respectively. 
Simulation results demonstrate that LTE-U can provide better user experience to LTE users while well protecting the incumbent WiFi users’ performance compared to two existing advanced technologies: cellular/WiFi interworking and licensed-only heterogeneous networks (Het-Nets). <s> BIB003 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Coexistence Mechanism Centralized by CSAT: <s> Innovations enabling efficient spectrum utilization is a key element to optimize user experience with growing data demand. This paper discusses the approach of extending enhancements in cellular technology like LTE to unlicensed band for higher spectral efficiency and better user experience. A key challenge for such extension is the coexistence with legacy technology such as Wi-Fi. The description herein highlights techniques for effective coexistence. The results include evaluation and lab data that demonstrate how the technology provide benefit to surrounding Wi-Fi deployment and contribute towards enhancing spectral efficiency of the unlicensed band. <s> BIB004
One example of cellular operation combining three different techniques is given in BIB001 . As shown in Fig. 5 , the workflow can be divided into three steps. First, the CHS performs scanning procedures to classify the different channels based on their conditions. If a clean channel is identified, a corresponding SCell (e.g., LTE-LAA) can be operated without concern for co-channel communications. In practice, if interference is found on the current operating channel, the LTE-LAA transmission is switched to a cleaner channel with less interference using LTE Release 10/11 procedures. Note that the interference level in this example can be measured by energy detection for the sake of simplicity, as is done in BIB002 , where interference types and sources are not considered. On the other hand, in consideration of the multiple incumbent wireless technologies besides Wi-Fi in the unlicensed spectrum, such as radar and satellite signals, a higher-level interference detection scheme, whose sensitivity is improved by collecting information on the types and quantities of sources, can be performed BIB003 . CHS is often sufficient to meet the Wi-Fi and LTE-LAA coexistence requirement as long as the traffic density is low . On the contrary, in areas of dense deployments, where no clean channel is available, a further process, i.e., Opportunistic SDL (OSDL), should be utilized to reduce the impact on co-channel communications. Input from CHS algorithms, as well as from various measurements, traffic buffers and schedulers, can optionally be used by OSDL to determine whether there exists enough traffic to support a secondary carrier. If so, an SCell supporting the relevant secondary carrier can initially be enabled in a deconfigured state, then be configured and activated with the help of an additional process such as CSAT, which is designed to improve coexistence performance.
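The CHS scanning step described above amounts to ranking candidate channels by measured interference and testing the best one against a cleanliness threshold. A minimal sketch, where the threshold value and the dict-based interface are illustrative assumptions rather than part of any specification:

```python
def select_channel(interference_dbm, clean_threshold_dbm=-82.0):
    """Rank candidate channels by measured interference energy.

    interference_dbm: dict mapping channel id -> measured energy (dBm).
    Returns (best_channel, is_clean). The threshold is a hypothetical
    value chosen for illustration only.
    """
    best = min(interference_dbm, key=interference_dbm.get)
    return best, interference_dbm[best] <= clean_threshold_dbm

# A clean channel exists: the SCell can operate on channel 36 directly.
print(select_channel({36: -90.0, 40: -55.0, 44: -60.0}))
# No clean channel: OSDL/CSAT would be needed on the least-interfered one.
print(select_channel({40: -55.0, 44: -60.0}))
```

When no channel passes the threshold, the second return value signals that the further OSDL/CSAT processing described above is required.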
Otherwise, if not enough traffic is available, the SCell is disabled BIB001 . CSAT was initially proposed by Qualcomm for LTE-LAA MAC scheduling . During CSAT operation, the SCell remains configured; however, once the traffic level drops below a certain threshold, the SCell returns to the deconfigured state. The key idea of CSAT is to define a time-division multiplexing cycle for LTE-LAA transmission: during short durations where CSAT is enabled, namely the CSAT ON periods, the SC may transmit at a relatively high power, while during the remaining CSAT OFF periods the SC, although it remains configured, operates at a relatively low power or even gates off in order to avoid competing with Wi-Fi BIB003 . Measurements of resource utilization performed by user devices and/or small BSs can be used as references to adapt the CSAT parameters accordingly , BIB001 , BIB002 . In other words, one Radio Access Technology (RAT) (e.g., LTE-LAA) requests a measurement from another RAT (e.g., Wi-Fi) and identifies its utilization based on the received signals. Fig. 6 shows an example of how messages are exchanged between two different RATs during measurement time BIB001 . The whole workflow again consists of three steps. In the first step, the LTE-LAA Self-organizing Network (SON) sends a message to the LTE-LAA stack to notify it that a measurement gap is upcoming on the shared unlicensed band, and then commands the LTE-LAA radio to temporarily turn off transmission on the unlicensed band. The purpose of this step is to guarantee that LTE-LAA transmission will not interfere with the measurements. Next, the LTE-LAA SON sends a message to the co-located Wi-Fi SON requesting that a measurement be taken on the unlicensed band; the Wi-Fi SON then commands the Wi-Fi RF to measure how Wi-Fi is currently utilizing the unlicensed band.
In the final step, the measurement report containing the results of the measurements goes back to the LTE-LAA SON, which may send permission to the LTE-LAA RF and the LTE-LAA stack separately in order to turn LTE-LAA transmission back on and modify communication. By adjusting parameters such as the cyclic on/off ratio and the transmission powers during the CSAT ON or OFF periods based on the current signaling conditions, resource sharing between LTE-LAA and Wi-Fi in the same unlicensed spectrum can be optimized, thus leading to better coexistence performance. Take the representative CSAT communication scheme shown in Fig. 7 as an example BIB001 : if the utilization of a given channel by Wi-Fi devices needs to be high, the usage of the channel by the LTE-LAA radio can be reduced by pulling its transmission power back or bringing the cyclic on/off ratio down, and vice versa. The CSAT ON/OFF period duration also differs across solutions. In some articles, such as BIB001 , the length of the CSAT cycle is designed to be greater than 200 ms to guarantee that user devices have sufficient opportunity to measure the channel condition at least once. Moreover, a longer CSAT cycle means higher capacity because of lower overhead in carrier activation BIB004 . On the flip side, a shorter CSAT cycle reduces the latency impact on delay-sensitive Wi-Fi traffic. According to the coexistence specification from the LTE-U Forum, the maximum length of the CSAT ON/OFF period is 50 ms . These positions on the cycle length are contradictory, and so far no authoritative result has been reached. In general, coexistence mechanisms centralized around CSAT enjoy several advantages. One, as mentioned before, is that using CHS, OSDL and CSAT as a group ensures fair and efficient channel sharing between LTE-LAA nodes and Wi-Fi APs. Another major benefit is that such a mechanism does not bring any change to the underlying RAT communication protocol , BIB001 .
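The adaptation of CSAT parameters to measured Wi-Fi utilization can be sketched as follows; the linear mapping, the bounds and the 200 ms default cycle are illustrative assumptions, not standardized values:

```python
def adapt_csat(wifi_utilization, cycle_ms=200, min_on=0.1, max_on=0.9):
    """Return (csat_on_ms, csat_off_ms) for one CSAT cycle.

    Higher measured Wi-Fi utilization -> smaller LTE-LAA ON share,
    as in Fig. 7. The linear rule and the clamping bounds are
    illustrative choices, not part of any specification.
    """
    on_share = max(min_on, min(max_on, 1.0 - wifi_utilization))
    on_ms = round(cycle_ms * on_share)
    return on_ms, cycle_ms - on_ms

# A heavily used Wi-Fi channel yields a short LTE-LAA ON period.
print(adapt_csat(0.8))   # small ON share
print(adapt_csat(0.0))   # Wi-Fi idle: ON share capped at max_on
```

In a real deployment the input would come from the Wi-Fi utilization measurements exchanged in the SON workflow described above; transmission power could be scaled alongside, or instead of, the on/off ratio.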
There is no doubt, however, that a weakness remains in CSAT itself, namely its longer latency compared to CSMA. To mitigate this, on the one hand, CSAT needs to prevent Wi-Fi APs from occupying the primary channel . On the other hand, the data-punctured subframes inserted periodically in Fig. 7 are also capable of minimizing the latency impact BIB004 . In particular, data puncturing divides the CSAT ON period shown above into two parts: the short CSAT ON period, i.e., the data-puncturing period, and the short CSAT OFF period, i.e., the time in which no data is transmitted.
Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 2) Coexistence Mechanism Assisted by ABS: <s> Cellular operators have been offloading data traffic from their licensed bands to unlicensed bands through a large number of WiFi hotspots over the past years. Although this approach improves the cellular network capacity to some extent, it falls short of getting significant throughput gains. In this paper, it is argued that femtocells, covering a short range, can be a perfect platform to jointly exploit the merits of both licensed and unlicensed frequency bands. In particular, a framework is proposed for a femtocell to simultaneously access both licensed and unlicensed bands. The performance of coexisting femtocell and WiFi networks operating over a fully-utilized unlicensed band are analytically modeled and are verified via simulations. Impact of femtocell channel access parameters on the performance of WiFi and cellular networks is also investigated, shedding light on how a femtocell can best adjust its channel access parameters to coexist with incumbent unlicensed spectrum users like WiFi networks. <s> BIB001 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 2) Coexistence Mechanism Assisted by ABS: <s> This paper presents the system performance analysis of 3GPP Long-Term Evolution (LTE) and IEEE 802.11 Wireless Local Area Networks (WLAN) in a situation where LTE downlink (DL) has been expanded over to unlicensed frequency band usually used by WLAN. Simple fractional bandwidth sharing mechanism is used to allow both technologies to transmit. The system performance is evaluated by means of fully dynamic network simulations. 
<s> BIB002 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 2) Coexistence Mechanism Assisted by ABS: <s> The expansion of wireless broadband access network deployments is resulting in increased scarcity of available radio spectrum. It is very likely that in the near future, cellular technologies and wireless local area networks will need to coexist in the same unlicensed bands. However, the two most prominent technologies, LTE and Wi-Fi, were designed to work in different bands and not to coexist in a shared band. In this article, we discuss the issues that arise from the concurrent operation of LTE and Wi-Fi in the same unlicensed bands from the point of view of radio resource management. We show that Wi-Fi is severely impacted by LTE transmissions; hence, the coexistence of LTE and Wi-Fi needs to be carefully investigated. We discuss some possible coexistence mechanisms and future research directions that may lead to successful joint deployment of LTE and Wi-Fi in the same unlicensed band. <s> BIB003 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 2) Coexistence Mechanism Assisted by ABS: <s> Femtocell and WiFi play crucial roles in sustaining the continued growth in mobile traffic. Deploying Femtocells in WiFi hotspots would allow the access providers to provide more capacity for users and improve their quality of experience during mobility. Hence, the co-existence of Femtocell and WiFi carries critical importance for improving the total performance of the users and meeting the promised quality of service (QoS) satisfaction of Femtocell end users. In this paper, we propose and develop a framework allowing to make use of unlicensed band and to increase the total throughput of Femtocells while offloading the traffic of Femtocell users to unlicensed bands in case of severe interference with Macrocell. 
The channel access of both Femtocell and WiFi networks are analytically modeled and numerically verified. Moreover, the effects of WiFi channel access parameters on the performance of WiFi and Femtocell networks are investigated. Numerical evaluation of our proposed scheme show that by adequately tuning and giving priority, the throughput of small cells and utilization of unlicensed spectrum have been improved. <s> BIB004 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 2) Coexistence Mechanism Assisted by ABS: <s> The phenomenal growth of mobile data demand has brought about increasing scarcity in available radio spectrum. Meanwhile, mobile customers pay more attention to their own experience, especially in communication reliability and service continuity on the move. To address these issues, LTE-Unlicensed, or LTEU, is considered one of the latest groundbreaking innovations to provide high performance and seamless user experience under a unified radio technology by extending LTE to the readily available unlicensed spectrum. In this article, we offer a comprehensive overview of the LTEU technology from both operator and user perspectives, and examine its impact on the incumbent unlicensed systems. Specifically, we first introduce the implementation regulations, principles, and typical deployment scenarios of LTE-U. Potential benefits for both operators and users are then discussed. We further identify three key challenges in bringing LTE-U into reality together with related research directions. In particular, the most critical issue of LTE-U is coexistence with other unlicensed systems, such as widely deployed WiFi. The LTE/WiFi coexistence mechanisms are elaborated in time, frequency, and power aspects, respectively. 
Simulation results demonstrate that LTE-U can provide better user experience to LTE users while well protecting the incumbent WiFi users’ performance compared to two existing advanced technologies: cellular/WiFi interworking and licensed-only heterogeneous networks (Het-Nets). <s> BIB005 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 2) Coexistence Mechanism Assisted by ABS: <s> The use of the unlicensed spectrum by LTE networks (LTE-U or LAA-LTE) is being considered by mobile operators in order to satisfy increasing traffic demands and to make better use of the licensed spectrum. However, coexistence issues arise when LTE-U coverage overlaps with other technologies currently operating in unlicensed bands, in particular WiFi. Since LTE uses a TDMA/OFDMA scheduled approach, coexisting WiFi networks may face starvation if the channel is fully occupied by LTE-U transmissions. In this paper we derive a novel proportional fair allocation scheme that ensures fair coexistence between LTE-U and WiFi. Importantly, we find that the proportional fair allocation is qualitatively different from previously consideredWiFi-only settings and that since the resulting allocation requires only quite limited knowledge of network parameters it is potentially easy to implement in practice, without the need for message-passing between heterogeneous networks. <s> BIB006
Another mechanism called LTE muting, similar in spirit to CSAT, has also been proposed; it can be summarized as preventing different RATs from accessing the channel at the same time, i.e., in n of every 5 subframes LTE-LAA is turned off and Wi-Fi users replace LTE-LAA nodes in using the channel resource BIB005 . Another example of a fair allocation scheme is to assign equal channel time to every competing entity, counting the Wi-Fi network's idle periods, successful transmissions and collisions BIB006 . Moreover, Wi-Fi users may spend a lot of time in backoff if many users try to access the network at the same time; Wi-Fi performance would not necessarily degrade if LTE-LAA could exploit those silent times BIB002 . In these examples, communication among the different network technologies is not required; such communication, which is used to adapt CSAT parameters, cannot always be ensured when devices belong to different operators. Time-sharing coexistence techniques requiring LTE silent periods can exploit ABSs, a key feature introduced in Release 10, as a base BIB003 . ABSs are LTE subframes with reduced DL transmission activity or power. By muting the transmission power of the SCs in certain subframes, the interference caused by Macro eNBs to Pico eNBs in HetNets is reduced BIB005 , BIB003 . Building on this work, a probability for LTE-LAA to access the channel is defined in BIB001 - BIB004 . A survey summarizing an example ABS-assisted coexistence mechanism has also been published BIB003 . It concludes that LTE-LAA activity in unlicensed spectrum can be controlled with the help of a modified version of ABS, in which UL and/or DL subframes can be silenced and no LTE common reference signals are included. It is shown that Wi-Fi is able to reuse the blank subframes ceded by LTE, and that its throughput increases with the number of null subframes.
However, since LTE throughput decreases almost proportionally to the number of ceded blank subframes, a tradeoff is established. Additional LTE performance degradation may be observed if the blank subframes are nonadjacent, since Wi-Fi transmissions are then not completely confined within the LTE silent periods. However, if the duration and occurrence of the LTE blank subframes are reported to Wi-Fi during the negotiation phase, Wi-Fi nodes may be able to confine their transmissions within the blank subframes and thus avoid interference with LTE.
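The "n of every 5 subframes" muting described above, together with the adjacency tradeoff, can be sketched as a subframe bitmap generator; this is a toy illustration, not actual ABS signaling:

```python
def abs_pattern(n_blank, period=5, adjacent=True):
    """Build a subframe bitmap: 1 = LTE-LAA transmits, 0 = blank (ABS).

    n_blank of every `period` subframes are ceded to Wi-Fi. Keeping the
    blanks adjacent helps confine Wi-Fi bursts within the silent span,
    which the text notes avoids extra LTE degradation.
    """
    if not 0 <= n_blank <= period:
        raise ValueError("n_blank must lie within the period")
    if adjacent:
        return [1] * (period - n_blank) + [0] * n_blank
    # Nonadjacent case: spread the blanks as evenly as possible.
    pattern = [1] * period
    for i in range(n_blank):
        pattern[(i * period) // n_blank] = 0
    return pattern
```

For example, `abs_pattern(2)` cedes the last two subframes of each 5-subframe period as one contiguous gap, while `abs_pattern(2, adjacent=False)` scatters them.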
Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> C. Coexistence Mechanisms in Markets With LBT Requirement <s> We consider the adoption of listen-before-talk (LBT) for Long Term Evolution (LTE) in unlicensed spectrum, which is currently under discussion in the 3rd Generation Partnership Project (3GPP). The first and foremost task to be done to allow LTE to use unlicensed spectrum is to study the coexistence with incumbent system, such as the Wireless Local Area Network (WLAN). In our study, we observe that a proper coexistence mechanism is imperative especially when both LTE Evolved Node Bs (eNBs) and WLANs are located outdoor. The outdoor deployment scenario is considered by the next generation WLAN, the IEEE 802.11ax, with high importance. The implementation of LBT in LTE plays an effective role in this scenario in balancing the performance between LTE and WLAN. In the conventional scenario where WLANs are located indoor and LTE eNBs are located outdoor, a plain coexistence without additional mechanism could be acceptable due to high penetration loss of outdoor walls. <s> BIB001 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> C. Coexistence Mechanisms in Markets With LBT Requirement <s> With the exponential growth of wireless data traffic and the increasing scarcity of licensed spectrum, mobile network operators intend to use the unlicensed spectrum, such as the Industrial, Scientific and Medical (ISM) bands and TV white space, to offload their traffic. In the future, it is very likely that two of the most prominent wireless technologies, Long Term Evolution (LTE) and WiFi, will coexist on the same unlicensed bands. Existing simulation results show that LTE is slightly impacted by such coexistence while WiFi is severely hampered by LTE transmissions. 
In order to realize a harmonious LTE/WiFi coexistence, we propose two channel sensing schemes for LTE in unlicensed bands, i.e., periodic sensing and persistent sensing. Simulation results show that the proposed channel sensing schemes can provide a reasonable tradeoff between LTE and WiFi when they coexist with each other. <s> BIB002 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> C. Coexistence Mechanisms in Markets With LBT Requirement <s> Unlicensed LTE system holds high potential to effectively offload data traffic/video traffic from the crowded LTE networks. Listen-before-talk is generally necessary for unlicensed LTE to coexist with existing transmissions in the unlicensed band (e.g. wireless LAN). This paper studies the interaction of clear channel assessment (how hard one should listen before talk) and transmit power control (how loud one should talk) in unlicensed LTE systems. <s> BIB003
In many markets where an LBT requirement exists, various modifications are required to adapt the LTE PHY/MAC. For example, LBT using Clear Channel Assessment (CCA) to determine whether a particular channel is available is needed to meet regional requirements. The concept of a beacon signal is also introduced to reserve the channel for transmission following LBT . A node having data to transmit should first perform a CCA to determine the availability of the spectrum band, i.e., whether the channel is clean or already occupied by signals transmitted by other operators or by radar. If a clean channel is available, this CCA procedure will contend for use of the radio frequency spectrum band. Upon a successful first CCA procedure, one or more additional CCA procedures will be performed during Discontinuous Transmission (DTX) periods to determine the continued availability of the radio frequency spectrum band BIB001 , . Fig. 8 shows a case of CCA placement options in an example DL frame structure . Subframe S (e.g., subframe 9) may be used to hold the succeeding transmission resources. It may work as CCA, DTX, or Channel Usage Beacon Signals (CUBS). A subframe S consists of a Guard Period (GP), several slots for CCA placement, and CUBS in the remaining symbols. A GP is provided prior to the CCA placement to guarantee the idle time. The number of slots for CCA placement varies among papers, being as few as 2 in BIB002 . However, as is emphasized in , the number of slots for CCA placement may be referred to as a CCA reuse factor, which can be 3, 4, 7, 9 or 12. The reuse factor adopted in Fig. 8 is 7 . If the CCA procedure succeeds, meaning the node will grab and hold the medium until the start of the next subframe S, CUBS may block the transmissions of other nearby users by notifying other nodes performing CCA later in the same subframe S that the medium has been occupied. It is necessary to set the CCA threshold appropriately in order to protect nearby WLAN transmissions. The ability of devices to coexist is highly dependent upon their ability to detect one another at low RF levels. Raising the threshold protects only a smaller area around the eNB. Moreover, if the LBT threshold is too high, the case with LBT becomes ineffective, since it is then equivalent to the one without LBT. Lowering the threshold leads to a wider protected area, but reduces the chance for the eNB to transmit at the same time BIB001 . The CCA threshold also varies with the two types of CCA techniques defined in the IEEE 802.11 specification: energy-based CCA and preamble-based CCA. In the former case, the transmitter only measures the total received power and does not require any knowledge of the signal structure or packet format. Preamble-based CCA, on the other hand, is achieved with a cross-correlation module. In IEEE 802.11, the transmitter declares the channel busy when the total received power is larger than -62 dBm when using energy-based CCA in 20 MHz. This threshold changes to -82 dBm when using preamble-based CCA. Since in LTE-LAA either energy-based CCA or preamble-based CCA, or even both, may be used, the CCA threshold should also be set carefully in different scenarios BIB003 . During the example DTX periods shown in Fig. 9 [87], upon a successful first CCA procedure, one or more second CCA procedures may be performed to determine the continued availability of the radio frequency band. If the first CCA does not succeed, the eNB will not transmit, nor will it perform any further CCA until the next transmission period.
On the contrary, if the first CCA succeeds but the second CCA procedures fail during a DTX period, the transmission stops until a subsequent second CCA indicates that the radio frequency band is available again.
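The two CCA thresholds quoted above (-62 dBm energy-based, -82 dBm preamble-based, at 20 MHz) and the first/second-CCA logic of the DTX period can be sketched as follows; the threshold constants come from the 802.11 values cited in the text, while the function interfaces are illustrative assumptions:

```python
ENERGY_CCA_DBM = -62.0    # 802.11 energy-detection threshold at 20 MHz
PREAMBLE_CCA_DBM = -82.0  # 802.11 preamble-detection threshold

def channel_busy(rx_power_dbm, preamble_detected=False):
    """Declare the channel busy per the applicable 802.11-style threshold."""
    threshold = PREAMBLE_CCA_DBM if preamble_detected else ENERGY_CCA_DBM
    return rx_power_dbm > threshold

def transmitted_intervals(first_cca_rx, second_cca_rxs):
    """Count successful transmission intervals in one DTX period:
    none if the first CCA fails; otherwise one per idle second CCA,
    stopping at the first busy one (a simplification of Fig. 9)."""
    if channel_busy(first_cca_rx):
        return 0  # lost contention: stay silent this period
    count = 1
    for rx in second_cca_rxs:
        if channel_busy(rx):
            break
        count += 1
    return count
```

The sketch also illustrates the sensitivity point made above: with preamble detection enabled, a -70 dBm signal defers the eNB, whereas pure energy detection would let it transmit.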
Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Lessons Learnt from Different Coexistence Mechanisms <s> Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: /spl middot/ highly reliable communication whenever and wherever needed; /spl middot/ efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio. <s> BIB001 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Lessons Learnt from Different Coexistence Mechanisms <s> In this work we present a case for dynamic spectrum sharing between different operators in systems with carrier aggregation (CA) which is an important feature in 3GPP LTE-A systems. Cross-carrier scheduling and sensing are identified as key enablers for such spectrum sharing in LTE-A. Sensing is classified as Type 1 sensing and Type 2 sensing and the role of each in the system operation is discussed. The more challenging Type 2 sensing which involves sensing the interfering signal in the presence of a desired signal is studied for a single-input single-output system. Energy detection and the most powerful test are formulated. 
The probability of false alarm and of detection are analyzed for energy detectors. Performance evaluations show that reasonable sensing performance can be achieved with the use of channel state information, making such sensing practically viable. <s> BIB002 </s> Coexistence of LTE-LAA and Wi-Fi on 5 GHz With Corresponding Deployment Scenarios: A Survey <s> 1) Lessons Learnt from Different Coexistence Mechanisms <s> Unlicensed spectrum, around 5 GHz, will be introduced to Long Term Evolution (LTE) systems, referred to as LTEUnlicensed (LTE-U), to combat the explosive growth of traffic volume in next 10 years. In this paper, the interference-controlled power allocation problem is studied for LTE-U systems, which can be inherently classified as orthogonal frequency division multiplexing (OFDM)-based cognitive radio (CR) systems, where optimal power allocation algorithms are currently available by resorting to computation-intensively numerical iterations with a moderate risk of divergence. In order to satisfy the rigorous algorithm requirements, i.e., convergent outputs of power allocation and running time of milliseconds, for practical LTE-U deployments, a cap-limited waterfilling method is proposed to regulate the interference to primary users one by one successively, by which not only a near-optimal solution can be obtained, but also the intractable iteration divergence and computation complexity issues can be excluded completely. Simulation results indicate the capacity performance of the proposed low-complexity method approaches to the optimal solution with a slight loss less than 5%, and is remarkably superior to the existing suboptimal methods. <s> BIB003
Comparison: In general, for markets where no LBT is required, LTE-LAA's primary coexistence mechanisms can be summarized as duty-cycling, i.e., cycling LTE-LAA through ON/OFF periods. The main advantage of duty-cycling is that it requires fewer changes to LTE and does not require any ad-hoc standardization effort. This availability is attractive to operators who need to increase capacity in the short term, especially if they plan to deploy LTE-LAA in environments where free channels are available and hence fair coexistence with Wi-Fi is easy to achieve. However, duty-cycling itself has a weakness: as stated in Section IV-B, under duty-cycling it is the LTE-LAA cell that decides how much fairness to allow, and Wi-Fi networks can only adapt to the rules set by LTE-LAA. In other words, it is the LTE-LAA device that controls the ON/OFF cycle. (Fig. 9: Illustration of DTX periods and DL CCA intervals .) Due to this situation, duty-cycling may lead to poor performance for Wi-Fi devices. Furthermore, although longer LTE-LAA OFF times can lead to a lower percentage of errors and thus excellent throughput, better delay and latency performance for Wi-Fi devices requires shorter LTE-ON times. LTE-LAA duty-cycle parameters may affect Wi-Fi performance; thus the selection of the cycle period is critical to Wi-Fi network performance . As shown in Fig. 7 , data gaps punctured into subframes and inserted periodically are also capable of resolving this conflict by minimizing the latency impact on delay-sensitive Wi-Fi traffic.
However, new challenges arise with the introduction of this method. First, the introduction of these gaps can exacerbate the rate-control problem. Second, delay-critical frames may not be transmitted during the short gaps. Compared to duty-cycling, the addition of LBT brings several benefits. For example, the LBT requirement degrades LTE-LAA performance and hence reduces the benefits of LTE-LAA over Wi-Fi, thus improving Wi-Fi throughput . What's more, LBT itself allows for a distribution of spectrum resources that takes the traffic load of each coexisting network into account. On the other hand, LTE-LAA with LBT also has some weaknesses. As stated in Table IV , the LBT standardization was just completed in March 2016, so LBT is more onerous to implement than duty-cycling. What's more, the impact on Wi-Fi varies with how LBT is implemented. In fact, 3GPP designs four kinds of channel access schemes : a) No LBT: no LBT procedure is performed by the transmitting entity. b) LBT without random backoff: the duration of time that the channel must be sensed idle before the transmitting entity transmits is deterministic. c) LBT with random backoff in a contention window of fixed size: the transmitting entity draws a random number N within a contention window whose size, specified by the minimum and maximum values of N, is fixed; N determines the duration of time that the channel must be sensed idle before the transmitting entity transmits on the channel. d) LBT with random backoff in a contention window of variable size: here too the transmitting entity draws a random number N within a contention window whose size is specified by the minimum and maximum values of N.
The transmitting entity can vary the size of the contention window when drawing the random number N, which is used in the LBT procedure to determine the duration of time that the channel must be sensed idle before the transmitting entity transmits on the channel. Wi-Fi performance itself benefits from variable backoff periods. Nevertheless, 3GPP is also considering a fixed backoff period for LTE-LAA. Table III further shows a comparative study of proposed LBT schemes. Generally speaking, duty-cycling mechanisms are commonly regarded as more aggressive and unfair than LBT because they do not abide by the same rules as Wi-Fi. However, adding LBT to LTE-LAA may take away LTE's advantages. It has also been shown that the choice of channel access scheme really matters, i.e., not all LBT schemes provide fair coexistence . The introduction of LBT also requires MAC/PHY modifications, as discussed in Section IV-C. For more details, refer to Table VI. 2) Lessons Learnt from Cognitive Radio: To ensure fairness, the unlicensed spectrum is supposed to be shared without preference. Although coexistence mechanisms have been designed to ensure that existing systems are minimally interfered with, potential interference can still appear in existing systems. Such interference occurs when the primary system begins to transmit right after, or shortly after, the secondary system starts its transmission. For the different RANs in unlicensed spectrum, Wi-Fi users can be regarded as the primary users (PUs), since Wi-Fi is the prevalent technology in 5 GHz; subsequent users, such as LTE-LAA users, can then be regarded as secondary users (SUs). Since CR was initially designed to exploit white spaces, including unlicensed spectrum, efficiently, it is natural to utilize the attributes of CR to optimize LTE-LAA in 5 GHz. That means frequency-agile modems, which can rapidly switch channels if interference is present, are needed.
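Among the channel access schemes listed above, scheme (d), LBT with random backoff in a variable-size contention window, mirrors Wi-Fi's exponential backoff. A minimal sketch, with illustrative window sizes rather than 3GPP-fixed values:

```python
import random

def draw_backoff(failures, cw_min=16, cw_max=1024):
    """Draw N, the number of idle sensing slots required before transmitting.

    The contention window doubles after each failed attempt, Wi-Fi-style,
    capped at cw_max (scheme (d)); with a constant window this would
    degenerate to scheme (c). Window sizes here are assumed defaults.
    """
    cw = min(cw_max, cw_min * (2 ** failures))
    return random.randrange(cw)
```

After a collision or a busy-channel verdict, the caller would increment `failures` and redraw, so persistent contention stretches the average wait, which is the load-aware behaviour credited to LBT above.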
The FCC defines CR as a radio that can change its transmission parameters based on interaction with the environment where it operates BIB001 . The main goal of CR is to identify unused licensed spectrum for the SUs without causing interference to the PUs. CR involves both spectrum sensing and channel switching techniques. Spectrum sensing is the ability to measure, sense and be aware of the parameters related to the radio channel characteristics. Spectrum sensing in CR networks is done for two purposes: one is to identify spectrum opportunities, the other is to detect interference in the spectrum. Channel switching techniques include predictive channel switching, random channel switching and optimal channel switching. The predictive channel switching mechanism calculates the remaining idle time of each channel, and the channel with the largest remaining idle time is selected for switching. Random channel switching makes the selection in a random manner when interference occurs. In the optimal channel switching scheme, the channel that is free and offers the longest remaining idle time is selected for switching. LTE-LAA in 5 GHz can be regarded as a special case of OFDM-based CR systems. There are also several works focusing on LTE and LTE-Advanced networks along with CR. For example, in BIB002 , CR is applied to sense the spectrum by using the conventional method of energy detection. In , the authors focus on improving resource efficiency in LTE networks by considering CR device-to-device communication links. However, it seems the currently available mathematically optimal algorithms are not suitable for the implementation of LTE-LAA systems, due to potential iteration divergence and computation load BIB003 . We recommend that researchers focus on the design of CR techniques for LTE-LAA.
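The predictive channel switching idea described above can be sketched in a few lines of Python. The mean-based idle-time predictor, the channel numbers and the traffic histories below are illustrative assumptions, not part of any standard; real schemes would use richer traffic models.

```python
def predict_remaining_idle(idle_history_ms, elapsed_idle_ms):
    """Estimate a channel's remaining idle time from the mean of its past
    idle-period durations (a deliberately simple predictor)."""
    if not idle_history_ms:
        return 0.0
    mean_idle = sum(idle_history_ms) / len(idle_history_ms)
    return max(mean_idle - elapsed_idle_ms, 0.0)

def predictive_switch(channels, elapsed_idle):
    """Predictive channel switching: select the channel whose predicted
    remaining idle time is largest."""
    best_channel, best_time = None, -1.0
    for channel, history in channels.items():
        remaining = predict_remaining_idle(history, elapsed_idle[channel])
        if remaining > best_time:
            best_channel, best_time = channel, remaining
    return best_channel

# Hypothetical idle-period histories (ms) for three 5 GHz channels.
channels = {36: [8.0, 12.0], 40: [30.0, 34.0], 44: [2.0, 3.0]}
elapsed = {36: 4.0, 40: 10.0, 44: 1.0}
print(predictive_switch(channels, elapsed))  # → 40 (largest predicted idle time left)
```

Optimal channel switching differs only in restricting the choice to channels currently sensed free before ranking them by predicted idle time.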
A survey on intrinsic motivation in reinforcement learning <s> Introduction <s> A motor mount assembly is provided which is particularly adapted for employment in an electric powered motorcycle for mounting the motor thereof thereon. The motor mount assembly is operable for mounting the motor of the electric powered motorcycle so as to enable the motor to move relative to the frame of the motorcycle. The motor mount assembly includes a mount and means for pivotably securing the mount on the motorcycle. The mount which is generally rectangular in configuration embodies a planar portion from which a pair of side rails extend outwardly in spaced parallel relation therefrom at one end thereof. The planar portion is suitably dimensioned so as to be capable of receiving the motor thereon in fixed relation thereto. The free end of each of the side rails is affixed to one of the axles of the motorcycle whereby to be movable relative thereto. One of the side rails of the motor mount assembly intermediate the ends thereof is provided with a strap-like member having one end fastened thereto and which is operable as a retainer for the brake drum of the electric powered motorcycle. In addition, the aforereferenced one of the pair of side rails is also provided intermediate the ends thereof with an upstanding triangularly-shaped member having a screw threadedly engaged thereto adjacent the apex thereof. The latter triangularly-shaped member which has its base fastened to the side rail functions as a holder for one end of the brake cable with the screw being operable for adjustment purposes. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Introduction <s> Learning, planning, and representing knowledge at multiple levels of temporal abstraction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforcement learning and Markov decision processes (MDPs).
We extend the usual notion of action in this framework to include options—closed-loop policies for taking action over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as muscle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning framework in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic programming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: 1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, 2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and 3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state abstraction, hierarchy, function approximation, or the macro-utility problem.
<s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> Introduction <s> Humans and other animals often engage in activities for their own sakes rather than as steps toward solving practical problems. Psychologists call these intrinsically motivated behaviors. What we learn during intrinsically motivated behavior is essential for our development as competent autonomous entities able to efficiently solve a wide range of practical problems as they arise. In this paper we present initial results from a computational study of intrinsically motivated learning aimed at allowing artificial agents to construct and extend hierarchies of reusable skills that are needed for competent autonomy. At the core of the model are recent theoretical and algorithmic advances in computational reinforcement learning, specifically, new concepts related to skills and new learning algorithms for learning with skill hierarchies. <s> BIB003 </s> A survey on intrinsic motivation in reinforcement learning <s> Introduction <s> Neuroscientific and psychological data suggest a close link between affordance and mirror systems in the brain. However, we still lack a full understanding of both the individual systems and their interactions. Here, we propose that the architecture and functioning of the two systems is best understood in terms of two challenges faced by complex organisms, namely: (a) the need to select among multiple affordances and possible actions dependent on context and high-level goals and (b) the exploitation of the advantages deriving from a hierarchical organisation of behaviour based on actions and action-goals. We first review and analyse the psychological and neuroscientific literature on the mechanisms and processes organisms use to deal with these challenges. We then analyse existing computational models thereof. Finally we present the design of a computational framework that integrates the reviewed knowledge. 
The framework can be used both as a theoretical guidance to interpret empirical data and design new experiments, and to design computational models addressing specific problems debated in the literature. <s> BIB004 </s> A survey on intrinsic motivation in reinforcement learning <s> Introduction <s> Statistical spoken dialogue systems have the attractive property of being able to be optimised from data via interactions with real users. However in the reinforcement learning paradigm the dialogue manager (agent) often requires significant time to explore the state-action space to learn to behave in a desirable manner. This is a critical issue when the system is trained on-line with real users where learning costs are expensive. Reward shaping is one promising technique for addressing these concerns. Here we examine three recurrent neural network (RNN) approaches for providing reward shaping information in addition to the primary (task-orientated) environmental feedback. These RNNs are trained on returns from dialogues generated by a simulated user and attempt to diffuse the overall evaluation of the dialogue back down to the turn level to guide the agent towards good behaviour faster. In both simulated and real user scenarios these RNNs are shown to increase policy learning speed. Importantly, they do not require prior knowledge of the user's goal. <s> BIB005 </s> A survey on intrinsic motivation in reinforcement learning <s> Introduction <s> Infants' own activities create and actively select their learning experiences. Here we review recent models of embodied information seeking and curiosity-driven learning and show that these mechanisms have deep implications for development and evolution. We discuss how these mechanisms yield self-organized epigenesis with emergent ordered behavioral and cognitive developmental stages. 
We describe a robotic experiment that explored the hypothesis that progress in learning, in and for itself, generates intrinsic rewards: The robot learners probabilistically selected experiences according to their potential for reducing uncertainty. In these experiments, curiosity-driven learning led the robot learner to successively discover object affordances and vocal interaction with its peers. We explain how a learning curriculum adapted to the current constraints of the learning system automatically formed, constraining learning and shaping the developmental trajectory. The observed trajectories in the robot experiment share many properties with those in infant development, including a mixture of regularities and diversities in the developmental patterns. Finally, we argue that such emergent developmental structures can guide and constrain evolution, in particular with regard to the origins of language. <s> BIB006 </s> A survey on intrinsic motivation in reinforcement learning <s> Introduction <s> A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo's own move selections and also the winner of AlphaGo's games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. 
Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo. <s> BIB007 </s> A survey on intrinsic motivation in reinforcement learning <s> Introduction <s> Scaling end-to-end reinforcement learning to control real robots from vision presents a series of challenges, in particular in terms of sample efficiency. Against end-to-end learning, state representation learning can help learn a compact, efficient and relevant representation of states that speeds up policy learning, reducing the number of samples needed, and that is easier to interpret. We evaluate several state representation learning methods on goal based robotics tasks and propose a new unsupervised model that stacks representations and combines strengths of several of these approaches. This method encodes all the relevant features, performs on par or better than end-to-end learning, and is robust to hyper-parameters change. <s> BIB008 </s> A survey on intrinsic motivation in reinforcement learning <s> Introduction <s> Robots must know how to be gentle when they need to interact with fragile objects, or when the robot itself is prone to wear and tear. We propose an approach that enables deep reinforcement learning to train policies that are gentle, both during exploration and task execution. In a reward-based learning environment, a natural approach involves augmenting the (task) reward with a penalty for non-gentleness, which can be defined as excessive impact force. However, augmenting with only this penalty impairs learning: policies get stuck in a local optimum which avoids all contact with the environment. Prior research has shown that combining auxiliary tasks or intrinsic rewards can be beneficial for stabilizing and accelerating learning in sparse-reward domains, and indeed we find that introducing a surprise-based intrinsic reward does avoid the no-contact failure case. 
However, we show that a simple dynamics-based surprise is not as effective as penalty-based surprise. Penalty-based surprise, based on predicting forceful contacts, has a further benefit: it encourages exploration which is contact-rich yet gentle. We demonstrate the effectiveness of the approach using a complex, tendon-powered robot hand with tactile sensors. Videos are available at this http URL. <s> BIB009
In reinforcement learning (RL), an agent learns by trial and error to maximize the expected rewards gathered as a result of its actions performed in the environment BIB005 . Traditionally, to learn a task, an agent maximizes a reward defined according to the task to perform: it may be a score when the agent learns to solve a game, or a distance function when the agent learns to reach a goal. The reward is then considered extrinsic (or a feedback) because the reward function is provided by an expert specifically for the task. With an extrinsic reward, many spectacular results have been obtained on Atari games with the Deep Q-network (DQN) or on the game of Go with AlphaGo Zero BIB007 through the integration of deep learning into RL, leading to the name of deep reinforcement learning (DRL). However, these approaches turn out to be unsuccessful most of the time when the rewards are too sparse in the environment, as the agent is then unable to learn the desired behavior for the task . Moreover, the behaviors learned by the agent are hardly reusable, both within the same task and across many different tasks . It is difficult for an agent to generalize its skills so as to learn to take high-level (or abstract) decisions in the environment. For example, such an abstract decision could be to go to the door using action primitives (or low-level actions) consisting of moving in the four cardinal directions; or to move forward by controlling different joints of a humanoid robot, as in the robotic simulator MuJoCo BIB004 . Such abstract decisions are often called options BIB002 . Options have to be learned, but there are potentially an infinite number of options in real-world-like simulators and some are more complex than others. For example, a robot should learn to grasp an object before learning to put it into a box; it should also learn to reach the kitchen door before learning to reach the sink from the bedroom.
In fact, this is an exploration problem in the space of options rather than states (as described above); therefore, if the agent does not consider the order of tasks, its learning will take longer than if it took this order into consideration. This issue is currently studied by curriculum learning . In addition, it appears that classical DRL algorithms, as well as these unresolved issues, could strongly benefit from a good state representation BIB008 (see §4). Unlike RL, developmental learning BIB001 BIB006 is based on the tendency of babies, or more broadly organisms, to spontaneously explore their environment and acquire new skills . This is commonly called intrinsic motivation, which can be derived from an intrinsic reward. This kind of motivation makes it possible to autonomously gain new knowledge and skills , which then makes the learning of new tasks easier BIB003 . This paradigm offers a greater learning flexibility, through the use of a more general reward function, making it possible to tackle the issues raised above when only an extrinsic reward is used. Typically, we will see that intrinsic motivation improves the agent's ability to explore its environment, to incrementally learn skills (options) independently of its main task, to choose an adequate skill to improve, and even to create a representation of its state with meaningful properties. For several years now, intrinsic motivation has been increasingly used in RL, fostered by important results and the emergence of deep learning. In this article, we propose a study of the use of intrinsic motivation in the framework of deep reinforcement learning. More particularly, we address the following questions: • How to characterize intrinsic motivation? • How to integrate intrinsic motivation into the framework of RL? • What role does intrinsic motivation play towards the above mentioned challenges? • What are the actual limitations of the use of intrinsic motivation in RL, and the associated challenges?
Another contribution of this article is to provide a unified view of the state of the art based on information theory and compression theory. Moreover, we propose an in-depth analysis of intrinsic motivation in DRL, linking these methods to developmental learning. Specifically, we propose a general developmental architecture unifying all the approaches and highlighting the numerous perspectives in this domain. Our study is not meant to be exhaustive. It is rather a review of current ongoing research directions, their limitations and potential perspectives. The overall literature on intrinsic motivation is huge and our review only considers its application to deep reinforcement learning. We highlight how intrinsic motivation can improve over state-of-the-art DRL algorithms, scaling to high-dimensional state and action spaces. In addition, our review does not cover works on emotional intrinsic motivations, as a recent state of the art is available , nor some social intrinsic motivations which are specific to multi-agent RL BIB009 . This survey paper is organized as follows. As a first step, we introduce the basic concepts used in the rest of the paper, namely Markov decision processes, goal-parameterized RL, the bases of information theory, intrinsic motivation and empowerment (Section 2). In Section 3, we highlight the main current challenges of RL and identify their common source. This brings us to explain how to combine intrinsic motivation and RL. Then we detail the work integrating RL and intrinsic motivation, first studying articles relying on knowledge acquisition (Section 4) and second those based on skill construction (Section 5). Thereafter, we emphasize the actual challenges of these models (Section 6). Finally, we take a step back, analyze the aspects common to those methods and propose their integration into a developmental learning framework (Section 7).
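As a minimal illustration of the extrinsic/intrinsic distinction discussed in this introduction, the sketch below augments a sparse task reward with a prediction-error bonus. The quadratic error form and the coefficient beta are illustrative assumptions, one simple instance among the many intrinsic rewards surveyed later.

```python
def intrinsic_reward(predicted_next_state, actual_next_state):
    """Squared prediction error of a (hypothetical) learned forward model:
    large in poorly modelled, hence novel, regions of the state space."""
    return sum((p - a) ** 2
               for p, a in zip(predicted_next_state, actual_next_state))

def total_reward(extrinsic, predicted_next_state, actual_next_state, beta=0.1):
    """The agent maximizes the extrinsic task reward plus a weighted
    intrinsic bonus; beta trades task progress against exploration."""
    return extrinsic + beta * intrinsic_reward(predicted_next_state,
                                               actual_next_state)

# A transition whose outcome the model predicted poorly earns a bonus
# even when the (sparse) extrinsic reward is zero.
print(total_reward(0.0, predicted_next_state=[0.0, 0.0],
                   actual_next_state=[1.0, 2.0]))  # → 0.5
```

When the environment reward is zero almost everywhere, this bonus is what keeps the learning signal from vanishing, which is precisely the sparse-reward failure mode described above.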
A survey on intrinsic motivation in reinforcement learning <s> Goal-parameterized RL <s> Value functions are a core component of reinforcement learning systems. The main idea is to construct a single function approximator V (s; θ) that estimates the long-term reward from any state s, using parameters θ. In this paper we introduce universal value function approximators (UVFAs) V (s, g; θ) that generalise not just over states s but also over goals g. We develop an efficient technique for supervised learning of UVFAs, by factoring observed values into separate embedding vectors for state and goal, and then learning a mapping from s and g to these factored embedding vectors. We show how this technique may be incorporated into a reinforcement learning algorithm that updates the UVFA solely from observed rewards. Finally, we demonstrate that a UVFA can successfully generalise to previously unseen goals. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Goal-parameterized RL <s> Abstract Typical reinforcement learning (RL) can only perform a single task and thus cannot scale to problems for which an agent needs to perform multiple tasks, such as moving objects to different locations, which is relevant to real-world environments. Hindsight experience replay (HER) based on universal value functions shows promising results in such multi-goal settings by substituting achieved goals for the original goal, frequently giving the agent rewards. However, the achieved goals are limited to the current policy level and lack guidance for learning. We propose a novel guided goal-generation model for multi-goal RL named G-HER. Our method uses a conditional generative recurrent neural network (RNN) to explicitly model the relationship between policy level and goals, enabling the generation of various goals conditioned on the different policy levels.
Goals generated with a higher policy level provide better guidance for the RL agent, which is equivalent to using knowledge of successful policy in advance to guide the learning of current policy. Our model accelerates the generalization of substitute goals to the whole goal space. The G-HER algorithm is evaluated on several robotic manipulating tasks and demonstrates improved performance and sample efficiency. <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> Goal-parameterized RL <s> Goal-oriented reinforcement learning has recently been a practical framework for robotic manipulation tasks, in which an agent is required to reach a certain goal defined by a function on the state space. However, the sparsity of such reward definition makes traditional reinforcement learning algorithms very inefficient. Hindsight Experience Replay (HER), a recent advance, has greatly improved sample efficiency and practical applicability for such problems. It exploits previous replays by constructing imaginary goals in a simple heuristic way, acting like an implicit curriculum to alleviate the challenge of sparse reward signal. In this paper, we introduce Hindsight Goal Generation (HGG), a novel algorithmic framework that generates valuable hindsight goals which are easy for an agent to achieve in the short term and are also potential for guiding the agent to reach the actual goal in the long term. We have extensively evaluated our goal generation algorithm on a number of robotic manipulation tasks and demonstrated substantially improvement over the original HER in terms of sample efficiency. <s> BIB003
Usually, RL is used to solve only one task and is not suited to learning multiple tasks. Typically, an agent is unable to generalize across different variants of a task; for instance, if an agent learns to grasp a circular object, it will not be able to grasp a square object. One way to generalize DRL to multi-goal learning, or even to every available goal in the state space, is to use the universal value function approximator (UVFA) BIB001 . It should be noted that each state can serve as a target goal. Let us consider an agent moving in a closed maze where every position in the maze can be a goal. Assuming that there exists a vector space in which a goal has a representation, UVFA integrates, by concatenation, the goal representation with the observation of the agent. The resulting policy is then conditioned on the goal: π(s) becomes π(s, g) where g is a goal. This implies that if the goal space is well-constructed (as a state space for example), the agent can generalize its policy across the goal space. The same idea can be found in contextual policy search [Fabisch and Metzen, 2014] . When the goal space is exactly a continuous state space, it is difficult to determine whether a goal is reached or not, since two continuous values are never exactly equal. Hindsight experience replay (HER) tackles this issue by providing a way to learn on multiple objectives with only one interaction. With this method, the agent can use an interaction done to accomplish one goal to learn on another goal, by modifying the associated reward. Let us roll out an example: an agent does an action in the environment, resulting in an interaction (s, s', r_g, a, g) where r_g is the reward associated with the goal g. The agent can learn on this interaction, but can also use it to learn other goals; to do so, it can change the goal into a new goal g' and recompute the reward, resulting in a new interaction (s, s', r_g', a, g').
The only constraint for doing this is that the reward function R(s, a, s', g') be available. Typically, an agent may have a state as its goal, with a reward function equal to 1 if it is in that state and 0 otherwise. At every interaction, it can then substitute its current state for its true goal state and learn with a positive reward. Several strategies can be used to sample the new goals g' BIB002 BIB003 . Even though complex strategies can improve policy diversity and exploration through a curriculum, they are out of our scope since they do not rely on an intrinsic motivation.
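The relabelling mechanism just described can be sketched in a few lines. The transition layout (s, s', r, a, g) follows the text, with the sparse reward R(s, a, s', g) = 1 when s' equals the goal; substituting the achieved state for the goal is one simple sampling strategy among those cited, and the states and action names are illustrative.

```python
def reward(next_state, goal):
    """Sparse goal-reaching reward from the text: 1 if the achieved
    state equals the goal, 0 otherwise."""
    return 1.0 if next_state == goal else 0.0

def her_relabel(transition):
    """Hindsight relabelling: replace the original goal g by the achieved
    state s' and recompute the reward, so a failed attempt at g becomes a
    successful example for the goal that was actually reached."""
    s, s_next, _, action, _ = transition
    new_goal = s_next                     # the "achieved goal" strategy
    return (s, s_next, reward(s_next, new_goal), action, new_goal)

# The agent aimed at (3, 3) but only reached (1, 0): reward 0 for g.
original = ((0, 0), (1, 0), 0.0, "right", (3, 3))
print(her_relabel(original))  # → ((0, 0), (1, 0), 1.0, 'right', (1, 0))
```

One real interaction thus yields learning signal for two goals: the original one (with reward 0) and the achieved one (with reward 1).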
A survey on intrinsic motivation in reinforcement learning <s> Information theory <s> Neuroscientific and psychological data suggest a close link between affordance and mirror systems in the brain. However, we still lack a full understanding of both the individual systems and their interactions. Here, we propose that the architecture and functioning of the two systems is best understood in terms of two challenges faced by complex organisms, namely: (a) the need to select among multiple affordances and possible actions dependent on context and high-level goals and (b) the exploitation of the advantages deriving from a hierarchical organisation of behaviour based on actions and action-goals. We first review and analyse the psychological and neuroscientific literature on the mechanisms and processes organisms use to deal with these challenges. We then analyse existing computational models thereof. Finally we present the design of a computational framework that integrates the reviewed knowledge. The framework can be used both as a theoretical guidance to interpret empirical data and design new experiments, and to design computational models addressing specific problems debated in the literature. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Information theory <s> In this work, we take a representation learning perspective on hierarchical reinforcement learning, where the problem of learning lower layers in a hierarchy is transformed into the problem of learning trajectory-level generative models. We show that we can learn continuous latent representations of trajectories, which are effective in solving temporally extended and multi-stage problems. Our proposed model, SeCTAR, draws inspiration from variational autoencoders, and learns latent representations of trajectories. A key component of this method is to learn both a latent-conditioned policy and a latent-conditioned model which are consistent with each other. 
Given the same latent, the policy generates a trajectory which should match the trajectory predicted by the model. This model provides a built-in prediction mechanism, by predicting the outcome of closed loop policy behavior. We propose a novel algorithm for performing hierarchical RL with this model, combining model-based planning in the learned latent space with an unsupervised exploration objective. We show that our model is effective at reasoning over long horizons with sparse rewards for several simulated tasks, outperforming standard reinforcement learning methods and prior methods for hierarchical reasoning, model-based planning, and exploration. <s> BIB002
The Shannon entropy quantifies the mean information necessary to determine the value of a random variable. Let X be a random variable with a density p(X) satisfying the normalization and positivity requirements; we define its entropy by:

H(X) = - Σ_x p(x) log p(x).

In other words, it quantifies the disorder of a random variable. The entropy is maximal when X follows a uniform distribution, and minimal when p(X) is equal to zero everywhere except at one value, which is typically the case with a Dirac distribution. From this, we can also define the conditional entropy given a random variable S. It is similar to the classical entropy and quantifies the mean information necessary to find X knowing the value of another random variable S:

H(X|S) = - Σ_s p(s) Σ_x p(x|s) log p(x|s).

The mutual information quantifies the information contained in a random variable X about another random variable Y. It can also be viewed as the decrease of disorder brought by a random variable Y on a random variable X. The mutual information is defined by:

I(X; Y) = H(X) - H(X|Y).

We can notice that the mutual information between two independent variables is zero (since H(X|Y) = H(X)). Similarly to the conditional entropy, the conditional mutual information quantifies the information contained in a random variable about another random variable, knowing the value of a third one. It can be written in various ways:

I(Y; X|S) = H(Y|S) - H(Y|X, S)    (8)
          = H(X|S) - H(X|Y, S)    (9)
          = E_S[ D_KL( P(Y, X|S) || P(Y|S) P(X|S) ) ].    (10)

We can see with equations (8) and (9) that the mutual information is symmetric and that it characterizes the decrease in entropy on X brought by Y (or inversely). Equation (10) defines the conditional mutual information as the divergence between the distribution P(Y, X|S) and the same distribution if Y and X were independent variables (the case where H(Y|X, S) = H(Y|S)). For further information on these notions, the interested reader should refer to BIB001 BIB002 .
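These definitions are easy to check numerically. The short sketch below computes the mutual information of a discrete joint distribution through the equivalent identity I(X; Y) = H(X) + H(Y) − H(X, Y); the joint tables are illustrative.

```python
import math

def entropy(distribution):
    """Shannon entropy H = -sum p log2 p, in bits; zero-probability
    entries contribute nothing by convention."""
    return -sum(p * math.log2(p) for p in distribution if p > 0)

def mutual_information(joint):
    """I(X; Y) = H(X) + H(Y) - H(X, Y), computed from a joint table
    joint[x][y] = p(x, y); equivalent to H(X) - H(X|Y)."""
    p_x = [sum(row) for row in joint]
    p_y = [sum(col) for col in zip(*joint)]
    p_xy = [p for row in joint for p in row]
    return entropy(p_x) + entropy(p_y) - entropy(p_xy)

independent = [[0.25, 0.25], [0.25, 0.25]]  # X and Y independent
dependent = [[0.5, 0.0], [0.0, 0.5]]        # Y fully determined by X
print(mutual_information(independent))  # → 0.0 (independence: H(X|Y) = H(X))
print(mutual_information(dependent))    # → 1.0 bit (Y reveals X entirely)
```

The two extreme cases match the remarks in the text: independent variables carry no information about each other, while a fully dependent pair has I(X; Y) = H(X).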
A survey on intrinsic motivation in reinforcement learning <s> Empowerment <s> Is it better for you to own a corkscrew or not? If asked, you as a human being would likely say “yes”, but more importantly, you are somehow able to make this decision. You are able to decide this, even if your current acute problems or task do not include opening a wine bottle. Similarly, it is also unlikely that you evaluated several possible trajectories your life could take and looked at them with and without a corkscrew, and then measured your survival or reproductive fitness in each. When you, as a human cognitive agent, made this decision, you were likely relying on a behavioural “proxy”, an internal motivation that abstracts the problem of evaluating a decision impact on your overall life, but evaluating it in regard to some simple fitness function. One example would be the idea of curiosity, urging you to act so that your experience new sensations and learn about the environment. On average, this should lead to better and richer models of the world, which give you a better chance of reaching your ultimate goals of survival and reproduction. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Empowerment <s> Christoph Salge *, Cornelius Glackin and Daniel PolaniAdaptive Systems Research Group, University of Hertfordshire, College Lane,Hatfield AL10 9AB, UK; E-Mails: c.salge@herts.ac.uk; c.glackin2@herts.ac.uk; d.polani@herts.ac.uk* Author to whom correspondence should be addressed; E-Mail: c.salge@herts.ac.uk;Tel.:+44-1707-284490.Received: 28 February 2014; in revised form: 28 April 2014 / Accepted: 4 May 2014 /Published: 21 May 2014Abstract: One aspect of intelligence is the ability to restructure your own environment sothat the world you live in becomes more beneficial to you. In this paper we investigate howthe information-theoretic measure of agent empowerment can provide a task-independent,intrinsic motivation to restructure the world. 
We show how changes in embodiment and in the environment change the resulting behaviour of the agent and the artefacts left in the world. For this purpose, we introduce an approximation of the established empowerment formalism based on sparse sampling, which is simpler and significantly faster to compute for deterministic dynamics. Sparse sampling also introduces a degree of randomness into the decision making process, which turns out to be beneficial for some cases. We then utilize the measure to generate agent behaviour for different agent embodiments in a Minecraft-inspired three dimensional block world. The paradigmatic results demonstrate that empowerment can be used as a suitable generic intrinsic motivation to not only generate actions in given static environments, as shown in the past, but also to modify existing environmental conditions. In doing so, the emerging strategies to modify an agent's environment turn out to be meaningful to the specific agent capabilities, i.e., de facto to its embodiment. Keywords: empowerment; intrinsic motivation; information theory <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> Empowerment <s> Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. The primary difficulty arises due to insufficient exploration, resulting in an agent being unable to learn robust value functions. Intrinsically motivated agents can explore new behavior for its own sake rather than to directly solve problems. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning. A top-level value function learns a policy over intrinsic goals, and a lower-level function learns a policy over atomic actions to satisfy the given goals.
h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse, delayed feedback: (1) a complex discrete stochastic decision process, and (2) the classic ATARI game `Montezuma's Revenge'. <s> BIB003
Empowerment has been developed to answer the following question: is there a local utility function which makes the survival of an organism possible BIB003 BIB001 ? This hypothetical function should be local in the sense that it does not modify the organism's behavior over the very long term (death itself does not impact this function), and the induced behaviors have to help the survival of the species. Typically, this function can explain an animal's will to dominate its pack, and more generally, a human's wish to acquire social status, to earn more money or to be stronger, the need to maintain a high blood sugar level, or the fear of being hurt BIB003 BIB002 . Each of these motivations widens the agent's possibilities of action, and thereby its influence: a rich person is able to do more things than a poor one. These motivations are local, in the sense that the reward is almost immediate. BIB003 named this ability to control the environment the empowerment of an agent. Empowerment is usually defined with information theory. BIB003 interpret the interaction loop as the sending of information into the environment: an action is a signal being sent, while the observation is a received signal. The more informative the actions are about the next observations, the greater the empowerment. Empowerment is measured as the capacity of a channel linking the actions and observations of the agent. Let a^n_t = (a_t, a_{t+1}, ..., a_{t+n}) be the actions executed by the agent from time t to t+n, and s_{t+n} the state of the environment at time step t+n. The empowerment of state s_t, noted Σ(s_t), is then defined as: Σ(s_t) = max_{p(a^n_t)} I(a^n_t; s_{t+n} | s_t) = max_{p(a^n_t)} [H(a^n_t | s_t) − H(a^n_t | s_{t+n}, s_t)]. (11) Maximizing the empowerment amounts to looking for the state in which the agent has the most control over the environment. Typically, the second term of equation 11 pushes the agent to be sure of where it is going, whereas the first term emphasizes the diversity of reachable states.
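To make the empowerment definition of equation 11 concrete, here is a minimal sketch in a toy deterministic grid world (the environment, grid size, and horizon are our own illustrative choices, not taken from the surveyed papers). For deterministic dynamics, the channel capacity reduces to the logarithm of the number of distinct states reachable by n-step action sequences:

```python
from itertools import product
from math import log2

# Toy 5x5 grid world with deterministic moves, clipped to the grid.
# This is a hypothetical example, not a benchmark from the survey.
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def step(state, action):
    x, y = state
    dx, dy = action
    return (min(4, max(0, x + dx)), min(4, max(0, y + dy)))

def empowerment(state, horizon):
    # For deterministic dynamics, max_p I(a^n_t; s_{t+n} | s_t) equals
    # log2 of the number of distinct states reachable by n-step action
    # sequences (each reachable state can receive probability mass
    # through a distinct sequence).
    reachable = set()
    for seq in product(ACTIONS, repeat=horizon):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return log2(len(reachable))

# An agent in the center controls more of its near future than one in a corner.
print(empowerment((2, 2), 2))  # ~3.17 bits (9 reachable states)
print(empowerment((0, 0), 2))  # ~2.58 bits (6 reachable states)
```

As expected, a centrally placed agent is more empowered than one stuck in a corner, since more distinct futures remain under its control.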
For a broad overview of the different ways to compute empowerment, the reader can refer to BIB001 . Hereafter in this article, we will focus on the application of empowerment in the context of RL; therefore we will not detail work using empowerment outside of the RL context (see e.g. Guckelsberger et al. (2016), BIB001 ). As presented in Section 2.5, an agent maximizing empowerment tries to have the most control over the environment. Thus in RL, to maximize empowerment, the agent is rewarded if it is heading towards areas where it controls its environment. The intrinsic reward function is then defined as: R_int(s_t) = I(a^n_t; s_{t+n} | s_t) = H(a^n_t | s_t) − H(a^n_t | s_{t+n}, s_t), with a^n_t ∼ ω(a|s), (18) where ω(a|s) is the distribution choosing actions a^n_t. Ideally, ω(a|s) is the distribution maximizing equation 18, in accordance with equation 11. The problem is that p(a|s, s') is hard to obtain because it requires p(s'|a, s), which is intractable. Mohamed and Rezende (2015) propose to compute empowerment by approximating equation 18. To do this, they compute a lower bound on the mutual information, used in many other works on empowerment: I(a; s'|s) ≥ H(a|s) + E_{p(s'|a,s)ω(a|s)} log q_ξ(a|s, s'). (19) The idea is to learn an approximator q_ξ of the probability distribution p(a|s, s') in a supervised way, with a maximum-likelihood method, using the data that the agent receives from the environment. This approach makes it possible to generalize the computation of empowerment to continuous observations. In this work, experiments show that the maximization of empowerment is particularly useful in dynamic environments, i.e. environments where the agent's state can change even if the executed action is stationary (e.g. the agent does not move). The classic example provided in Mohamed and Rezende (2015) is the prey-predator environment: the prey is the learner and tries to avoid being caught, as its death would cause a loss of control over the next states. Implicitly, the prey avoids dying by maximizing its empowerment.
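The lower bound of equation 19 can be illustrated with a small tabular sketch. The chain environment, the uniform choice of ω, and the count-based estimate standing in for the learned approximator q_ξ are all our own simplifications, not the setup of Mohamed and Rezende (2015):

```python
from collections import Counter, defaultdict
from math import log2
import random

# Toy chain with states 0..4. State 0 is a "trap": every action leads
# back to 0 (total loss of control, like the caught prey); elsewhere
# actions -1/+1 move the agent deterministically. Names are illustrative.
def step(s, a):
    return 0 if s == 0 else min(4, max(0, s + a))

ACTIONS = (-1, +1)
random.seed(0)

# Collect transitions under a uniform action distribution omega(a|s).
counts = defaultdict(Counter)            # (s, s') -> Counter over actions
for _ in range(5000):
    s = random.randint(0, 4)
    a = random.choice(ACTIONS)
    counts[(s, step(s, a))][a] += 1

def q(a, s, s_next):
    # Count-based maximum-likelihood estimate of p(a | s, s'),
    # standing in for the learned approximator q_xi.
    c = counts[(s, s_next)]
    return c[a] / sum(c.values())

def bound(s):
    # Lower bound of equation 19 with omega uniform (so H(a|s) = 1 bit)
    # and the expectation taken exactly over the two actions.
    return 1.0 + sum(0.5 * log2(q(a, s, step(s, a))) for a in ACTIONS)

print(bound(2))  # ~1 bit: the outcome fully identifies the action
print(bound(0))  # ~0 bits: in the trap, the outcome reveals nothing
```

The interior state keeps its full bit of control, while the trap, where both actions collapse to the same outcome, yields no intrinsic reward, mirroring the prey-predator intuition above.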
In contrast to a dynamic environment, a static environment has a static optimal policy (the agent stops moving once it finds the best state), making empowerment less interesting as an intrinsic reward with respect to a task. However, the experiments proposed in Mohamed and Rezende (2015) use planning methods to estimate empowerment rather than interactions with the environment to collect data, which implies the use of a forward model. [et al., 2016] tries to maximize empowerment through interactions with the environment, using ω(a|s) = π(a|s). The intrinsic reward then becomes:
A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> Learning, planning, and representing knowledge at multiple levels of temporal abstraction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforcement learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options—closed-loop policies for taking action over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as muscle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning framework in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic programming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: 1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, 2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and 3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework.
In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state abstraction, hierarchy, function approximation, or the macro-utility problem. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> A locality sensitive hashing scheme is a distribution on a family F of hash functions operating on a collection of objects, such that for two objects x, y, Pr_{h∈F}[h(x) = h(y)] = sim(x,y), where sim(x,y) ∈ [0,1] is some similarity function defined on the collection of objects. Such a scheme leads to a compact representation of objects so that similarity of objects can be estimated from their compact sketches, and also leads to efficient algorithms for approximate nearest neighbor search and clustering. Min-wise independent permutations provide an elegant construction of such a locality sensitive hashing scheme for a collection of subsets with the set similarity measure sim(A,B) = |A ∩ B| / |A ∪ B|; a related scheme for distributions on points, with distance measured by the Earth Mover Distance (EMD), satisfies E_{h∈F}[d(h(P), h(Q))] ≤ O(log n log log n) · EMD(P, Q). <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> We consider the problem of multi-task reinforcement learning, where the agent needs to solve a sequence of Markov Decision Processes (MDPs) chosen randomly from a fixed but unknown distribution. We model the distribution over MDPs using a hierarchical Bayesian infinite mixture model. For each novel MDP, we use the previously learned distribution as an informed prior for model-based Bayesian reinforcement learning. The hierarchical Bayesian framework provides a strong prior that allows us to rapidly infer the characteristics of new environments based on previous environments, while the use of a nonparametric model allows us to quickly adapt to environments we have not encountered before. In addition, the use of infinite mixtures allows for the model to automatically learn the number of underlying MDP components.
We evaluate our approach and show that it leads to significant speedups in convergence to an optimal policy after observing only a small number of tasks. <s> BIB003 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> Statistical spoken dialogue systems have the attractive property of being able to be optimised from data via interactions with real users. However in the reinforcement learning paradigm the dialogue manager (agent) often requires significant time to explore the state-action space to learn to behave in a desirable manner. This is a critical issue when the system is trained on-line with real users where learning costs are expensive. Reward shaping is one promising technique for addressing these concerns. Here we examine three recurrent neural network (RNN) approaches for providing reward shaping information in addition to the primary (task-orientated) environmental feedback. These RNNs are trained on returns from dialogues generated by a simulated user and attempt to diffuse the overall evaluation of the dialogue back down to the turn level to guide the agent towards good behaviour faster. In both simulated and real user scenarios these RNNs are shown to increase policy learning speed. Importantly, they do not require prior knowledge of the user's goal. <s> BIB004 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> This article reviews an emerging field that aims for autonomous reinforcement learning (RL) directly on sensor-observations. Straightforward end-to-end RL has recently shown remarkable success, but relies on large amounts of samples. As this is not feasible in robotics, we review two approaches to learn intermediate state representations from previous experiences: deep auto-encoders and slow-feature analysis. We analyze theoretical properties of the representations and point to potential improvements. 
<s> BIB005 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> Robot learning is critically enabled by the availability of appropriate state representations. We propose a robotics-specific approach to learning such state representations. As robots accomplish tasks by interacting with the physical world, we can facilitate representation learning by considering the structure imposed by physics; this structure is reflected in the changes that occur in the world and in the way a robot can effect them. By exploiting this structure in learning, robots can obtain state representations consistent with the aspects of physics relevant to the learning task. We name this prior knowledge about the structure of interactions with the physical world robotic priors. We identify five robotic priors and explain how they can be used to learn pertinent state representations. We demonstrate the effectiveness of this approach in simulated and real robotic experiments with distracting moving objects. We show that our method extracts task-relevant state representations from high-dimensional observations, even in the presence of task-irrelevant distractions. We also show that the state representations learned by our method greatly improve generalization in reinforcement learning. <s> BIB006 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> Disclosed are methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an actor neural network used to select actions to be performed by an agent interacting with an environment. 
One of the methods includes obtaining a minibatch of experience tuples; and updating current values of the parameters of the actor neural network, comprising: for each experience tuple in the minibatch: processing the training observation and the training action in the experience tuple using a critic neural network to determine a neural network output for the experience tuple, and determining a target neural network output for the experience tuple; updating current values of the parameters of the critic neural network using errors between the target neural network outputs and the neural network outputs; and updating the current values of the parameters of the actor neural network using the critic neural network. <s> BIB007 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> Reinforcement learning optimizes policies for expected cumulative reward. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, making it a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. While current results show that learning from reward alone is feasible, pure reinforcement learning methods are constrained by computational and data efficiency issues that can be remedied by auxiliary losses. Self-supervised pre-training and joint optimization improve the data efficiency and policy returns of end-to-end reinforcement learning. <s> BIB008 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. 
In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880% expert human performance, and a challenging suite of first-person, three-dimensional Labyrinth tasks leading to a mean speedup in learning of 10× and averaging 87% expert human performance on Labyrinth. <s> BIB009 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> Temporal abstraction is key to scaling up learning and planning in reinforcement learning. While planning with temporally extended actions is well understood, creating such abstractions autonomously from data has remained challenging. We tackle this problem in the framework of options [Sutton, Precup & Singh, 1999; Precup, 2000]. We derive policy gradient theorems for options and propose a new option-critic architecture capable of learning both the internal policies and the termination conditions of options, in tandem with the policy over options, and without the need to provide any additional rewards or subgoals. Experimental results in both discrete and continuous environments showcase the flexibility and efficiency of the framework. <s> BIB010 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> The recent advances in deep neural networks have led to effective vision-based reinforcement learning methods that have been employed to obtain human-level controllers in Atari 2600 games from pixel data.
Atari 2600 games, however, do not resemble real-world tasks since they involve non-realistic 2D environments and the third-person perspective. Here, we propose a novel test-bed platform for reinforcement learning research from raw visual information which employs the first-person perspective in a semi-realistic 3D world. The software, called ViZDoom, is based on the classical first-person shooter video game, Doom. It allows developing bots that play the game using the screen buffer. ViZDoom is lightweight, fast, and highly customizable via a convenient mechanism of user scenarios. In the experimental part, we test the environment by trying to learn bots for two scenarios: a basic move-and-shoot task and a more complex maze-navigation problem. Using convolutional deep neural networks with Q-learning and experience replay, for both scenarios, we were able to train competent bots, which exhibit human-like behaviors. The results confirm the utility of ViZDoom as an AI research platform and imply that visual reinforcement learning in 3D realistic first-person perspective environments is feasible. <s> BIB011 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> We propose position-velocity encoders (PVEs) which learn---without supervision---to encode images to positions and velocities of task-relevant objects. PVEs encode a single image into a low-dimensional position state and compute the velocity state from finite differences in position. In contrast to autoencoders, position-velocity encoders are not trained by image reconstruction, but by making the position-velocity representation consistent with priors about interacting with the physical world. We applied PVEs to several simulated control tasks from pixels and achieved promising preliminary results. <s> BIB012 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> Reinforcement learning is a powerful technique to train an agent to perform a task. 
However, an agent that is trained using reinforcement learning is only capable of achieving the single task that is specified via its reward function. Such an approach does not scale well to settings in which an agent needs to perform a diverse set of tasks, such as navigating to varying positions in a room or moving objects to varying locations. Instead, we propose a method that allows an agent to automatically discover the range of tasks that it is capable of performing. We use a generator network to propose tasks for the agent to try to achieve, specified as goal states. The generator network is optimized using adversarial training to produce tasks that are always at the appropriate level of difficulty for the agent. Our method thus automatically produces a curriculum of tasks for the agent to learn. We show that, by using this framework, an agent can efficiently and automatically learn to perform a wide set of tasks without requiring any prior knowledge of its environment. Our method can also learn to achieve tasks with sparse rewards, which traditionally pose significant challenges. <s> BIB013 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> In recent years, research has shown that transfer learning methods can be leveraged to construct curricula that sequence a series of simpler tasks such that performance on a final target task is improved. A major limitation of existing approaches is that such curricula are handcrafted by humans that are typically domain experts. To address this limitation, we introduce a method to generate a curriculum based on task descriptors and a novel metric of transfer potential. Our method automatically generates a curriculum as a directed acyclic graph (as opposed to a linear sequence as done in existing work). 
Experiments in both discrete and continuous domains show that our method produces curricula that improve the agent’s learning performance when compared to the baseline condition of learning on the target task from scratch. <s> BIB014 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> Most deep reinforcement learning techniques are unsuitable for robotics, as they require too much interaction time to learn useful, general control policies. This problem can be largely attributed to the fact that a state representation needs to be learned as a part of learning control policies, which can only be done through fitting expected returns based on observed rewards. While the reward function provides information on the desirability of the state of the world, it does not necessarily provide information on how to distill a good, general representation of that state from the sensory observations. State representation learning objectives can be used to help learn such a representation. While many of these objectives have been proposed, they are typically not directly combined with reinforcement learning algorithms. We investigate several methods for integrating state representation learning into reinforcement learning. In these methods, the state representation learning objectives help regularize the state representation during the reinforcement learning, and the reinforcement learning itself is viewed as a crucial state representation learning objective and allowed to help shape the representation. Using autonomous racing tests in the TORCS simulator, we show how the integrated methods quickly learn policies that generalize to new environments much better than deep reinforcement learning without state representation learning. <s> BIB015 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> We learn end-to-end point-to-point and path-following navigation behaviors that avoid moving obstacles. 
These policies receive noisy lidar observations and output robot linear and angular velocities. The policies are trained in small, static environments with AutoRL, an evolutionary automation layer around reinforcement learning (RL) that searches for a deep RL reward and neural network architecture with large-scale hyper-parameter optimization. AutoRL first finds a reward that maximizes task completion and then finds a neural network architecture that maximizes the cumulative of the found reward. Empirical evaluations, both in simulation and on-robot, show that AutoRL policies do not suffer from the catastrophic forgetfulness that plagues many other deep reinforcement learning algorithms, generalize to new environments and moving obstacles, are robust to sensor, actuator, and localization noise, and can serve as robust building blocks for larger navigation tasks. Our path-following and point-to-point policies are, respectively, 23% and 26% more successful than comparison methods across new environments. <s> BIB016 </s> A survey on intrinsic motivation in reinforcement learning <s> RL problematic <s> Scaling end-to-end reinforcement learning to control real robots from vision presents a series of challenges, in particular in terms of sample efficiency. Against end-to-end learning, state representation learning can help learn a compact, efficient and relevant representation of states that speeds up policy learning, reducing the number of samples needed, and that is easier to interpret. We evaluate several state representation learning methods on goal based robotics tasks and propose a new unsupervised model that stacks representations and combines strengths of several of these approaches. This method encodes all the relevant features, performs on par or better than end-to-end learning, and is robust to hyper-parameters change. <s> BIB017
We identified four challenges in DRL where intrinsic motivation provides a suitable solution. In this section, we explain the importance of these challenges and the current limitations of DRL. Sparse rewards. Classic RL algorithms operate in environments where the rewards are dense, i.e. the agent receives a reward after almost every completed action. In this kind of environment, naive exploration policies such as ε-greedy BIB004 or the addition of Gaussian noise on the action BIB007 are effective. More elaborate methods can also be used to promote exploration, such as Boltzmann exploration BIB002 , exploration in the parameter space Rückstiess et al., 2010; or Bayesian RL . In environments with sparse rewards, the agent receives a reward signal only after it has executed a long sequence of specific actions. The game Montezuma's Revenge is a benchmark illustrating a typical sparse reward function. In this game, an agent has to move between different rooms while picking up objects (keys to open doors, torches, ...). The agent receives a reward only when it finds objects or when it reaches the exit of the room. Such environments with sparse rewards are almost impossible to solve with the above-mentioned exploration policies, since the agent has no local indication of how to improve its policy. Thus the agent never finds rewards and cannot learn a good policy with respect to the task . Rather than working on an exploration policy, it is common to shape an intermediate dense reward function, added to the reward associated with the task, in order to make the learning process easier for the agent BIB004 . However, building a reward function often reveals several unexpected errors and most of the time requires expert knowledge. For example, it may be difficult to shape a local reward for navigation tasks.
Indeed, you would have to be able to compute the shortest path between the agent and its goal, which amounts to solving the navigation problem itself. On the other hand, automating the shaping of the local reward (without calling on an expert) requires prohibitive computational resources BIB016 . Building a good state representation. What is a good state representation? BIB005 argue that, in standard RL, this representation must be Markovian, able to represent the true value of the policy, generalize well, and be low-dimensional. Using an adapted feature space to learn a task can considerably accelerate the learning process BIB017 BIB015 and may even help to learn a forward model. The best way to do this may be to construct a minimal feature space with independent features, e.g. one feature per object [Lesort et al., 2018] . To better understand the importance of a relevant state representation in RL, let us consider a simple navigation task where the agent has to reach a target area. If the agent accesses pixel inputs in a top view, it will have to extract its own position and the target position through complex non-linear transformations to understand which direction to take. By contrast, if it already has access to its position, it only has to check whether its vertical and horizontal positions are greater than, equal to, or smaller than those of the target. In standard RL, this problem is exacerbated, firstly because the only available learning signal is the back-propagation of the reward, and secondly by the presence of noise in the raw state. As a result, if the reward is sparse, the agent will not learn anything from its interactions, even though the interactions themselves are rich in information. Furthermore, the state representation fully depends on the task and cannot be generalized to other tasks. Several lines of work concern the learning of a relevant state representation.
Auxiliary losses can complement the reward with supervised learning losses; they rely on information such as the immediate reward or other predefined functions BIB008 BIB009 . The agent may also use some prior knowledge on transitions BIB006 BIB012 or learn inverse models . There is a large literature on the best way to quickly build this kind of state space; we invite the interested reader to look at [Lesort et al., 2018] for a general review. However, it is still difficult to get a fully disentangled representation of controllable objects, since it requires interactions with the environment which do not always depend on the task. Temporal abstraction of actions. Temporal abstraction of actions consists in using high-level actions, also called options, which can have different execution times BIB001 . Each option is associated with an intra-option policy which defines the action (low-level actions or other options) to execute in each state while the option is active. The length of an option, i.e. the number of actions executed when the option is chosen, is often fixed. An inter-option policy can be in charge of choosing which options to carry out. Abstract actions are a key element to accelerate the learning process, since the number of decisions to take is significantly reduced when options are used. They also ease the credit assignment problem BIB004 . This problem refers to the fact that rewards can occur with a temporal delay and will only very weakly affect the temporally distant states that preceded them, although these states may be important for obtaining the reward. Indeed, the agent must propagate the reward along the entire sequence of actions (through equation 4) to reinforce the first state-action tuple involved. This process can be very slow when the action sequence is long. This problem also concerns determining which actions are decisive for getting the reward.
For example, let us assume that a robot is trying to reach a cake on a table far from the robot. If the robot has an option get to the table and follows it, it then only has to take the cake to be rewarded. It is then easy to associate the acquisition of the cake (the reward) with the option get to the table. In contrast, if the robot has to learn to handle each of its joints (low-level or primitive actions), it will be difficult to determine which action, among all executed actions, is responsible for the acquisition of the cake. Furthermore, using options can make exploration easier when rewards are sparse. To illustrate this, let us assume that the agent has access to the option get the key in Montezuma's Revenge. The problem becomes trivial, since a single exploratory action can lead to the reward, whereas without options it would require an entire sequence of specific low-level actions. This problem arises from the minimal number of actions needed to get a reward. Regarding the intra-option policy, it can be manually defined BIB001 , but this requires some extra expert knowledge. It can also be learnt with the reward function BIB010 , but then the options are not reusable for other tasks and are of no help for the exploration problem. Building a curriculum. Curriculum learning commonly takes place in the framework of multi-task reinforcement learning BIB003 , where one agent tries to solve several tasks. It is about defining a schedule in the learning process. It comes from the observation that learning is much easier when examples or tasks are organized in a meaningful order . Typically, a curriculum could organize tasks in such a way that they are increasingly complex and fit into each other. For example, a helpful curriculum may be to first teach a robot how to grasp a cube and only then how to move the cube; this way, the robot can take advantage of its ability to grasp a cube in order to move it.
Without any prior knowledge, a robot would probably never succeed in grasping and moving a cube, since it requires a long sequence of actions (if the robot controls its joints directly). Standard methods rely on pre-specified task sequences as a curriculum BIB011 , or on an expert score which acts as a baseline [Sharma and Ravindran, 2017] . Some other methods require strong assumptions BIB013 , or rely on task decomposition or on the availability of source tasks BIB014 . It appears that, most of the time, curriculum learning in standard methods requires an expert in one way or another. Summary. In summary, several issues in RL are entirely or partially unsolved:
- Exploration: the agent never reaches a reward signal when rewards are sparse.
- State representation: the agent does not manage to learn a representation of its observations with independent features or meaningful distance metrics.
- Building options: the agent is unable to learn abstract high-level decisions independently of the task.
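As a closing illustration of the temporal-abstraction point above, the following sketch shows a hand-defined option in the sense of BIB001; the corridor environment and the option itself are illustrative choices of ours. An intra-option policy runs until its termination condition fires, so a single high-level decision replaces a long sequence of primitive ones:

```python
# Minimal sketch of a temporally extended action (option): a hand-defined
# intra-option policy with a termination condition, so the inter-option
# policy takes far fewer decisions than a primitive-action policy would.
# The corridor and the option are hypothetical examples, not from the survey.

def step(state, action):
    # Corridor of cells 0..9; primitive actions move -1/+1, clipped.
    return min(9, max(0, state + action))

def option_go_right(state):
    # Intra-option policy: always move +1.
    # Termination condition: the right wall (cell 9) is reached.
    trajectory = [state]
    while state < 9:
        state = step(state, +1)
        trajectory.append(state)
    return state, trajectory

final, traj = option_go_right(0)
# One high-level decision replaces len(traj) - 1 primitive decisions,
# easing both exploration and credit assignment.
print(final, len(traj) - 1)  # 9 9
```

Here the inter-option policy only has to pick the option once, whereas a primitive-action agent would have to make (and be credited for) nine separate decisions.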
A survey on intrinsic motivation in reinforcement learning <s> A new model of RL with intrinsic rewards <s> Humans and other animals often engage in activities for their own sakes rather than as steps toward solving practical problems. Psychologists call these intrinsically motivated behaviors. What we learn during intrinsically motivated behavior is essential for our development as competent autonomous entities able to efficiently solve a wide range of practical problems as they arise. In this paper we present initial results from a computational study of intrinsically motivated learning aimed at allowing artificial agents to construct and extend hierarchies of reusable skills that are needed for competent autonomy. At the core of the model are recent theoretical and algorithmic advances in computational reinforcement learning, specifically, new concepts related to skills and new learning algorithms for learning with skill hierarchies. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> A new model of RL with intrinsic rewards <s> This paper establishes a link between the challenge of solving highly ambitious problems in machine learning and the goal of reproducing the dynamics of open-ended evolution in artificial life. A major problem with the objective function in machine learning is that through deception it may actually prevent the objective from being reached. In a similar way, selection in evolution may sometimes act to discourage increasing complexity. This paper proposes a single idea that both overcomes the obstacle of deception and suggests a simple new approach to open-ended evolution: Instead of either explicitly seeking an objective or modeling a domain to capture the open-endedness of natural evolution, the idea is to simply search for novelty. Even in an objective-based problem, such novelty search ignores the objective and searches for behavioral novelty. 
Yet because many points in the search space collapse to the same point in behavior space, it turns out that the search for novelty is computationally feasible. Furthermore, because there are only so many simple behaviors, the search for novelty leads to increasing complexity. In fact, on the way up the ladder of complexity, the search is likely to encounter at least one solution. In this way, by decoupling the idea of open-ended search from only artificial life worlds, the raw search for novelty can be applied to real world problems. Counterintuitively, in the deceptive maze navigation task in this paper, novelty search significantly outperforms objective-based search, suggesting a surprising new approach to machine learning. <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> A new model of RL with intrinsic rewards <s> There is great interest in building intrinsic motivation into artificial systems using the reinforcement learning framework. Yet, what intrinsic motivation may mean computationally, and how it may differ from extrinsic motivation, remains a murky and controversial subject. In this paper, we adopt an evolutionary perspective and define a new optimal reward framework that captures the pressure to design good primary reward functions that lead to evolutionary success across environments. The results of two computational experiments show that optimal primary reward signals may yield both emergent intrinsic and extrinsic motivation. The evolutionary perspective and the associated optimal reward framework thus lead to the conclusion that there are no hard and fast features distinguishing intrinsic and extrinsic reward computationally. Rather, the directness of the relationship between rewarding behavior and evolutionary success varies along a continuum. 
<s> BIB003 </s> A survey on intrinsic motivation in reinforcement learning <s> A new model of RL with intrinsic rewards <s> Statistical spoken dialogue systems have the attractive property of being able to be optimised from data via interactions with real users. However in the reinforcement learning paradigm the dialogue manager (agent) often requires significant time to explore the state-action space to learn to behave in a desirable manner. This is a critical issue when the system is trained on-line with real users where learning costs are expensive. Reward shaping is one promising technique for addressing these concerns. Here we examine three recurrent neural network (RNN) approaches for providing reward shaping information in addition to the primary (task-orientated) environmental feedback. These RNNs are trained on returns from dialogues generated by a simulated user and attempt to diffuse the overall evaluation of the dialogue back down to the turn level to guide the agent towards good behaviour faster. In both simulated and real user scenarios these RNNs are shown to increase policy learning speed. Importantly, they do not require prior knowledge of the user's goal. <s> BIB004 </s> A survey on intrinsic motivation in reinforcement learning <s> A new model of RL with intrinsic rewards <s> Non-player characters (NPCs) in games are traditionally hard-coded or dependent on pre-specified goals, and consequently struggle to behave sensibly in ever-changing and possibly unpredictable game worlds. To make them fit for new developments in procedural content generation, we introduce the principle of Coupled Empowerment Maximisation as an intrinsic motivation for game NPCs. We focus on the development of a general game companion, designed to support the player in achieving their goals. We evaluate our approach against three intuitive and abstract companion duties. 
We develop dedicated scenarios for each duty in a dungeon-crawler game testbed, and provide qualitative evidence that the emergent NPC behaviour fulfils these duties. We argue that this generic approach can speed up NPC AI development, improve automatic game evolution and introduce NPCs to full game-generation systems. <s> BIB005 </s> A survey on intrinsic motivation in reinforcement learning <s> A new model of RL with intrinsic rewards <s> We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level. <s> BIB006 </s> A survey on intrinsic motivation in reinforcement learning <s> A new model of RL with intrinsic rewards <s> Goal-conditioned policies are used in order to break down complex reinforcement learning (RL) problems by using subgoals, which can be defined either in state space or in a latent feature space. This can increase the efficiency of learning by using a curriculum, and also enables simultaneous learning and generalization across goals. A crucial requirement of goal-conditioned policies is to be able to determine whether the goal has been achieved. 
Having a notion of distance to a goal is thus a crucial component of this approach. However, it is not straightforward to come up with an appropriate distance, and in some tasks, the goal space may not even be known a priori. In this work we learn a distance-to-goal estimate which is computed in terms of the number of actions that would need to be carried out in a self-supervised approach. Our method solves complex tasks without prior domain knowledge in the online setting in three different scenarios in the context of goal-conditioned policies a) the goal space is the same as the state space b) the goal space is given but an appropriate distance is unknown and c) the state space is accessible, but only a subset of the state space represents desired goals, and this subset is known a priori. We also propose a goal-generation mechanism as a secondary contribution. <s> BIB007 </s> A survey on intrinsic motivation in reinforcement learning <s> A new model of RL with intrinsic rewards <s> Robots must know how to be gentle when they need to interact with fragile objects, or when the robot itself is prone to wear and tear. We propose an approach that enables deep reinforcement learning to train policies that are gentle, both during exploration and task execution. In a reward-based learning environment, a natural approach involves augmenting the (task) reward with a penalty for non-gentleness, which can be defined as excessive impact force. However, augmenting with only this penalty impairs learning: policies get stuck in a local optimum which avoids all contact with the environment. Prior research has shown that combining auxiliary tasks or intrinsic rewards can be beneficial for stabilizing and accelerating learning in sparse-reward domains, and indeed we find that introducing a surprise-based intrinsic reward does avoid the no-contact failure case. However, we show that a simple dynamics-based surprise is not as effective as penalty-based surprise. 
Penalty-based surprise, based on predicting forceful contacts, has a further benefit: it encourages exploration which is contact-rich yet gentle. We demonstrate the effectiveness of the approach using a complex, tendon-powered robot hand with tactile sensors. Videos are available at this http URL. <s> BIB008
Reinforcement learning is derived from behaviorism and uses extrinsic rewards BIB004 . However, BIB003 and BIB001 reformulated the RL framework to incorporate intrinsic motivation. Rather than considering the MDP environment as the environment in which the agent must realize its task, they suggest that the MDP environment can be formed of two parts: the external part corresponds to the task environment of the agent; the internal part is internal to the agent and computes the MDP states and the total reward signal through previous interactions. Consequently, we can consider an intrinsic reward as a reward received from the MDP environment. The MDP state is no longer the external state but an internal state of the agent; this contradicts what was previously thought to be a limitation of RL . Figure 1 summarizes the new framework: the critic is the internal part which computes the intrinsic reward and deals with the credit assignment. The state includes sensations and potentially the history of the agent's interactions. The decision can be a high-level decision translated into low-level actions. According to BIB003 , evolution provides a general intrinsic reward function which maximizes a fitness function. We think that such intrinsic motivation can be a meta-skill facilitating the learning of other behaviors. Curiosity, for instance, does not immediately produce selective advantages, but enables the acquisition of skills which by themselves provide selective advantages. More widely, the use of intrinsic motivation makes it possible to obtain intelligent behaviors which can serve goals more efficiently than standard reinforcement alone BIB002 (see Section 4). In practice, there are multiple ways to integrate an intrinsic reward into a RL framework. The main approach is to compute the agent's reward r as a weighted sum of an intrinsic reward r_int and the extrinsic reward r_ext: r = αr_int + βr_ext BIB006 BIB005 BIB007 BIB008 .
In this version, we can think of the intrinsic reward as an intrinsic bonus. When the extrinsic value function is needed to compute the intrinsic reward, or when the two terms require different hyper-parameters, the sum can instead be made at the value-function level, i.e. V(s) = αV_int(s) + βV_ext(s) . Another possibility is to rely on the option framework. Note that since a reward function with intrinsic rewards evolves over time, the agent generally cannot find an optimal stationary policy.
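The weighted-sum combination above can be written in a few lines; the weights α and β are hypothetical hyper-parameters balancing exploration (intrinsic bonus) against task progress (extrinsic reward):

```python
# Sketch of the combination r = alpha * r_int + beta * r_ext.
# alpha and beta are illustrative hyper-parameters.

def combined_reward(r_int, r_ext, alpha=0.5, beta=1.0):
    """Total reward: weighted intrinsic bonus plus extrinsic (task) reward."""
    return alpha * r_int + beta * r_ext

def combined_value(v_int, v_ext, alpha=0.5, beta=1.0):
    """Same combination at the value-function level,
    V(s) = alpha * V_int(s) + beta * V_ext(s), useful when the two
    streams need different hyper-parameters (e.g. discount factors)."""
    return alpha * v_int + beta * v_ext

# With sparse extrinsic rewards, the intrinsic bonus dominates early on:
print(combined_reward(r_int=2.0, r_ext=0.0))  # 1.0
```

Combining at the value level rather than the reward level lets each stream keep its own estimator, at the cost of training two value functions.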
A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> A locality sensitive hashing scheme is a distribution on a family F of hash functions operating on a collection of objects, such that for two objects x, y, Pr_{h∈F}[h(x) = h(y)] = sim(x, y), where sim(x, y) ∈ [0, 1] is some similarity function defined on the collection of objects. Such a scheme leads to a compact representation of objects so that similarity of objects can be estimated from their compact sketches, and also leads to efficient algorithms for approximate nearest neighbor search and clustering. Min-wise independent permutations provide an elegant construction of such a locality sensitive hashing scheme for a collection of subsets with the set similarity measure sim(A, B) = |A ∩ B| / |A ∪ B|. A similar scheme for the Earth Mover Distance yields E_{h∈F}[d(h(P), h(Q))] ≤ O(log n log log n) · EMD(P, Q). <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches.
We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics. <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively simpler and more beautiful. Curiosity is the desire to create or discover more non-random, non-arbitrary, regular data that is novel and surprising not in the traditional sense of Boltzmann and Shannon but in the sense that it allows for compression progress because its regularity was not yet known. This drive maximizes interestingness, the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and (since 1990) artificial systems. <s> BIB003 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> The concept of surprise is central to sensory processing, adaptation, learning, and attention. Yet, no widely-accepted mathematical theory currently exists to quantitatively characterize surprise elicited by a stimulus or event, for observers that range from single neurons to complex natural or engineered systems. We describe a formal Bayesian definition of surprise that is the only consistent formulation under minimal axiomatic assumptions. Surprise quantifies how data affects a natural or artificial observer, by measuring the difference between posterior and prior beliefs of the observer. Using this framework we measure the extent to which humans direct their gaze towards surprising items while watching television and video games. 
We find that subjects are strongly attracted towards surprising locations, with 72% of all human gaze shifts directed towards locations more surprising than the average, a figure which rises to 84% when considering only gaze targets simultaneously selected by all subjects. The resulting theory of surprise is applicable across different spatio-temporal scales, modalities, and levels of abstraction. <s> BIB004 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> Formal exploration approaches in model-based reinforcement learning estimate the accuracy of the currently learned model without consideration of the empirical prediction error. For example, PAC-MDP approaches such as R-MAX base their model certainty on the amount of collected data, while Bayesian approaches assume a prior over the transition dynamics. We propose extensions to such approaches which drive exploration solely based on empirical estimates of the learner's accuracy and learning progress. We provide a "sanity check" theoretical analysis, discussing the behavior of our extensions in the standard stationary finite state-action case. We then provide experimental studies demonstrating the robustness of these exploration measures in cases of non-stationary environments or where original approaches are misled by wrong domain assumptions. <s> BIB005 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. 
The novel algorithmic framework POWERPLAY (2011) continually searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Wow-effects are achieved by continually making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. POWERPLAY's search orders candidate pairs of tasks and solver modifications by their conditional computational (time & space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. The computational costs of validating new tasks need not grow with task repertoire size. POWERPLAY's ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Goedel's sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. POWERPLAY may be viewed as a greedy but practical implementation of basic principles of creativity. A first experimental analysis can be found in separate papers [53,54]. <s> BIB006 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> Context Tree Weighting is a powerful probabilistic sequence prediction technique that efficiently performs Bayesian model averaging over the class of all prediction suffix trees of bounded depth. In this paper we show how to generalize this technique to the class of K-skip prediction suffix trees. 
Contrary to regular prediction suffix trees, K-skip prediction suffix trees are permitted to ignore up to K contiguous portions of the context. This allows for significant improvements in predictive accuracy when irrelevant variables are present, a case which often occurs within record-aligned data and images. We provide a regret-based analysis of our approach, and empirically evaluate it on the Calgary corpus and a set of Atari 2600 screen prediction tasks. <s> BIB007 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> Achieving efficient and scalable exploration in complex domains poses a major challenge in reinforcement learning. While Bayesian and PAC-MDP approaches to the exploration problem offer strong formal guarantees, they are often impractical in higher dimensions due to their reliance on enumerating the state-action space. Hence, exploration in complex domains is often performed with simple epsilon-greedy methods. In this paper, we consider the challenging Atari games domain, which requires processing raw pixel inputs and delayed rewards. We evaluate several more sophisticated exploration strategies, including Thompson sampling and Boltzman exploration, and propose a new exploration method based on assigning exploration bonuses from a concurrently learned model of the system dynamics. By parameterizing our learned model with a neural network, we are able to develop a scalable and efficient approach to exploration bonuses that can be applied to tasks with complex, high-dimensional state spaces. In the Atari domain, our method provides the most consistent improvement across a range of games that pose a major challenge for prior methods. In addition to raw game-scores, we also develop an AUC-100 metric for the Atari Learning domain to evaluate the impact of exploration on this benchmark. 
<s> BIB008 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> This article reviews an emerging field that aims for autonomous reinforcement learning (RL) directly on sensor-observations. Straightforward end-to-end RL has recently shown remarkable success, but relies on large amounts of samples. As this is not feasible in robotics, we review two approaches to learn intermediate state representations from previous experiences: deep auto-encoders and slow-feature analysis. We analyze theoretical properties of the representations and point to potential improvements. <s> BIB009 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. ::: Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. ::: ::: In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. 
Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration. <s> BIB010 </s> We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across observations. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use sequential density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary sequential density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into intrinsic rewards and obtain significantly improved exploration in a number of hard games, including the infamously difficult Montezuma's Revenge. <s> BIB011 </s> Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. The primary difficulty arises due to insufficient exploration, resulting in an agent being unable to learn robust value functions. Intrinsically motivated agents can explore new behavior for its own sake rather than to directly solve problems. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment.
We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning. A top-level value function learns a policy over intrinsic goals, and a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse, delayed feedback: (1) a complex discrete stochastic decision process, and (2) the classic ATARI game `Montezuma's Revenge'. <s> BIB012 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as epsilon-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards. 
<s> BIB013 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> Non-player characters (NPCs) in games are traditionally hard-coded or dependent on pre-specified goals, and consequently struggle to behave sensibly in ever-changing and possibly unpredictable game worlds. To make them fit for new developments in procedural content generation, we introduce the principle of Coupled Empowerment Maximisation as an intrinsic motivation for game NPCs. We focus on the development of a general game companion, designed to support the player in achieving their goals. We evaluate our approach against three intuitive and abstract companion duties. We develop dedicated scenarios for each duty in a dungeon-crawler game testbed, and provide qualitative evidence that the emergent NPC behaviour fulfils these duties. We argue that this generic approach can speed up NPC AI development, improve automatic game evolution and introduce NPCs to full game-generation systems. <s> BIB014 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> We present Project Malmo - an AI experimentation platform built on top of the popular computer game Minecraft, and designed to support fundamental research in artificial intelligence. As the AI research community pushes for artificial general intelligence (AGI), experimentation platforms are needed that support the development of flexible agents that learn to solve diverse tasks in complex environments. Minecraft is an ideal foundation for such a platform, as it exposes agents to complex 3D worlds, coupled with infinitely varied game-play. ::: ::: Project Malmo provides a sophisticated abstraction layer on top of Minecraft that supports a wide range of experimentation scenarios, ranging from navigation and survival to collaboration and problem solving tasks. In this demo we present the Malmo platform and its capabilities. 
The platform is publicly released as open source software at IJCAI, to support openness and collaboration in AI research. <s> BIB015 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch. Demo video and code available at this https URL <s> BIB016 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as $\epsilon$-greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. 
Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent's surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the $k$-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques. <s> BIB017 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo's own move selections and also the winner of AlphaGo's games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo. 
<s> BIB018 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> Reinforcement learning algorithms rely on carefully engineering environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent. Curiosity is a type of intrinsic reward function which uses prediction error as reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, and a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many game environments. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. Game-play videos and code are at https://pathak22.github.io/large-scale-curiosity/ <s> BIB019 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. 
In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level. <s> BIB020 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> We explore methods for option discovery based on variational inference and make two algorithmic contributions. First: we highlight a tight connection between variational option discovery methods and variational autoencoders, and introduce Variational Autoencoding Learning of Options by Reinforcement (VALOR), a new method derived from the connection. In VALOR, the policy encodes contexts from a noise distribution into trajectories, and the decoder recovers the contexts from the complete trajectories. Second: we propose a curriculum learning approach where the number of contexts seen by the agent increases whenever the agent's performance is strong enough (as measured by the decoder) on the current set of contexts. We show that this simple trick stabilizes training for VALOR and prior variational option discovery methods, allowing a single agent to learn many more modes of behavior than it could with a fixed context distribution. Finally, we investigate other topics related to variational option discovery, including fundamental limitations of the general approach and the applicability of learned options to downstream tasks. <s> BIB021 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> Rewards are sparse in the real world and most today's reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself - thus making rewards dense and more suitable for learning. 
In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. Such bonus is summed up with the real task reward - making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is done based on how many environment steps it takes to reach the current observation from those in memory - which incorporates rich information about environment dynamics. This allows us to overcome the known"couch-potato"issues of prior work - when the agent finds a way to instantly gratify itself by exploiting actions which lead to hardly predictable consequences. We test our approach in visually rich 3D environments in ViZDoom, DMLab and MuJoCo. In navigational tasks from ViZDoom and DMLab, our agent outperforms the state-of-the-art curiosity method ICM. In MuJoCo, an ant equipped with our curiosity module learns locomotion out of the first-person-view curiosity only. <s> BIB022 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> Goal-conditioned policies are used in order to break down complex reinforcement learning (RL) problems by using subgoals, which can be defined either in state space or in a latent feature space. This can increase the efficiency of learning by using a curriculum, and also enables simultaneous learning and generalization across goals. A crucial requirement of goal-conditioned policies is to be able to determine whether the goal has been achieved. Having a notion of distance to a goal is thus a crucial component of this approach. However, it is not straightforward to come up with an appropriate distance, and in some tasks, the goal space may not even be known a priori. 
In this work we learn a distance-to-goal estimate which is computed in terms of the number of actions that would need to be carried out in a self-supervised approach. Our method solves complex tasks without prior domain knowledge in the online setting in three different scenarios in the context of goal-conditioned policies a) the goal space is the same as the state space b) the goal space is given but an appropriate distance is unknown and c) the state space is accessible, but only a subset of the state space represents desired goals, and this subset is known a priori. We also propose a goal-generation mechanism as a secondary contribution. <s> BIB023 </s> A survey on intrinsic motivation in reinforcement learning <s> Exploration <s> This paper provides an empirical evaluation of recently developed exploration algorithms within the Arcade Learning Environment (ALE). We study the use of different reward bonuses that incentivize exploration in reinforcement learning. We do so by fixing the learning algorithm used and focusing only on the impact of the different exploration bonuses in the agent's performance. We use Rainbow, the state-of-the-art algorithm for value-based agents, and focus on some of the bonuses proposed in the last few years. We consider the impact these algorithms have on performance within the popular game Montezuma's Revenge which has gathered a lot of interest from the exploration community, across the set of seven games identified by Bellemare et al. (2016) as challenging for exploration, and easier games where exploration is not an issue. We find that, in our setting, recently developed bonuses do not provide significantly improved performance on Montezuma's Revenge or hard exploration games. We also find that existing bonus-based methods may negatively impact performance on games in which exploration is not an issue and may even perform worse than $\epsilon$-greedy exploration. <s> BIB024
This subsection describes the three main families of methods tackling the exploration problem. The first uses prediction error, the second evaluates state novelty and the third is based on information gain. In each case, the intrinsic motivation complements an exploration policy. Prediction error. The idea here is to lead the agent towards areas where the prediction of the state following a state-action tuple is difficult. We can formalize this intrinsic reward as the prediction error on the next state, computed as the distance between predicted and real next state: $R(s_t, a_t, s_{t+1}) = \|g(s_{t+1}) - \hat{F}(g(s_t), a_t)\|_2$, where g is a generic function (e.g. the identity or a learnt one) encoding the state space into a feature space and $\hat{F}$ is a model of the environmental dynamics. In the following, we consider that $\hat{F}$ is a neural network that learns a forward model predicting the next encoded state given the current encoded state and the action. We will see that learning a relevant function g is here the main challenge. Dynamic Auto-Encoder (Dynamic-AE) BIB008 computes the distance between predicted and real state in a state space compressed with an auto-encoder; g is then the encoding part of the auto-encoder. This distance serves as an intrinsic reward. However this approach is unable to handle local stochasticity of the environment BIB019 . For example, it turns out that adding random noise in a 3D environment attracts the agent: it will passively watch the noise, since it can never learn to predict the next observation. This problem is also called the white-noise problem BIB016 BIB006 . A potential solution would be to make sure that transitions can be learnt, i.e. that transitions are not too stochastic, but this is difficult to achieve in practice BIB005 . The intrinsic curiosity module (ICM) BIB016 learns environment dynamics in a feature space. It first builds a state representation by learning an inverse model, which uses the current and next states to predict the action done between them.
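The generic prediction-error bonus defined at the start of this paragraph can be sketched in a few lines. The two learned components are replaced by hypothetical stand-ins (a fixed random linear projection for g, a linear map for the forward model $\hat{F}$); in practice both would be neural networks trained on the agent's transitions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoder g: a fixed random projection from raw states to features.
W_enc = rng.normal(size=(4, 16))          # 16-dim raw state -> 4-dim feature

def g(state):
    return W_enc @ state

# Stand-in forward model F_hat: predicts the next feature from (feature, action).
W_fwd = rng.normal(size=(4, 4 + 2))       # inputs: 4 features + 2-dim action

def f_hat(feat, action):
    return W_fwd @ np.concatenate([feat, action])

def intrinsic_reward(state, action, next_state):
    """Prediction-error bonus: distance between predicted and real next feature."""
    predicted = f_hat(g(state), action)
    return float(np.linalg.norm(predicted - g(next_state)))

s, a, s_next = rng.normal(size=16), rng.normal(size=2), rng.normal(size=16)
r = intrinsic_reward(s, a, s_next)
assert r >= 0.0
```

The bonus vanishes exactly when the forward model predicts the next feature perfectly, which is what drives the agent away from well-modelled regions.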
Thus the function g constrains the representation to features that can be controlled by the agent. Secondly, ICM predicts, in the feature space computed by g, the next state given the action and the current state (forward model $\hat{F}$). The prediction error does not incorporate the white noise that does not depend on actions, so it will not be represented in the feature space. ICM notably allows the agent to explore its environment in the games VizDoom and Super Mario Bros. In Super Mario Bros, the agent crosses 30% of the first level without extrinsic reward. However one major drawback is the inability of the agent to capture what depends on its long-term control; for example, it may only perceive the consequences of its actions several steps later. Still considering the prediction error as an intrinsic bonus, BIB019 propose a summary of the different ways of defining the feature space g. They show, on one hand, that using random features can be competitive with ICM but hardly generalizes to environment changes, and on the other hand, that using the raw state space (e.g. pixels) is ineffective. AR4E [Oh and Cavallaro, 2019] reuses the ICM module, but encodes the action in a large space before concatenating it with the current state. This trick seems to improve on ICM, but an analysis explaining the obtained results is lacking. EMI entirely transfers the complexity of learning a forward model into the learning of a state and action representation. The forward model is then constrained to be a simple linear model in the representation space, with an additional module which captures the intrinsically non-linear part of the dynamics (for example, a screen change). The different spaces are computed by maximizing $I([s, a]; s')$ and $I([s, s']; a)$ with the variational divergence lower bound of the mutual information. In other words, g is constructed so as to make a simple linear dynamics model efficient.
The intrinsic reward is then the norm of the error of the forward model. EMI outperforms previous works on Atari on a short timescale and proves able to construct an embedding space related to positions, but it does not tackle the white-noise problem. State novelty. There is a large literature on the measure of state novelty as intrinsic motivation. Initially, the intuition was to add an intrinsic bonus when the agent goes into a state in which it usually never goes [Brafman and Tennenholtz, 2002; Kearns and Singh, 2002] . These methods are said to be count-based. As the agent visits a state, the intrinsic reward associated with this state decreases. It can be formalized with: $R(s_t) = \frac{1}{\sqrt{N(s_t)}}$, where $N(s_t)$ is the number of times the state has been visited. Although this method is efficient in a tabular environment (with a discrete state space), it is hardly applicable when states are numerous or continuous, since an agent never really returns to the same state. A first solution, proposed by BIB010 and called TRPO-AE-hash, is to hash the state space with SimHash BIB001 when it is too large. However these results are only slightly better than those obtained with a classic exploration policy. Other adaptations to very large state spaces have been proposed, like DDQN-PC BIB011 , A3C+ BIB011 or DQN-PixelCNN BIB002 , which rely on density models BIB023 BIB007 . Density models allow to compute the pseudo-count BIB011 , which is an adaptation of the count enabling its generalization from a state to neighbouring states. The intrinsic reward is then: $R(s_t) = \frac{1}{\sqrt{\hat{N}(s_t)}}$, where $\hat{N}(s_t)$ is the pseudo-count defined as: $\hat{N}(s_t) = \frac{\rho(s_t)(1 - \rho'(s_t))}{\rho'(s_t) - \rho(s_t)}$, with $\rho(s)$ the density model which outputs the probability of observing s, and $\rho'(s)$ the probability of observing s after one more pass on s. Although the algorithms based on density models work in environments with sparse rewards, density models add an important complexity layer BIB002 .
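The pseudo-count construction above can be illustrated with a deliberately simple density model: an empirical categorical distribution over discrete states, standing in for the CTS or PixelCNN models used in the cited works. This is a hedged sketch, not the papers' implementation.

```python
import math
from collections import Counter

class PseudoCountBonus:
    """Count-based bonus via pseudo-counts. The density model is a plain
    empirical categorical distribution over discrete states -- an
    illustrative stand-in for learned density models such as CTS or
    PixelCNN."""

    def __init__(self):
        self.counts = Counter()
        self.n = 0
        self.last_pseudo_count = 0.0

    def reward(self, s):
        # rho(s): probability of s under the current density model.
        rho = self.counts[s] / self.n if self.n else 0.0
        # rho'(s): probability of s after one more (hypothetical) visit of s.
        rho_prime = (self.counts[s] + 1) / (self.n + 1)
        if rho_prime > rho:
            pseudo_count = rho * (1.0 - rho_prime) / (rho_prime - rho)
        else:
            pseudo_count = float("inf")  # degenerate: model already certain of s
        self.last_pseudo_count = pseudo_count
        # Record the real visit, then return the count-based bonus.
        self.counts[s] += 1
        self.n += 1
        return 1.0 / math.sqrt(pseudo_count + 0.01)

bonus = PseudoCountBonus()
bonus.reward("room_A")
bonus.reward("room_B")
r1 = bonus.reward("room_A")   # pseudo-count is 1 here
r2 = bonus.reward("room_A")   # pseudo-count is 2 here
assert r1 > r2                # the bonus decays as the state is revisited
```

For this empirical model the pseudo-count recovers the true visit count exactly; the value of the construction is that a learned density model generalizes counts to unseen but similar states.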
In order to decrease this computational complexity, φ-EB models the density not on the raw state space but on a feature space induced by the computation of V(s). The results are quite impressive on Montezuma's Revenge considering the cutback in computational cost. The latent space can also be computed with a variational auto-encoder BIB023 . More indirectly, DQN+SR BIB005 uses the norm of the successor representation BIB012 as intrinsic reward. To justify this choice, the authors explain that this bonus is correlated with the count. Cleverly, DORA the explorer uses another MDP which contains no rewards. The value of a state in this MDP is optimistically biased, in such a way that it decreases as the agent updates it. The computed value is used as an approximation of the count. The approach is natural in a continuous space, but it lacks experiments comparing it with existing approaches. Finally, RND BIB020 assesses state novelty by distilling a random neural network (with fixed weights) into another neural network. For every state, the random network produces continuous random features. The second network learns to reproduce the output of the random network for each state, and the prediction error is the reward. This amounts to rewarding state novelty, since the error is high for states on which the second network has rarely been trained, and low once it has learned a lot about them. However the agent does not manage to learn long-term exploration. For example, in Montezuma's Revenge, the agent first uses its keys to open the doors it sees, but it does not manage to reach the last two doors. In addition, RND achieves the highest score on Montezuma's Revenge, but with a significantly larger number of steps (see Table 3). Lastly, random features can be insufficient to represent the richness of an environment. Novelty as discrepancy towards other states.
Another way to evaluate state novelty is to estimate it as the distance between a state and the states usually covered. With D a distance function and B a distribution of states in a moving buffer, this kind of reward can be described as: $R(s_t) = \mathbb{E}_{s \sim B}[D(s_t, s)]$. Informed exploration uses a forward model to predict which action will bring the agent into the states most different from its d last visited states. The authors use a Gaussian kernel as the distance function. However they do not use this distance as an intrinsic reward but as a way to choose the action, instead of an $\epsilon$-greedy strategy. It would be interesting to evaluate it as an intrinsic reward. EX2 learns a discriminator to differentiate states from each other: when the discriminator does not manage to differentiate the current state from those in the buffer, it means the agent has not visited this state enough, and it is rewarded; conversely, no bonus is given when the discriminator succeeds. CB combines prediction error and state novelty. It takes inspiration from the deep variational information bottleneck BIB021 : it computes a latent state space by maximizing the mutual information between the state value and this latent space, with a latent distribution as entropic as possible. The intrinsic reward for a state is then the KL-divergence between a fixed diagonal Gaussian prior and the posterior distribution of the latent variables. As a result, as long as the agent does not find any reward, it will look for rare states whose distribution in the latent space differs from the prior. When the agent finds the reward, the latent distribution will differ from the prior and the intrinsic reward will guide the agent towards interesting areas. While this approach seems interesting to avoid distractors and provides good results on Gravitar and Solaris, it requires an extrinsic reward to avoid stochasticity.
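The discrepancy-based reward $R(s_t) = \mathbb{E}_{s \sim B}[D(s_t, s)]$ can be sketched with the simplest possible choices: Euclidean distance for D and a FIFO buffer for B. Both are assumptions for illustration; the methods discussed above use Gaussian kernels or a learned discriminator instead.

```python
import numpy as np
from collections import deque

class DiscrepancyBonus:
    """Novelty bonus as the mean Euclidean distance between the current
    state and a moving buffer of past states. The distance D and the
    buffer policy are both design choices of the sketch."""

    def __init__(self, capacity=100):
        self.buffer = deque(maxlen=capacity)

    def reward(self, state):
        state = np.asarray(state, dtype=float)
        if self.buffer:
            r = float(np.mean([np.linalg.norm(state - s) for s in self.buffer]))
        else:
            r = 0.0                       # no reference points yet
        self.buffer.append(state)
        return r

bonus = DiscrepancyBonus()
for _ in range(10):
    bonus.reward([0.0, 0.0])              # the agent lingers around the origin
r_near = bonus.reward([0.1, 0.0])         # close to past states: small bonus
r_far = bonus.reward([5.0, 5.0])          # far from past states: large bonus
assert r_far > r_near
```

The moving buffer makes the bonus non-stationary: once a region is stored, returning to it is no longer rewarded, which pushes the agent outward.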
A similar KL-divergence intrinsic reward can be found in VSIMR [Klissarov et al., ] , but computed with a standard variational auto-encoder (VAE). The episodic curiosity module (ECO) BIB022 deepens this idea by taking inspiration from episodic memory. The proposed model contains a comparison module (trained with a Siamese architecture) able to give a bonus if the agent is far from the states contained in a buffer. To do so, it computes the probability that the number of actions needed to reach a selected state (in the buffer) from the current state is below a threshold. By storing sparse states in a buffer, the agent sets reference points in the environment and tries to get away from them, like a partitioning of the environment. The probability that the agent is far from every state in the buffer is used as an intrinsic reward. This model has been applied to 3D environments like DMLab or VizDoom [Kempka et al., 2016] and enables an agent to explore the overall environment. However, to compute the intrinsic reward, the agent has to compare its current observation to each memorized state. Scaling up this method may then be difficult when the state space is rich, since more states are required to efficiently partition the state space. On the other hand, this method does not suffer from the white-noise problem (cf. §4.1). State marginal matching (SMM) is a method close to pseudo-counts, but it computes the KL-divergence between the state distribution induced by the policy and a target distribution. In fact, when the target distribution is uniform, the agent strives to maximize the state entropy. This objective is also combined with the discriminative policy objective explained in section 5.1, which induces a distribution of trajectories. The results are interesting on navigation tasks, but they are not compared to usual exploration benchmarks.
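The state-marginal-matching objective can be made concrete on a toy discrete state space. A minimal sketch, assuming a uniform target distribution and a smoothed empirical visitation model as stand-ins for the learned densities of the actual method: the per-state reward is $\log p^*(s) - \log \rho_\pi(s)$, so states the policy under-visits relative to the target receive a higher bonus.

```python
import math
from collections import Counter

def smm_rewards(visited_states, all_states):
    """State-marginal-matching sketch: reward log p*(s) - log rho(s), with
    a uniform target p* and a Laplace-smoothed empirical visitation
    model rho (simple stand-ins for the learned densities)."""
    counts = Counter(visited_states)
    n = len(visited_states)
    target = 1.0 / len(all_states)        # uniform target distribution
    rewards = {}
    for s in all_states:
        rho = (counts[s] + 1) / (n + len(all_states))   # Laplace smoothing
        rewards[s] = math.log(target) - math.log(rho)
    return rewards

visits = ["A"] * 8 + ["B"] * 2            # the current policy over-visits A
r = smm_rewards(visits, ["A", "B", "C"])
assert r["C"] > r["B"] > r["A"]           # rarely visited states get higher reward
```

With a uniform target, maximizing this reward in expectation amounts to maximizing the entropy of the state visitation distribution, as noted above.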
Among methods based on the computation of state novelty, Stanton and Clune (2018) distinguish inter-episode novelty, used by A3C+ BIB011 , and intra-episode novelty, which can be found in ECO BIB022 and informed exploration. Typically, intra-episode novelty resets the state count at the beginning of each episode. It could be a way to overcome the difficulty of RND BIB020 in handling long-term exploration. Information gain. The information gain is a reward based on the reduction of uncertainty about the environment's dynamics BIB002 [Little and Sommer, 2013] , which can also be assimilated to learning progress BIB002 [Schmidhuber, 1991] or Bayesian surprise BIB004 BIB003 . This allows, on one hand, to push the agent towards areas it does not know, and on the other hand, to prevent attraction towards stochastic areas. Indeed, if the area is deterministic, the environment's transitions are predictable and the uncertainty about its dynamics can decrease. On the opposite, if transitions are stochastic, the agent turns out to be unable to predict them and does not reduce uncertainty. If θ is the parameter set of a parametric dynamics model and U refers to uncertainty, this can be defined as: $R(s_t, a_t, s_{t+1}) = U_t(\theta) - U_{t+1}(\theta)$. The exploration strategy VIME BIB013 formalizes learning progress in a Bayesian way. The interest of Bayesian approaches is the ability to measure the uncertainty on the learned model BIB009 . Thus, the agent approximates the dynamics with a Bayesian neural network BIB014 , and computes the reward as the uncertainty reduction on its weights. In other words, the agent tries to perform actions which are informative about the dynamics. In a similar way, BIB017 replace the Bayesian model by a classic neural network followed by a factorized Gaussian probability distribution. Two rewards are evaluated: the first one (NLL) uses the cross-entropy of the prediction as intrinsic bonus, and the second one (AKL) the improvement of the prediction between time t and time t + k, after k updates.
Although these methods are simpler than VIME, their benefit in terms of performance is mixed. More creatively, some works train several (generally 5) forward models in a feature space and compare their predictions. The more the models are trained on a state-action tuple, the more they converge to the expected value of the features of the next state. The intrinsic reward is then the variance of the ensemble of predictions. The benefit is that the variance is high when the forward models are not yet learned, but low when the noise comes from the environment, since all the models converge to the mean value. It appears that this method performs similarly to state-of-the-art approaches BIB019 and handles the white-noise effect. However the main issue is computational, since multiple forward models must be trained. A similar idea can be found in MAX BIB018 , but using the Jensen-Shannon divergence between the distributions of stochastic forward models instead of the variance across the models' outputs. Conclusion. To conclude, the exploration problem is probably the largest use case for intrinsic motivation. We provide a synthesis of our analysis in Table 3. A complementary benchmark can be found in BIB024 . There are several distinct families: most count-based approaches are adapted to fully-observable MDPs, like Montezuma's Revenge; prediction error is relatively simple but relies on a good state representation; information-gain methods are particularly adequate to prevent the white-noise problem. In fact, before choosing the right exploration method, it is important to consider the trade-off between computational cost and efficiency. On simple environments, simple methods can perform well. So far, the most complex environment tested is Montezuma's Revenge; however, it might be necessary to consider larger or infinite environments like Minecraft BIB015 to wisely assess and compare these methods.
Indeed, it would be important to know how count-based methods BIB002 or ECO BIB022 scale to these kinds of environments. Furthermore, to our knowledge, few works have tried to adapt these exploration processes to a multi-agent scenario, which is known to have an exponentially larger state space [Oliehoek, 2012] . Among them, Iqbal and Sha (2019) introduce different ways to guide the exploration process, but only consider very simple tabular environments.
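The ensemble-disagreement bonus discussed in the information-gain paragraph can be sketched compactly. Untrained random linear models stand in for the trained networks (an assumption of the sketch); the reward is the variance across the ensemble's predictions, which vanishes once all members agree.

```python
import numpy as np

rng = np.random.default_rng(1)

# An ensemble of (untrained) linear forward models over features. In the
# actual method each member is a neural network trained on the agent's
# transitions; random linear maps keep the sketch short.
n_models, feat_dim, act_dim = 5, 4, 2
ensemble = [rng.normal(size=(feat_dim, feat_dim + act_dim)) for _ in range(n_models)]

def disagreement_reward(feat, action):
    """Intrinsic reward = variance across the ensemble's next-state predictions."""
    x = np.concatenate([feat, action])
    preds = np.stack([W @ x for W in ensemble])   # shape (n_models, feat_dim)
    return float(preds.var(axis=0).mean())

feat, act = rng.normal(size=feat_dim), rng.normal(size=act_dim)
r = disagreement_reward(feat, act)
assert r >= 0.0
```

This is why the bonus handles white noise: on a stochastic transition all trained members converge to the same mean prediction, so their disagreement, and hence the reward, goes to zero even though each member's own prediction error stays high.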
A survey on intrinsic motivation in reinforcement learning <s> VIC [Gregor <s> Achieving efficient and scalable exploration in complex domains poses a major challenge in reinforcement learning. While Bayesian and PAC-MDP approaches to the exploration problem offer strong formal guarantees, they are often impractical in higher dimensions due to their reliance on enumerating the state-action space. Hence, exploration in complex domains is often performed with simple epsilon-greedy methods. In this paper, we consider the challenging Atari games domain, which requires processing raw pixel inputs and delayed rewards. We evaluate several more sophisticated exploration strategies, including Thompson sampling and Boltzman exploration, and propose a new exploration method based on assigning exploration bonuses from a concurrently learned model of the system dynamics. By parameterizing our learned model with a neural network, we are able to develop a scalable and efficient approach to exploration bonuses that can be applied to tasks with complex, high-dimensional state spaces. In the Atari domain, our method provides the most consistent improvement across a range of games that pose a major challenge for prior methods. In addition to raw game-scores, we also develop an AUC-100 metric for the Atari Learning domain to evaluate the impact of exploration on this benchmark. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> VIC [Gregor <s> Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as epsilon-greedy exploration or adding Gaussian noise to the controls. 
This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards. <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> VIC [Gregor <s> Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as $\epsilon$-greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent's surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the $k$-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques. 
<s> BIB003 </s> A survey on intrinsic motivation in reinforcement learning <s> VIC [Gregor <s> We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level. <s> BIB004 </s> A survey on intrinsic motivation in reinforcement learning <s> VIC [Gregor <s> Reinforcement learning algorithms rely on carefully engineering environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent. Curiosity is a type of intrinsic reward function which uses prediction error as reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, and a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many game environments. 
(b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. Game-play videos and code are at https://pathak22.github.io/large-scale-curiosity/ <s> BIB005 </s> A survey on intrinsic motivation in reinforcement learning <s> VIC [Gregor <s> Rewards are sparse in the real world and most today's reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself - thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. Such bonus is summed up with the real task reward - making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is done based on how many environment steps it takes to reach the current observation from those in memory - which incorporates rich information about environment dynamics. This allows us to overcome the known"couch-potato"issues of prior work - when the agent finds a way to instantly gratify itself by exploiting actions which lead to hardly predictable consequences. We test our approach in visually rich 3D environments in ViZDoom, DMLab and MuJoCo. In navigational tasks from ViZDoom and DMLab, our agent outperforms the state-of-the-art curiosity method ICM. In MuJoCo, an ant equipped with our curiosity module learns locomotion out of the first-person-view curiosity only. 
<s> BIB006 </s> A survey on intrinsic motivation in reinforcement learning <s> VIC [Gregor <s> We propose a unified mechanism for achieving coordination and communication in Multi-Agent Reinforcement Learning (MARL), through rewarding agents for having causal influence over other agents' actions. Causal influence is assessed using counterfactual reasoning. At each timestep, an agent simulates alternate actions that it could have taken, and computes their effect on the behavior of other agents. Actions that lead to bigger changes in other agents' behavior are considered influential and are rewarded. We show that this is equivalent to rewarding agents for having high mutual information between their actions. Empirical results demonstrate that influence leads to enhanced coordination and communication in challenging social dilemma environments, dramatically increasing the learning curves of the deep RL agents, and leading to more meaningful learned communication protocols. The influence rewards for all agents can be computed in a decentralized way by enabling agents to learn a model of other agents using deep neural networks. In contrast, key previous works on emergent communication in the MARL setting were unable to learn diverse policies in a decentralized manner and had to resort to centralized training. Consequently, the influence reward opens up a window of new opportunities for research in this area. <s> BIB007 </s> A survey on intrinsic motivation in reinforcement learning <s> VIC [Gregor <s> Efficient exploration is a long-standing problem in sensorimotor learning. Major advances have been demonstrated in noise-free, non-stochastic domains such as video games and simulation. However, most of these formulations either get stuck in environments with stochastic dynamics or are too inefficient to be scalable to real robotics setups. In this paper, we propose a formulation for exploration inspired by the work in active learning literature. 
Specifically, we train an ensemble of dynamics models and incentivize the agent to explore such that the disagreement of those ensembles is maximized. This allows the agent to learn skills by exploring in a self-supervised manner without any external reward. Notably, we further leverage the disagreement objective to optimize the agent's policy in a differentiable manner, without using reinforcement learning, which results in a sample-efficient exploration. We demonstrate the efficacy of this formulation across a variety of benchmark environments including stochastic-Atari, Mujoco and Unity. Finally, we implement our differentiable exploration on a real robot which learns to interact with objects completely from scratch. Project videos and code are at this https URL <s> BIB008
where h is the observation history (including the current observation and action). Experiments on diverse environments show that the learned trajectories lead to diverse areas and that a pretraining using empowerment helps to learn a task. However, the learned tasks remain relatively simple. Mega-reward formalizes empowerment differently: instead of directly using mutual information, it splits the pixel space into a matrix which defines the probability of controlling the corresponding part of the image. The intrinsic reward is then the sum of the matrix. They also show that the matrix can act as a mask to hide uncontrollable features, which other intrinsic exploration methods BIB004 can benefit from to reduce the white-noise problem in a long-term way (as opposed to the ICM method, which detects short-term controllable features). However, the method is inherently tied to pixel-based environments. Chuck et al. (2019) provide a specific architecture relying on multiple assumptions, such as the fact that an object cannot spontaneously change its direction or its proximity to the objects it interacts with. The agent formulates hypotheses on the controllability of objects, which it tries to verify through a specific policy rewarded by an intrinsic verification process. Verified hypotheses can then be used directly as skills. Empowerment may also be interesting in multi-agent RL. Multi-agent RL is similar to single-agent RL except that several agents learn simultaneously to solve a task and have to coordinate with each other. BIB007 show that in a non-cooperative game, such as a social dilemma, an empowerment-based intrinsic reward can stabilize the learning process. In fact, it compensates for the decrease of individual reward caused by a policy maximizing the long-term reward of all the agents. Conclusion. Empowerment is an interesting method to avoid an extrinsic reward while keeping various complex behaviors. The main difficulty in using empowerment in RL is its computation.
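To make this computational difficulty concrete, the one-step empowerment of a single state is the channel capacity max over p(a) of I(a; s'|s), which is only tractable in small tabular settings. The sketch below (illustrative, not taken from any of the surveyed works) approximates it with the Blahut-Arimoto algorithm on the action -> next-state channel:

```python
import numpy as np

def empowerment(P, n_iter=100):
    """One-step empowerment of a fixed state: channel capacity of the
    action -> next-state channel P[a, s'], via Blahut-Arimoto."""
    n_actions = P.shape[0]
    p_a = np.full(n_actions, 1.0 / n_actions)      # action distribution to optimize
    for _ in range(n_iter):
        p_s = p_a @ P                              # marginal over next states
        # Blahut-Arimoto update: reweight each action by exp(KL(P[a] || p_s))
        kl = np.array([np.sum(P[a] * np.log((P[a] + 1e-12) / (p_s + 1e-12)))
                       for a in range(n_actions)])
        p_a = p_a * np.exp(kl)
        p_a /= p_a.sum()
    p_s = p_a @ P
    # mutual information I(a; s') in bits under the optimized p(a)
    return float(sum(p_a[a] * np.sum(P[a] * np.log2((P[a] + 1e-12) / (p_s + 1e-12)))
                     for a in range(n_actions)))

# Two deterministic, perfectly distinguishable actions -> 1 bit of empowerment
P = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(round(empowerment(P), 3))  # → 1.0
```

The brute-force enumeration of next-state distributions is exactly what becomes intractable with high-dimensional states and unknown dynamics, which is why the approaches above resort to learned models or variational bounds.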
Several approaches use an environment model to compute the reward based on empowerment [Mohamed and Rezende, 2015; de Abril and Kanai, 2018]. However, the very essence of RL is that the agent does not know a priori the environment dynamics or the reward function. Existing work in this context remains relatively limited and is not sufficient to demonstrate the potential of empowerment to help the learning process. It is interesting to note that empowerment can push an agent to learn behaviors even in a priori static environments. Indeed, let us assume that the agent does not choose primitive actions directly, but options instead. If it has not learned options, it will be unable to distinguish them; thus it is as if the agent had no control over the environment. On the contrary, if its options are perfectly distinguishable in the state space, the agent has control over its environment. In fact, the issue is not about choosing the states maximizing empowerment, but about defining options which increase the overall empowerment. We will come back to this point in Section 5.1.

| Method | Stochasticity | Efficiency | Score | Timesteps |
| --- | --- | --- | --- | --- |
| *Prediction error* | | | | |
| No features BIB005 | No | 1 HD forward model | ∼160 | 200M |
| Dynamic-AE BIB001 | No | Forward model / AE | 0 | 5M |
| Random features BIB005 | No | Forward model | ∼250 | 200M |
| VAE features BIB005 | No | Forward model / VAE | ∼450 | 200M |
| ICM features BIB005 | Yes | Inverse model / Forward model | ∼160 | 200M |
| ICM | Yes | Inverse model / Forward model | 161 | 40M |
| AR4E [Oh and Cavallaro, 2019] | Yes | Inverse model / HD forward model | n/a | n/a |
| EMI | No | Forward model | n/a | n/a |
| EX2 | Yes | Discriminator | n/a | n/a |
| CB | No | IB | ∼1700 | n/a |
| VSIMR [Klissarov et al.] | No | VAE | n/a | n/a |
| ECO BIB006 | Yes | Siamese architecture / Several comparisons | n/a | n/a |
| SMM | Almost no | VAE / Discriminator | n/a | n/a |
| *Information gain* | | | | |
| VIME BIB002 | Yes | Bayesian forward model | n/a | n/a |
| AKL BIB003 | Yes | Stochastic forward model | n/a | n/a |
| Ensembles BIB008 | Yes | 5 LD forward models | n/a | n/a |

Table 3: Comparison between exploration strategies with intrinsic motivation. Stochasticity indicates whether the model handles the white-noise problem (a deeper analysis is provided in §6.1). Efficiency refers to highly expensive models added to the standard RL algorithm. Score is the mean score on Montezuma's Revenge, together with the number of timesteps executed to achieve this score. We also integrate results of some methods tested in papers other than the original one. Our table does not pretend to be an exhaustive comparison of methods but tries to give an intuition of their relative advantages. We invite the reader to have a look at the original articles for a more thorough study.
A survey on intrinsic motivation in reinforcement learning <s> Learning a relevant state representation <s> A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo's own move selections and also the winner of AlphaGo's games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Learning a relevant state representation <s> Most deep reinforcement learning techniques are unsuitable for robotics, as they require too much interaction time to learn useful, general control policies. This problem can be largely attributed to the fact that a state representation needs to be learned as a part of learning control policies, which can only be done through fitting expected returns based on observed rewards. While the reward function provides information on the desirability of the state of the world, it does not necessarily provide information on how to distill a good, general representation of that state from the sensory observations. State representation learning objectives can be used to help learn such a representation. 
While many of these objectives have been proposed, they are typically not directly combined with reinforcement learning algorithms. We investigate several methods for integrating state representation learning into reinforcement learning. In these methods, the state representation learning objectives help regularize the state representation during the reinforcement learning, and the reinforcement learning itself is viewed as a crucial state representation learning objective and allowed to help shape the representation. Using autonomous racing tests in the TORCS simulator, we show how the integrated methods quickly learn policies that generalize to new environments much better than deep reinforcement learning without state representation learning. <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> Learning a relevant state representation <s> We introduce a method to disentangle controllable and uncontrollable factors of variation by interacting with the world. Disentanglement leads to good representations and is important when applying deep neural networks (DNNs) in fields where explanations are required. This study attempts to improve an existing reinforcement learning (RL) approach to disentangle controllable and uncontrollable factors of variation, because the method lacks a mechanism to represent uncontrollable obstacles. To address this problem, we train two DNNs simultaneously: one that represents the controllable object and another that represents uncontrollable obstacles. For stable training, we applied a pretraining approach using a model robust against uncontrollable obstacles. Simulation experiments demonstrate that the proposed model can disentangle independently controllable and uncontrollable factors without annotated data. <s> BIB003
Learning a relevant state representation is the ability of the agent to project its raw state onto a feature space with meaningful properties (cf. §3.1). Random policies as well as task-specific policies only access a subset of the state space, which can prevent the construction of a disentangled state representation. Indeed, the distribution and sequence of states reached by the agent strongly depend on the overall policy of the agent. Intrinsic motivation is of great interest here, as it enables the construction of a policy generating the right distribution of interactions. Generally, two successive states must be close in the built feature space: taking states into account independently is not sufficient to produce an efficient representation. Moreover, it is desirable to separate the different objects to which the agent can pay attention, since this facilitates the learning process. We will study in this subsection how intrinsic motivation offers a valuable alternative to standard methods, by providing interactions in the environment that take into account the necessary temporality of the observations [Caselles-Dupré et al., 2019; BIB002 and their causal link. Another advantage is that no supervision is needed. State space as a measure. One work considers a goal-parameterized problem where the goal space is the state space. The authors propose a specific reward to learn a state representation for which the L2 distance between two states is proportional to the minimal number of actions needed to go from one state to the other. To do so, the intrinsic reward function is composed of two parts: the first part imposes that the agent reaches the goal, with a binary reward; the second part constrains the distance between two consecutive states to be close to 0. However, they assume that a goal is provided in the state space, and they lack more elaborate experiments showing the relevance of their approach. Ghosh et al. (2019) make use of intrinsic motivation in another way.
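The two-part reward just described can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: `phi` is a learned state encoder, and `eps` and `lam` are illustrative hyperparameters.

```python
import numpy as np

def intrinsic_reward(phi, s, s_next, goal, eps=0.1, lam=0.1):
    """Two-part reward: binary goal-reaching term plus a penalty keeping
    consecutive embedded states close, so that L2 distance in the learned
    space tracks the number of actions between states."""
    z, z_next, z_goal = phi(s), phi(s_next), phi(goal)
    reached = float(np.linalg.norm(z_next - z_goal) < eps)  # binary goal-reaching part
    step = np.linalg.norm(z_next - z)                       # consecutive-state distance
    return reached - lam * step ** 2                        # push step length towards 0

identity = lambda s: np.asarray(s, dtype=float)             # placeholder encoder
r = intrinsic_reward(identity, [0.0, 0.0], [0.0, 0.5], [0.0, 0.5])
print(round(r, 3))  # → 0.975
```

In the actual method the encoder `phi` is trained jointly with the policy; the identity encoder here only makes the arithmetic visible.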
They assume that a goal-conditioned policy trained with intrinsic motivation is available (where the goal space is the state space). They then use trajectories of this policy to learn a state representation in which the L2 distance between two goal states corresponds to the expected KL-divergence of the corresponding policies over a uniform distribution of states. Interestingly, they manage to distinguish subsets of the state space which are separated by a bottleneck. One feature for one object of interaction. BIB001 try to learn independent factors of variation in the embedding space. The goal is presented as a variation of one feature in the embedded space, which is learned simultaneously with the policy. For example, such a feature can be the light of a room, and a policy relative to this factor of variation can be the act of switching it off or on. The reward is thus the maximization of the chosen variation factor in comparison with the other variation factors. The agent manages to assimilate a factor of variation only to a deterministic static object, and it is not clear how the agent could generalize to moving objects. This approach has been further extended to also represent factors of variation of uncontrollable features (an unalterable barrier, for example) BIB003 . To conclude, although most of the work does not consider the learning of a state representation as a task in itself BIB002 , it allows the construction of a state space with meaningful properties. We strongly believe that an active learning process is required to understand the properties of the world. Interesting events exhibiting these properties are rare with random actions, whereas they can be common with specific goals. Typically, it is easier for an agent to distinguish two different objects if it tries to move them independently; it would take a lot longer if it just waited for such a movement to accidentally happen. As another example, an agent can only grasp the concept of distance by moving towards objects.
A survey on intrinsic motivation in reinforcement learning <s> Skill abstraction <s> Formal exploration approaches in model-based reinforcement learning estimate the accuracy of the currently learned model without consideration of the empirical prediction error. For example, PAC-MDP approaches such as R-MAX base their model certainty on the amount of collected data, while Bayesian approaches assume a prior over the transition dynamics. We propose extensions to such approaches which drive exploration solely based on empirical estimates of the learner's accuracy and learning progress. We provide a "sanity check" theoretical analysis, discussing the behavior of our extensions in the standard stationary finite state-action case. We then provide experimental studies demonstrating the robustness of these exploration measures in cases of non-stationary environments or where original approaches are misled by wrong domain assumptions. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Skill abstraction <s> Neuroscientific and psychological data suggest a close link between affordance and mirror systems in the brain. However, we still lack a full understanding of both the individual systems and their interactions. Here, we propose that the architecture and functioning of the two systems is best understood in terms of two challenges faced by complex organisms, namely: (a) the need to select among multiple affordances and possible actions dependent on context and high-level goals and (b) the exploitation of the advantages deriving from a hierarchical organisation of behaviour based on actions and action-goals. We first review and analyse the psychological and neuroscientific literature on the mechanisms and processes organisms use to deal with these challenges. We then analyse existing computational models thereof. Finally we present the design of a computational framework that integrates the reviewed knowledge. 
The framework can be used both as a theoretical guidance to interpret empirical data and design new experiments, and to design computational models addressing specific problems debated in the literature. <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> Skill abstraction <s> Non-player characters (NPCs) in games are traditionally hard-coded or dependent on pre-specified goals, and consequently struggle to behave sensibly in ever-changing and possibly unpredictable game worlds. To make them fit for new developments in procedural content generation, we introduce the principle of Coupled Empowerment Maximisation as an intrinsic motivation for game NPCs. We focus on the development of a general game companion, designed to support the player in achieving their goals. We evaluate our approach against three intuitive and abstract companion duties. We develop dedicated scenarios for each duty in a dungeon-crawler game testbed, and provide qualitative evidence that the emergent NPC behaviour fulfils these duties. We argue that this generic approach can speed up NPC AI development, improve automatic game evolution and introduce NPCs to full game-generation systems. <s> BIB003 </s> A survey on intrinsic motivation in reinforcement learning <s> Skill abstraction <s> Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. 
Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments show that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks. <s> BIB004 </s> A survey on intrinsic motivation in reinforcement learning <s> Skill abstraction <s> We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning. In such hierarchical structures, a higher-level controller solves tasks by iteratively communicating goals which a lower-level policy is trained to reach. Accordingly, the choice of representation -- the mapping of observation space to goal space -- is crucial. To study this problem, we develop a notion of sub-optimality of a representation, defined in terms of expected reward of the optimal hierarchical policy using this representation. We derive expressions which bound the sub-optimality and show how these expressions can be translated to representation learning objectives which may be optimized in practice. Results on a number of difficult continuous-control tasks show that our approach to representation learning yields qualitatively better representations as well as quantitatively better hierarchical policies, compared to existing methods (see videos at this https URL). <s> BIB005 </s> A survey on intrinsic motivation in reinforcement learning <s> Skill abstraction <s> We explore methods for option discovery based on variational inference and make two algorithmic contributions. 
First: we highlight a tight connection between variational option discovery methods and variational autoencoders, and introduce Variational Autoencoding Learning of Options by Reinforcement (VALOR), a new method derived from the connection. In VALOR, the policy encodes contexts from a noise distribution into trajectories, and the decoder recovers the contexts from the complete trajectories. Second: we propose a curriculum learning approach where the number of contexts seen by the agent increases whenever the agent's performance is strong enough (as measured by the decoder) on the current set of contexts. We show that this simple trick stabilizes training for VALOR and prior variational option discovery methods, allowing a single agent to learn many more modes of behavior than it could with a fixed context distribution. Finally, we investigate other topics related to variational option discovery, including fundamental limitations of the general approach and the applicability of learned options to downstream tasks. <s> BIB006 </s> A survey on intrinsic motivation in reinforcement learning <s> Skill abstraction <s> Learning to control an environment without hand-crafted rewards or expert data remains challenging and is at the frontier of reinforcement learning research. We present an unsupervised learning algorithm to train agents to achieve perceptually-specified goals using only a stream of observations and actions. Our agent simultaneously learns a goal-conditioned policy and a goal achievement reward function that measures how similar a state is to the goal state. This dual optimization leads to a co-operative game, giving rise to a learned reward function that reflects similarity in controllable aspects of the environment instead of distance in the space of observations. We demonstrate the efficacy of our agent to learn, in an unsupervised manner, to reach a diverse set of goals on three domains -- Atari, the DeepMind Control Suite and DeepMind Lab. 
<s> BIB007 </s> A survey on intrinsic motivation in reinforcement learning <s> Skill abstraction <s> In this work, we take a representation learning perspective on hierarchical reinforcement learning, where the problem of learning lower layers in a hierarchy is transformed into the problem of learning trajectory-level generative models. We show that we can learn continuous latent representations of trajectories, which are effective in solving temporally extended and multi-stage problems. Our proposed model, SeCTAR, draws inspiration from variational autoencoders, and learns latent representations of trajectories. A key component of this method is to learn both a latent-conditioned policy and a latent-conditioned model which are consistent with each other. Given the same latent, the policy generates a trajectory which should match the trajectory predicted by the model. This model provides a built-in prediction mechanism, by predicting the outcome of closed loop policy behavior. We propose a novel algorithm for performing hierarchical RL with this model, combining model-based planning in the learned latent space with an unsupervised exploration objective. We show that our model is effective at reasoning over long horizons with sparse rewards for several simulated tasks, outperforming standard reinforcement learning methods and prior methods for hierarchical reasoning, model-based planning, and exploration. <s> BIB008
Skill abstraction is the ability of an agent to learn a representation of diverse skills in an unsupervised way. The skills or goals generated by the agent are options (cf. §3.1). In comparison with multi-objective RL BIB001 , skills are here generated in an unsupervised way. In these works, the agent generally learns on two timescales: on the one hand, it generates options and learns the associated intra-option policies using an intrinsic reward; on the other hand, if a global objective (or task) exists, it learns to use its skills to achieve this global objective using the extrinsic reward associated with the task. One way to learn intra-option policies is to use UVFA (cf. §2.2). It is also possible to use HER (§2.2), since the reward function R(s, a, s′, g) can be computed without additional interactions when only an intrinsic reward is used. The key aspects are, first, to learn interesting skills which can be transferred between several tasks; these skills can be even more transferable if they are uncorrelated with the learned task [Heess et al., 2016] . Second, the temporal abstraction of executed actions through acquired skills makes the learning process easier. Let us take as an example MuJoCo BIB002 , a commonly used environment in works related to skills. In this environment, the joints of a robot can be controlled by an agent to achieve, for example, locomotion tasks. The idea of some works is to generate skills like move forward or move backward with an intrinsic reward; these skills can then be used for a navigation task. In the following, we will first present several works incorporating an expert reward in a hierarchical algorithm, demonstrating the potential of the approach. Then we will study two main research directions on the self-generation of goals: the first one uses the state space to generate goals and compute the intrinsic reward; the second one uses information theory to generate skills based on a diversity heuristic.
Between expert rewards and intrinsic rewards. Seminal works show the interest of hierarchically decomposing actions. Among them, Kulkarni et al. (2016a) present the hierarchical-DQN, in which the goal representation is expertly defined with tuples (entity, relation, entity2). An entity can be an object on the screen or an agent, and the relation notably refers to a distance. Therefore, the goal can be for the agent to reach an object. The reward is one if the goal is reached, zero otherwise. They show that it can help the learning process, particularly when rewards are sparse as in Montezuma's Revenge. However, by avoiding the learning of a skill representation, this obfuscates the main problem: it is difficult to choose which features are interesting enough to be considered as goals in a large state space. Other works demonstrate the potential of the approach using auxiliary objectives specific to the task. Formally, when the last state of the option is considered, these approaches compute the intrinsic reward with: R(s_f, g_t) = −D(f(s_f), g_t), where D is a distance function, g_t is the goal chosen by an inter-option policy, and f a representation function which can be the identity. When the direction is taken as intrinsic reward, it can be described with: R(s_f, g_t) = d_cos(f(s_f) − f(s_i), g_t), where d_cos denotes the cosine similarity, s_f is the agent's state at the end of the option, and s_i its state at the beginning of the option. BIB005 try to bound the sub-optimality of the goal representation, giving theoretical guarantees. The agent turns out to be able to learn everywhere by selecting the important features for the task. Mutual information between goals and trajectories. The second approach does not need a distance function but rather consists in maximizing the mutual information between a goal and its associated trajectory.
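The two reward forms above can be sketched as follows. This is an illustrative implementation assuming an identity encoder f and a goal already expressed in the representation space; the exact formulas vary across the cited works.

```python
import numpy as np

def distance_reward(f, s_final, goal):
    """Negative distance between the encoded final state and the goal."""
    return -float(np.linalg.norm(f(s_final) - goal))

def direction_reward(f, s_init, s_final, goal):
    """Cosine similarity between the option's displacement and the goal direction."""
    d = f(s_final) - f(s_init)
    return float(d @ goal / (np.linalg.norm(d) * np.linalg.norm(goal) + 1e-12))

identity = lambda s: np.asarray(s, dtype=float)   # f can be the identity
goal = np.array([0.0, 3.0])
print(distance_reward(identity, [0.0, 1.0], goal))                         # → -2.0
print(round(direction_reward(identity, [0.0, 0.0], [0.0, 1.0], goal), 6))  # → 1.0
```

The distance form rewards ending the option close to the goal, while the direction form only rewards moving along the goal direction, regardless of the magnitude of the displacement.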
With τ the trajectory during the option, s_i the initial state, f a function selecting a part of the trajectory, and g_t a goal provided by an inter-option policy or sampled uniformly, we can compute the intrinsic reward as: R(g_t, τ) = log q(g_t|f(τ)). Informally, it is about learning skills according to the ability of the agent to discern them from the trajectory (i.e. the covered states) of the option's policy. The agent goes towards areas for which it can guess which option it has chosen. In other words, it enforces the building of diverse policies. SNN4HRL BIB004 learns skills by maximizing equation 23. Each goal is uniformly generated, so maximizing this equation amounts to minimizing H(g|f(τ)) (cf. equation 8), which is equivalent to maximizing the intrinsic reward log q(g|f(τ)) (cf. equation 19). In order to compute the probability q, the state space is discretized into partitions, making it possible to count the number of visits of each partition for the current objective g. With this count, the agent can compute the probability q with a simple normalization; f(τ) assigns the states of the trajectory to their partition. Then, once the agent has learned the skills, it is integrated into a hierarchical structure in which a manager, or inter-option policy, chooses the goals to accomplish. Let us notice that the goal space is here discrete. VALOR BIB006 and DIAYN reflect the same idea, but differ from previous work firstly by using a neural network rather than a discretization to compute log q(g|f(τ)), and secondly in the choice of f as a part of the trajectory of the skill in the environment. The agent manages to learn a locomotion task in a state space of more than 100 dimensions. Furthermore, they show the interest of this method as a pre-training for hierarchical reinforcement learning and as an initialization for learning a task. DIAYN chooses f(τ) as a state of the trajectory and computes the intrinsic reward at every step of the trajectory.
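The count-based discriminator described above for SNN4HRL can be sketched in a few lines: q(g | partition) is estimated by normalized visit counts, and the intrinsic reward is log q(g | f(τ)). Sizes, the +1 smoothing, and names are illustrative assumptions.

```python
import numpy as np

n_skills, n_bins = 3, 10
counts = np.ones((n_bins, n_skills))   # visit counts per partition, +1 smoothing

def update(bin_idx, skill):
    """Record one visit of `skill` to state partition `bin_idx` (f(τ))."""
    counts[bin_idx, skill] += 1

def intrinsic_reward(bin_idx, skill):
    """r = log q(g | partition), with q obtained by simple normalization."""
    q = counts[bin_idx] / counts[bin_idx].sum()
    return float(np.log(q[skill]))

# Skill 0 repeatedly visits partition 2: it becomes easy to discern there,
# so its reward in that partition approaches 0 while other skills are penalized.
for _ in range(50):
    update(2, 0)
print(intrinsic_reward(2, 0) > intrinsic_reward(2, 1))  # → True
```

VALOR and DIAYN replace this counting scheme with a neural discriminator, but the reward signal keeps the same log q(g|f(τ)) form.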
VALOR distinguishes itself by considering f(τ) as an aggregate of all the states of the trajectory and by assigning the reward at the end of the trajectory. With VALOR, the agent manages to learn up to 10 different skills, and up to 100 by gradually increasing the number of goals with a curriculum BIB006 . VIC BIB003 already carried out some experiments with the same approach, but on simpler environments and without exhibiting the same diversity of skills. Work similar to VALOR can be found in ], but it was tested on a simpler environment. Three main limits of these approaches can be identified. Firstly, it is hard to see how these methods could be applied to environments different from MuJoCo, which seems particularly suited to them since the agent often falls into the same state (on the floor) when the goal is uninteresting. Secondly, the agent is unable to learn to generate goals without unlearning its skills; therefore, the goal distribution generated by the agent has to stay uniform BIB003 . Thirdly, none of these approaches tries to use a continuous embedding of trajectories. DISCERN BIB007 tackles the last issue and considers the goal space as a state space. It approximates log q(g|c) by trying to classify the final state of the trajectory as the right goal among other goals selected from the same distribution as the real one. This is like learning to find the closest goal to the final state from a set of goals. SeCTAR BIB008 extends this approach differently: the agent learns to encode trajectories into a latent space and to decode them in the same way as a VAE. In addition, the trajectories generated by the latent-conditioned policy and those of the decoder learn to be consistent with each other. The advantage of this approach is that the decoder can serve as a forward model at the option level; doing so, it manages to obtain interesting results on simple environments using a planning method.
The major limitations are the use of recurrent neural networks, which are known to be computationally expensive, the use of two different policies (one exclusively used to explore and the other to change the trajectory distribution of the encoder-decoder), and the fact that the learning process is not carried out end-to-end. Their decoder is particularly computationally inefficient for planning since it predicts the entire trajectory of the closed-loop option. DADS maximizes a similar objective and uses model predictive control (MPC) to plan at the behavior level. To maximize the objective, they rather compute the reward as log q(s_{t+1}|g, s), which facilitates the use of a continuous goal space, but they rely on a stochastic parameterized distribution as predictive model and, as a result, it is not clear how well they perform without access to the (x, y) coordinates. Hausman et al. (2018) propose a way to learn these policies with an extension of the Retrace algorithm for off-policy learning, in the setting of multitask learning. They manage to learn several trajectories for one task by learning a distribution p(g|t), where t is the task. Although, after training on several tasks, learning a new distribution p(g|t′) is enough to solve a new task t′, this is not a bottom-up approach, it is not studied in a hierarchical application, and the pre-training tasks have to be related to the new task to find an optimal policy. It should be mentioned that some articles try to maximize a similar diversity objective with a predictive model and a distance function. However, this throws away some advantages of the approach, namely avoiding a distance function and handling stochastic skills. We stated in §4.2 that the empowerment of an agent improves as its skills become distinguishable. The work presented here implicitly increases the empowerment of an agent, from the option policy point of view.
Indeed, it maintains a high entropy on goals and associates a direction in the state space with each goal. Therefore, if a is an option, H(a|s) is maximal since the probability distribution is uniform, and H(a|s, s′) decreases as the agent learns to differentiate between options. Conclusion. To summarize, there are two main groups of works on the self-generation of goals. The first group considers its objectives as states; the advantage is then to have a continuous space enabling interpolation, while the disadvantage is that it requires finding the right comparison metric and the right way to compress the state space. Otherwise, the agent cannot let the inter-option policy produce high-dimensional actions and is unable to discern similar states from different states. The second group takes advantage of information theory to partition trajectories; the option space has a limited size, but intra-option policies suffer from catastrophic forgetting and the skills are more stochastic. Table 4 summarizes the classification of methods which learn a goal embedding.
A survey on intrinsic motivation in reinforcement learning <s> Curriculum learning <s> Publisher Summary Connectionist networks in which information is stored in weights on connections among simple processing units have attracted considerable interest in cognitive science. Much of the interest centers around two characteristics of these networks. First, the weights on connections between units need not be prewired by the model builder but rather may be established through training in which items to be learned are presented repeatedly to the network and the connection weights are adjusted in small increments according to a learning algorithm. Second, the networks may represent information in a distributed fashion. This chapter discusses the catastrophic interference in connectionist networks. Distributed representations established through the application of learning algorithms have several properties that are claimed to be desirable from the standpoint of modeling human cognition. These properties include content-addressable memory and so-called automatic generalization in which a network trained on a set of items responds correctly to other untrained items within the same domain. New learning may interfere catastrophically with old learning when networks are trained sequentially. The analysis of the causes of interference implies that at least some interference will occur whenever new learning may alter weights involved in representing old learning, and the simulation results demonstrate only that interference is catastrophic in some specific networks. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Curriculum learning <s> Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. 
Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. The novel algorithmic framework POWERPLAY (2011) continually searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Wow-effects are achieved by continually making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. POWERPLAY's search orders candidate pairs of tasks and solver modifications by their conditional computational (time & space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. The computational costs of validating new tasks need not grow with task repertoire size. POWERPLAY's ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Goedel's sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. POWERPLAY may be viewed as a greedy but practical implementation of basic principles of creativity. A first experimental analysis can be found in separate papers [53,54]. <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> Curriculum learning <s> Statistical spoken dialogue systems have the attractive property of being able to be optimised from data via interactions with real users. 
However in the reinforcement learning paradigm the dialogue manager (agent) often requires significant time to explore the state-action space to learn to behave in a desirable manner. This is a critical issue when the system is trained on-line with real users where learning costs are expensive. Reward shaping is one promising technique for addressing these concerns. Here we examine three recurrent neural network (RNN) approaches for providing reward shaping information in addition to the primary (task-orientated) environmental feedback. These RNNs are trained on returns from dialogues generated by a simulated user and attempt to diffuse the overall evaluation of the dialogue back down to the turn level to guide the agent towards good behaviour faster. In both simulated and real user scenarios these RNNs are shown to increase policy learning speed. Importantly, they do not require prior knowledge of the user's goal. <s> BIB003 </s> A survey on intrinsic motivation in reinforcement learning <s> Curriculum learning <s> Autonomous multiple tasks learning is a fundamental capability to develop versatile artificial agents that can act in complex environments. In real-world scenarios, tasks may be interrelated (or "hierarchical") so that a robot has to first learn to achieve some of them to set the preconditions for learning other ones. Even though different strategies have been used in robotics to tackle the acquisition of interrelated tasks, in particular within the developmental robotics framework, autonomous learning in this kind of scenarios is still an open question. Building on previous research in the framework of intrinsically motivated open-ended learning, in this work we describe how this question can be addressed working on the level of task selection, in particular considering the multiple interrelated tasks scenario as an MDP where the system is trying to maximise its competence over all the tasks. 
<s> BIB004 </s> A survey on intrinsic motivation in reinforcement learning <s> Curriculum learning <s> Learning about many things can provide numerous benefits to a reinforcement learning system. For example, learning many auxiliary value functions, in addition to optimizing the environmental reward, appears to improve both exploration and representation learning. The question we tackle in this paper is how to sculpt the stream of experience---how to adapt the system's behaviour---to optimize the learning of a collection of value functions. A simple answer is to compute an intrinsic reward based on the statistics of each auxiliary learner, and use reinforcement learning to maximize that intrinsic reward. Unfortunately, implementing this simple idea has proven difficult, and thus has been the focus of decades of study. It remains unclear which of the many possible measures of learning would work well in a parallel learning setting where environmental reward is extremely sparse or absent. In this paper, we investigate and compare different intrinsic reward mechanisms in a new bandit-like parallel-learning testbed. We discuss the interaction between reward and prediction learners and highlight the importance of introspective prediction learners: those that increase their rate of learning when progress is possible, and decrease when it is not. We provide a comprehensive empirical comparison of 15 different rewards, including well-known ideas from reinforcement learning and active learning. Our results highlight a simple but seemingly powerful principle: intrinsic rewards based on the amount of learning can generate useful behaviour, if each individual learner is introspective. <s> BIB005 </s> A survey on intrinsic motivation in reinforcement learning <s> Curriculum learning <s> Goal-conditioned policies are used in order to break down complex reinforcement learning (RL) problems by using subgoals, which can be defined either in state space or in a latent feature space. 
This can increase the efficiency of learning by using a curriculum, and also enables simultaneous learning and generalization across goals. A crucial requirement of goal-conditioned policies is to be able to determine whether the goal has been achieved. Having a notion of distance to a goal is thus a crucial component of this approach. However, it is not straightforward to come up with an appropriate distance, and in some tasks, the goal space may not even be known a priori. In this work we learn a distance-to-goal estimate which is computed in terms of the number of actions that would need to be carried out in a self-supervised approach. Our method solves complex tasks without prior domain knowledge in the online setting in three different scenarios in the context of goal-conditioned policies a) the goal space is the same as the state space b) the goal space is given but an appropriate distance is unknown and c) the state space is accessible, but only a subset of the state space represents desired goals, and this subset is known a priori. We also propose a goal-generation mechanism as a secondary contribution. <s> BIB006
So far we have seen that option learning improves both exploration, when learned in a bottom-up way, and credit assignment; that the use of motivations may help to build a state feature space with specific and helpful properties; and that intrinsic motivation may guide the agent towards novel states. However, these methods are not incompatible with each other. Goal-parameterized policies could benefit from both the properties of the state space and the exploration process at the inter-option policy level. Here, we emphasize some works at the intersection of these intrinsic motivations; the point is usually to explore the parameterized goal space in order to quickly learn a set of skills. The objective here is to learn to choose a goal which is neither too hard nor too easy, so as to facilitate the learning of the agent. Specifically, this kind of work tries to learn an efficient curriculum among the tasks of an agent; the counterpart is that such work generally assumes more prior knowledge. To be efficient, the curriculum must avoid forgetting what the agent has previously learned, and avoid trying to learn unlearnable or fully learned tasks. Modelling the problem of choosing a task. A common way to choose a task to learn is to use the learning progress, which rewards the agent only if it is making progress. It is generally defined as the first-order derivative of the performance P: LP(o_T) = ∂P(o_T)/∂T, where o_T is a task and T is the number of times the task has been chosen. Teacher-Student BIB001 models the problem of choosing a task as a non-stationary multi-armed bandit which aims to improve the learning progress of a task-specialized policy. The agent chooses an external task among a set of tasks and tries to solve it with a task-specific reward. The authors propose to evaluate the learning progress with the slope coefficient of the reward progress, computed with a simple linear regression through recent interactions.
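The Teacher-Student selection rule can be sketched as follows: the slope of a linear regression over a task's recent returns estimates its learning progress, and an ε-greedy non-stationary bandit picks the task with the largest absolute progress. The window size, ε and the toy reward histories below are illustrative assumptions, not choices from the original paper:

```python
import random
import numpy as np

def learning_progress(rewards, window=10):
    """Slope of a linear least-squares fit through the last
    `window` episode returns of a task (Teacher-Student style)."""
    r = np.asarray(rewards[-window:], dtype=float)
    if len(r) < 2:
        return 0.0
    t = np.arange(len(r))
    slope, _intercept = np.polyfit(t, r, deg=1)
    return float(slope)

def choose_task(histories, eps=0.1):
    """Non-stationary bandit: pick the task with the largest
    absolute learning progress, with eps-greedy exploration."""
    if random.random() < eps:
        return random.randrange(len(histories))
    progress = [abs(learning_progress(h)) for h in histories]
    return int(np.argmax(progress))

# Toy reward histories: task 0 is plateaued, task 1 is improving.
histories = [
    [0.9] * 10,                    # already mastered, flat returns
    [0.1 * i for i in range(10)],  # returns still rising
]
picks = [choose_task(histories, eps=0.0) for _ in range(5)]
print(picks)  # the improving task is always selected
```

The absolute value matters: a task whose performance is collapsing (forgetting) also yields high |progress| and gets revisited.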
However, tasks are just different settings of the same carefully designed objective. For example, the agent has to reach a similar target in a larger and larger labyrinth, rather than going towards a target to pick up an object and use it somewhere else. This makes the policy easily generalisable. CURIOUS [Colas et al., 2019] models the problem likewise, adds diverse hand-made tasks (for example, reaching a cube, pushing it onto another) and integrates hindsight experience replay. The learning progress is computed as the difference of rewards earned between two evaluation steps, where an evaluation step consists of the mean reward obtained over the previous l episodes of the task. Even if the agent manages to learn across different tasks, it can only take advantage of hindsight experience replay as long as task rewards overlap, or are at least close to each other. IMGEP also describes an RL framework with a hand-made goal space where the agent selects the goal with a similar learning progress measure. M-GRAIL BIB004 treats interrelated tasks by modelling the problem as Markovian and adding information on already solved goals into the high-level state. There are several issues: the agent learns a different policy per task, relies on low-level reward prediction, and its generalization to more diverse tasks is unclear. Among all these works, it appears that computing the learning progress is difficult and requires an evaluation step. Although not used with an RL agent, other work proposes multiple further methods to compute the learning progress for learning to choose a task, by leveraging the distribution model of a stochastic neural network. In particular, it introduces the variational complexity gain, which can be measured as the difference between two consecutive KL-divergences between a fixed prior and the posterior over the parameters of the model (the task learner). In the same way, BIB005 introduce the weight change with an adaptive learning rate.
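The variational complexity gain can be made concrete for the simple case of a diagonal-Gaussian prior and posterior over the learner's parameters, where the KL-divergence has a closed form; the gain is the change in that divergence across one training step. All parameter values below are purely illustrative:

```python
import numpy as np

def kl_diag_gauss(mu_q, sig_q, mu_p, sig_p):
    """KL( N(mu_q, diag(sig_q^2)) || N(mu_p, diag(sig_p^2)) ),
    closed form for diagonal Gaussians, summed over dimensions."""
    return float(np.sum(
        np.log(sig_p / sig_q)
        + (sig_q**2 + (mu_q - mu_p)**2) / (2.0 * sig_p**2)
        - 0.5
    ))

# Fixed prior over the task learner's parameters.
mu_p, sig_p = np.zeros(3), np.ones(3)

# Posterior before and after one training step on a task.
post_before = (np.array([0.1, 0.0, 0.0]), np.array([0.9, 1.0, 1.0]))
post_after  = (np.array([0.4, 0.2, 0.0]), np.array([0.7, 0.9, 1.0]))

kl_before = kl_diag_gauss(*post_before, mu_p, sig_p)
kl_after  = kl_diag_gauss(*post_after,  mu_p, sig_p)

# Variational complexity gain: how far the posterior moved away
# from the fixed prior during this step (positive here: the model
# absorbed new information, i.e. the task produced learning).
vcg = kl_after - kl_before
print(vcg > 0)
```

A task with near-zero gain is either already mastered or not being learned, which is exactly what a curriculum should deprioritize.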
CLIC extends and improves over CURIOUS by no longer considering predefined tasks, but instead using another intrinsic reward to manipulate objects. This reward is computed as the distance, for only one specific feature or object, between the current state and the final state of the agent. However, this work is based on the assumption that the state space is disentangled, i.e. each feature corresponds to the state of one object. Adversarial training. In the paradigm of adversarial training, two modules face each other: the first one, the generator, tries to fool the second, the discriminator, which must avoid being mistaken. As they progress, the generator proposes more and more convincing data, whereas the discriminator becomes harder and harder to fool. Goal-GAN learns to generate more and more complex continuous goals with a Generative Adversarial Network (GAN), in order to make the policy progressively learn to go everywhere. The generator of the GAN learns to produce goals of intermediate difficulty and the discriminator learns to distinguish these goals from others. Intermediate difficulty characterizes a goal at which the agent succeeds only some of the time. In this article, the intrinsic reward relies on a hand-engineered indicator function which assigns a binary reward if the agent is close to a goal. Also, the parameterized goal space is assumed to be known; for example, they present the goal space as the (x, y) coordinates whereas the state space is larger. BIB006 extend Goal-GAN and try to learn an embedding space where the L2 distance between two states is proportional to the number of actions needed to go from one state to the other (similarly to work in section 4.3). To do so, they train a predictor of state distance on states separated by random actions. BIB003 apply an adversarial method to learn a goal space which is a compressed state space.
During a pre-training step, an agent (the generator) tries to go into the state that another agent (the discriminator) went into, whereas the discriminator learns to go into areas the generator cannot reach. In other words, the generator tries to produce trajectories that the discriminator cannot differentiate from its own. Thus, the reward of the generator is one if a distance function between the discriminator's final state and its own final state is under a predefined threshold; the reward is reversed for the discriminator. Due to the architecture used, the generator implicitly learns to compress the goal space. This goal space can then be used to solve a downstream task in a hierarchical way. In addition, their mechanism can be used to improve exploration. However, this method is still limited by the need for an expert to design the reward function. Conclusion. To conclude, exploration can also be decisive in a goal space. We identified two methods for doing so, which we summarize in table 5: the first one models the problem of choosing a task as a multi-armed bandit or as an MDP; the second one uses adversarial training to generate adequate tasks. It has been shown that this can significantly accelerate skill acquisition. However, most of this work relies on strong assumptions to measure the accomplishment of the option. We believe that further work will have to relax these assumptions. An interesting inspiration could be taken from PowerPlay BIB002 , a theoretical and general framework which goes beyond such assumptions by continually searching for new tasks; however, it still lacks a concrete application.
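Setting aside the GAN machinery, the "intermediate difficulty" criterion used by Goal-GAN amounts to keeping only goals whose empirical success rate lies strictly between two thresholds. The sketch below uses illustrative thresholds and hand-picked success histories:

```python
def label_goid(success_history, r_min=0.1, r_max=0.9):
    """Label a goal as being of intermediate difficulty (GOID):
    the empirical success rate over recent attempts must lie
    strictly between r_min and r_max (illustrative thresholds)."""
    if not success_history:
        return False  # never attempted: difficulty unknown
    rate = sum(success_history) / len(success_history)
    return r_min < rate < r_max

too_easy = [1, 1, 1, 1, 1]   # always solved -> no learning signal
too_hard = [0, 0, 0, 0, 0]   # never solved  -> not learnable yet
frontier = [1, 0, 1, 0, 0]   # sometimes solved -> keep as a goal

print(label_goid(too_easy), label_goid(too_hard), label_goid(frontier))
```

Only the "frontier" goals become positive training examples for the goal generator, so the sampled goals track the boundary of the agent's current competence.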
A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> Publisher Summary Connectionist networks in which information is stored in weights on connections among simple processing units have attracted considerable interest in cognitive science. Much of the interest centers around two characteristics of these networks. First, the weights on connections between units need not be prewired by the model builder but rather may be established through training in which items to be learned are presented repeatedly to the network and the connection weights are adjusted in small increments according to a learning algorithm. Second, the networks may represent information in a distributed fashion. This chapter discusses the catastrophic interference in connectionist networks. Distributed representations established through the application of learning algorithms have several properties that are claimed to be desirable from the standpoint of modeling human cognition. These properties include content-addressable memory and so-called automatic generalization in which a network trained on a set of items responds correctly to other untrained items within the same domain. New learning may interfere catastrophically with old learning when networks are trained sequentially. The analysis of the causes of interference implies that at least some interference will occur whenever new learning may alter weights involved in representing old learning, and the simulation results demonstrate only that interference is catastrophic in some specific networks. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. 
It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics. <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across observations. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use sequential density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary sequential density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into intrinsic rewards and obtain significantly improved exploration in a number of hard games, including the infamously difficult Montezuma's Revenge. 
<s> BIB003 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> Non-player characters (NPCs) in games are traditionally hard-coded or dependent on pre-specified goals, and consequently struggle to behave sensibly in ever-changing and possibly unpredictable game worlds. To make them fit for new developments in procedural content generation, we introduce the principle of Coupled Empowerment Maximisation as an intrinsic motivation for game NPCs. We focus on the development of a general game companion, designed to support the player in achieving their goals. We evaluate our approach against three intuitive and abstract companion duties. We develop dedicated scenarios for each duty in a dungeon-crawler game testbed, and provide qualitative evidence that the emergent NPC behaviour fulfils these duties. We argue that this generic approach can speed up NPC AI development, improve automatic game evolution and introduce NPCs to full game-generation systems. <s> BIB004 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks.
States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration. <s> BIB005 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as epsilon-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.
<s> BIB006 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. The primary difficulty arises due to insufficient exploration, resulting in an agent being unable to learn robust value functions. Intrinsically motivated agents can explore new behavior for its own sake rather than to directly solve problems. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning. A top-level value function learns a policy over intrinsic goals, and a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse, delayed feedback: (1) a complex discrete stochastic decision process, and (2) the classic ATARI game `Montezuma's Revenge'. <s> BIB007 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. 
Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch. Demo video and code available at this https URL <s> BIB008 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as $\epsilon$-greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent's surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the $k$-step learning progress. 
We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques. <s> BIB009 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires. Furthermore, to provide the requisite level of generality, these skills must handle raw sensory input such as images. In this paper, we propose an algorithm that acquires such general-purpose skills by combining unsupervised representation learning and reinforcement learning of goal-conditioned policies. Since the particular goals that might be required at test-time are not known in advance, the agent performs a self-supervised "practice" phase where it imagines goals and attempts to achieve them. We learn a visual representation with three distinct purposes: sampling goals for self-supervised practice, providing a structured transformation of raw sensory inputs, and computing a reward signal for goal reaching. We also propose a retroactive goal relabeling scheme to further improve the sample-efficiency of our method. Our off-policy algorithm is efficient enough to learn policies that operate on raw image observations and goals for a real-world robotic system, and substantially outperforms prior techniques. <s> BIB010 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> Reinforcement learning algorithms rely on carefully engineering environment rewards that are extrinsic to the agent. 
However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent. Curiosity is a type of intrinsic reward function which uses prediction error as reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, and a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many game environments. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. Game-play videos and code are at https://pathak22.github.io/large-scale-curiosity/ <s> BIB011 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. 
To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level. <s> BIB012 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> Rewards are sparse in the real world and most today's reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself - thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. Such bonus is summed up with the real task reward - making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is done based on how many environment steps it takes to reach the current observation from those in memory - which incorporates rich information about environment dynamics. This allows us to overcome the known"couch-potato"issues of prior work - when the agent finds a way to instantly gratify itself by exploiting actions which lead to hardly predictable consequences. We test our approach in visually rich 3D environments in ViZDoom, DMLab and MuJoCo. In navigational tasks from ViZDoom and DMLab, our agent outperforms the state-of-the-art curiosity method ICM. In MuJoCo, an ant equipped with our curiosity module learns locomotion out of the first-person-view curiosity only. 
<s> BIB013 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> Deep reinforcement learning methods traditionally struggle with tasks where environment rewards are particularly sparse. One successful method of guiding exploration in these domains is to imitate trajectories provided by a human demonstrator. However, these demonstrations are typically collected under artificial conditions, i.e. with access to the agent's exact environment setup and the demonstrator's action and reward trajectories. Here we propose a two-stage method that overcomes these limitations by relying on noisy, unaligned footage without access to such data. First, we learn to map unaligned videos from multiple sources to a common representation using self-supervised objectives constructed over both time and modality (i.e. vision and sound). Second, we embed a single YouTube video in this representation to construct a reward function that encourages an agent to imitate human gameplay. This method of one-shot imitation allows our agent to convincingly exceed human-level performance on the infamously hard exploration games Montezuma's Revenge, Pitfall! and Private Eye for the first time, even if the agent is not presented with any environment rewards. <s> BIB014 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> In this work, we take a representation learning perspective on hierarchical reinforcement learning, where the problem of learning lower layers in a hierarchy is transformed into the problem of learning trajectory-level generative models. We show that we can learn continuous latent representations of trajectories, which are effective in solving temporally extended and multi-stage problems. Our proposed model, SeCTAR, draws inspiration from variational autoencoders, and learns latent representations of trajectories. 
A key component of this method is to learn both a latent-conditioned policy and a latent-conditioned model which are consistent with each other. Given the same latent, the policy generates a trajectory which should match the trajectory predicted by the model. This model provides a built-in prediction mechanism, by predicting the outcome of closed loop policy behavior. We propose a novel algorithm for performing hierarchical RL with this model, combining model-based planning in the learned latent space with an unsupervised exploration objective. We show that our model is effective at reasoning over long horizons with sparse rewards for several simulated tasks, outperforming standard reinforcement learning methods and prior methods for hierarchical reasoning, model-based planning, and exploration. <s> BIB015 </s> A survey on intrinsic motivation in reinforcement learning <s> Limitations and challenges of the methods <s> Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. 
This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration. <s> BIB016
Much work is limited by challenges outside the scope of RL, such as the performance of density models BIB003 BIB002 or predictive models [Nachum et al., 2018; BIB010], or the difficulty of approximating the mutual information between two continuous random variables BIB004 . These limitations are beyond the scope of this article, and we focus here only on challenges related to RL. Despite the heterogeneity of work on intrinsic motivation in RL and the specific limitations of each method, we select and present in this section five major issues and challenges related to these approaches. Environment stochasticity. Much of the work in section 4.1 (related to the exploration problem) builds its reward on prediction error rather than on the improvement of prediction error (see §7.2 for a thorough analysis). This discrepancy explains the difficulty several works have in handling the white-noise effect BIB011 or, more generally, the stochasticity of the environment. Some articles from the state of the art handle this issue (see table 3), but each of them has drawbacks. ICM BIB008 cannot differentiate local stochasticity from long-term control. Count-based methods BIB002 BIB005 BIB012 can only handle regular stochasticity. To illustrate the concept of regular stochasticity, let us assume that one (state, action) tuple can lead to two very different states with 50% probability each. The algorithm can count the number of visits to both states; it would then simply take twice as long for the bonus to decay enough that the agent stops being attracted. However, if the state is a newly, randomly generated one every time, it is not clear how these methods could resist the temptation of going into this area. State comparison BIB013 relies on a large number of comparisons between states, and it is unclear how it could scale to a larger number of states. can avoid distractors on the condition that the agent finds extrinsic rewards.
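The decay of count-based bonuses, and their failure on freshly generated states, can be sketched in a few lines. This is a tabular illustration only; actual pseudo-count methods BIB002 replace the table with a density model, and the β scale and string state keys are illustrative assumptions:

```python
from collections import defaultdict
import math

class CountBonus:
    """Tabular count-based exploration bonus: r_int(s) = beta / sqrt(N(s)).
    A sketch of the idea behind pseudo-count methods; real pseudo-counts
    derive N(s) from a learned density model rather than a table."""
    def __init__(self, beta=1.0):
        self.counts = defaultdict(int)
        self.beta = beta

    def reward(self, state):
        self.counts[state] += 1
        return self.beta / math.sqrt(self.counts[state])

bonus = CountBonus()
print(bonus.reward("s"))   # 1.0 on the first visit
print(bonus.reward("s"))   # ~0.707 on the second: the bonus decays
# A state that is randomly regenerated at every visit never repeats,
# so it always yields the maximal bonus -- the stochasticity trap:
print(bonus.reward("s_rand_1"), bonus.reward("s_rand_2"))  # 1.0 1.0
```

With a 50/50 split between two successors, each branch accumulates counts at half speed, which is why the decay merely takes twice as long; a branch that never repeats a state never decays at all.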
Information gain BIB006 BIB009 seems particularly adequate, since the agent only considers progress in the learning of the dynamics, and the speed of progress is related to the degree of stochasticity. However, it is difficult to apply in practice, as evidenced by its scores. Similarly, related exploration methods in the goal space (cf. §5.2) use a similar motivation, denoted as learning progress. We saw that it was particularly efficient but hard to compute. Additionally, BIB011 highlight that, even if an environment is not truly random, the agent can get stuck in some parts of the environment. To illustrate this, the authors placed a television in their 3D environment and added a specific action that randomly changes the displayed picture. It turns out that their agents (ICM and prediction with random features) kept looking at the picture. It would be interesting to test a broader class of algorithms in order to assess their ability to handle such a difficult stochastic setting. In fact, there is a lack of distinction between stochasticity in the environment and uncertainty about the environment dynamics, although the agent must act differently according to these two types. We will discuss this again in Section 7.4. Long-term exploration. To our knowledge, none of the existing approaches handles long-term information search. The most challenging benchmarks used in the current state of the art are DMLab and Montezuma's revenge, yet very sparse reward games such as Pitfall! are not currently addressed and should be investigated. In Pitfall!, the first reward is reached only after traversing multiple rooms, each of which requires specific action sequences to get through. State-of-the-art intrinsic motivation methods BIB002 achieve 0 mean reward in this game. In contrast, imitation RL methods BIB014 are insensitive to such specific rewards, and thus exceed intrinsic motivation methods with a mean reward of 37232 on Montezuma's revenge and 54912 on Pitfall!.
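The noisy-TV failure mode of prediction-error rewards discussed in the stochasticity paragraph can be reproduced with a toy predictor. The 1-D observation stream and running-mean model below are illustrative assumptions, not the actual ICM or RND architectures:

```python
import random
random.seed(0)

def mean_curiosity_reward(stochastic, steps=5000, lr=0.05):
    """Average prediction-error intrinsic reward on a 1-D observation
    stream. Deterministic stream: constant 0.7. Stochastic stream: fresh
    uniform noise each step (a 'noisy TV'). The running-mean predictor is
    an illustrative stand-in for a learned forward model."""
    pred, total = 0.0, 0.0
    for _ in range(steps):
        obs = random.random() if stochastic else 0.7
        total += (obs - pred) ** 2   # intrinsic reward = prediction error
        pred += lr * (obs - pred)    # the model improves where it can
    return total / steps

# The predictable stream quickly stops being rewarding; the noise never
# does, so a curious agent keeps staring at the 'television'.
print(mean_curiosity_reward(stochastic=False))  # ~0.001
print(mean_curiosity_reward(stochastic=True))   # ~0.08, and it stays there
```

Rewarding the improvement of prediction error instead would vanish in both cases once nothing more can be learned, which is exactly the discrepancy discussed above.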
Even though these methods use expert knowledge, this performance gap exhibits their resilience to long-term rewards. Compared with intrinsic reward methods, which do not exceed a 10000 score on Montezuma's revenge and hardly achieve any score on Pitfall! BIB002 , it shows that intrinsic motivation is still far from solving the overall problem of exploration. Furthermore, we want to emphasize that the challenge is greater when the intrinsic reward itself is sparse BIB012 . In Montezuma's revenge, it is about avoiding using a key too quickly in order to be able to use it later. In everyday life, it can be about avoiding spending money too quickly. In fact, it looks like an exploration issue within the exploration process. Intrinsic reward can guide exploration only on the condition that the agent finds this intrinsic reward. explain that such a sparse intrinsic reward can result from a distant intrinsic reward coupled with catastrophic forgetting and action stochasticity. This challenge could be addressed with an approach using planning methods . Hierarchical skill learning could be a solution, transforming the long-term reward into a short-term reward with a multi-level skill hierarchy . This double intuition has already been developed by BIB015 but could be further deepened. Binding skill learning and exploration. For two reasons, we claim that skill learning can be an important source of improvement for the exploration issue. We have already investigated the direct interest of skill learning for exploration (§3.1) and found that it can reduce the noise of standard exploration methods, resulting in faster access to the sparse reward. In addition, we saw that skill learning makes credit assignment more effective and faster. This is extremely important since an intrinsic reward can be a fast-moving, non-stationary reward.
If the discount factor γ is high, such a reward function could propagate across states very slowly, since the state sequence between the state we want to value and the rewarded state is very long. As a result, the policy improves very slowly. This is why pseudo-count methods use a mixed Monte Carlo update BIB002 , which consists in using a soft interpolation between Monte Carlo and TD methods to update values. However, their method only partially solves the problem, at the cost of a higher variance. In contrast, if the fast-moving, non-stationary intrinsic reward changes an abstract policy option, it can propagate to every state much faster without any additional cost. To illustrate this, let us assume that options of length 20 are available and that the target state (with the highest intrinsic reward) is 1000 states away from the initial state. In a tabular setting, it would take at least 1000 updates with a γ of 0.998, whereas it would take only 50 updates to the option policy with a γ of 0.98. Building a practical state representation. There are several properties that a state representation should verify. As humans, we are aware of the distance between states; we can easily segment objects, perceive their position and abstract them, and understand object affordances (i.e. potential high-level actions made possible by the properties of an item) [Thill et al., 2013] . We are also aware of our spatial position in the world on several timescales. In addition, we easily integrate our hidden state, such as past actions or past observations, to make decisions. Our state representation is rich and enables us to exhibit goal-directed behaviors or object-directed exploration. Such abstractions are the foundation of our cognition, but they are still missing in intrinsic motivation approaches. This limitation is particularly salient throughout our survey.
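The update-count arithmetic of the option example above (1000 primitive updates with γ = 0.998 versus 50 option-level updates with γ = 0.98) can be checked with a minimal backward-sweep TD(0) simulation over a deterministic chain, a best-case tabular sketch where the chain MDP is an illustrative assumption:

```python
import numpy as np

def updates_to_propagate(n_steps, gamma):
    """Best-case TD(0) propagation on a deterministic chain with a single
    terminal reward: sweeping backward, each state needs one update before
    the value reaches the initial state, so the cost is n_steps updates."""
    V = np.zeros(n_steps + 1)
    updates = 0
    for s in reversed(range(n_steps)):
        r = 1.0 if s == n_steps - 1 else 0.0  # reward on the last transition
        V[s] = r + gamma * V[s + 1]
        updates += 1
    return updates, V[0]

u_flat, v_flat = updates_to_propagate(1000, 0.998)  # primitive actions
u_opt, v_opt = updates_to_propagate(50, 0.98)       # options of length 20
print(u_flat, round(v_flat, 3))  # 1000 updates, initial value 0.998^999 ~ 0.135
print(u_opt, round(v_opt, 3))    # 50 updates, initial value 0.98^49 ~ 0.372
```

Random-order updates would only make the flat case worse; the option hierarchy shortens both the propagation path and the effective horizon.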
We have already seen that building a good feature space is important for discovering goals, in order to compose with a reduced goal space BIB010 or to obtain object-oriented behaviours BIB007 . It is particularly highlighted in the work of and where access to the (x, y) coordinates strongly improves the quality of behaviors. It is also crucial in works related to knowledge acquisition in order to get a meaningful prediction error. For example, ICM BIB008 proposes an interesting state representation restricted to what can be controlled by the agent. However, it is not clear whether the module learns an insufficient subset of the features determining the action or whether it learns the whole set of features determined by the action. Moreover, as noted by , the agent does not keep features relating to its long-term (rather than immediate) control. EMI manages to construct an embedding space in which a simple linear forward model is adequate, but without a specific structure. There is a large literature on learning representations [Lesort et al., 2018] , yet there is currently little work that benefits from the recent advances in this area. While, on the other side, some work takes advantage of intrinsic motivation to learn representation spaces (see Section 4.3), we strongly believe that option policies or exploratory policies can take advantage of such representation spaces. For example, BIB007 takes advantage of a predefined object-centered representation to achieve good scores on benchmarks such as Montezuma's revenge. As another example, tries to learn a disentangled state space, whereas this is prior knowledge in CLIC . It seems that much of this work could benefit from each other. It follows that state representation and intrinsic motivation may be more intertwined than previously believed, raising new questions: what mechanisms underpin the relationship between these two domains? Given that it is a chicken-and-egg problem, which one is learned first? Decorrelating the goal learning process from the task.
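The core idea behind ICM's representation, keeping only the features that are predictive of the agent's actions, can be illustrated with a linear inverse model on a toy transition stream. The two-dimensional observation (one controllable dimension, one pure-noise dimension) and the linear least-squares inverse model are illustrative assumptions, not the actual ICM architecture:

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy rollout: obs = [controllable position, pure noise]. Action a moves
# the position by a; the noise dimension evolves on its own.
n = 5000
a = rng.choice([-1.0, 1.0], size=n)
noise = rng.normal(size=(n + 1,))
pos = np.concatenate([[0.0], np.cumsum(a)])
obs = np.stack([pos, noise], axis=1)
delta = obs[1:] - obs[:-1]  # transition features Δs

# Inverse model: linear least squares predicting the action from Δs.
# The fit puts all its weight on the controllable dimension, i.e. the
# noise dimension is discarded from the action-predictive representation.
w, *_ = np.linalg.lstsq(delta, a, rcond=None)
print(w)  # weight ~1 on the controllable dim, ~0 on the noise dim
```

This also hints at the limitation noted above: features that matter only for long-term (not immediate) control carry no weight in such an inverse model.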
The advantage of decorrelating the learning of objectives from the learning of the task is that it favors exploration and transfer learning. This is usually called bottom-up learning because skills are learned before the task. Typically, an agent may learn to walk before learning to reach an object; it can then reuse this walking behavior to fulfill other tasks. Even though this learning process has made significant progress, it remains difficult to learn tasks and skills simultaneously without suffering catastrophic forgetting BIB001 . Indeed, when the agent learns tasks sequentially, it forgets the first task while learning the next one. Some work already tackles the catastrophic forgetting problem, but it has not, to our knowledge, been evaluated with intrinsic motivation and on a large number of tasks. More broadly, these aspects relate to continual learning BIB016 , i.e. the agent's ability to train continuously throughout its lifespan.
A survey on intrinsic motivation in reinforcement learning <s> Review of tasks <s> Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at this https URL in order to facilitate experimental reproducibility and to encourage adoption by other researchers. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Review of tasks <s> Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as epsilon-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. 
VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards. <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> Review of tasks <s> In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch. Demo video and code available at this https URL <s> BIB003 </s> A survey on intrinsic motivation in reinforcement learning <s> Review of tasks <s> We explore methods for option discovery based on variational inference and make two algorithmic contributions. 
First: we highlight a tight connection between variational option discovery methods and variational autoencoders, and introduce Variational Autoencoding Learning of Options by Reinforcement (VALOR), a new method derived from the connection. In VALOR, the policy encodes contexts from a noise distribution into trajectories, and the decoder recovers the contexts from the complete trajectories. Second: we propose a curriculum learning approach where the number of contexts seen by the agent increases whenever the agent's performance is strong enough (as measured by the decoder) on the current set of contexts. We show that this simple trick stabilizes training for VALOR and prior variational option discovery methods, allowing a single agent to learn many more modes of behavior than it could with a fixed context distribution. Finally, we investigate other topics related to variational option discovery, including fundamental limitations of the general approach and the applicability of learned options to downstream tasks. <s> BIB004 </s> A survey on intrinsic motivation in reinforcement learning <s> Review of tasks <s> For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires. Furthermore, to provide the requisite level of generality, these skills must handle raw sensory input such as images. In this paper, we propose an algorithm that acquires such general-purpose skills by combining unsupervised representation learning and reinforcement learning of goal-conditioned policies. Since the particular goals that might be required at test-time are not known in advance, the agent performs a self-supervised "practice" phase where it imagines goals and attempts to achieve them. 
We learn a visual representation with three distinct purposes: sampling goals for self-supervised practice, providing a structured transformation of raw sensory inputs, and computing a reward signal for goal reaching. We also propose a retroactive goal relabeling scheme to further improve the sample-efficiency of our method. Our off-policy algorithm is efficient enough to learn policies that operate on raw image observations and goals for a real-world robotic system, and substantially outperforms prior techniques. <s> BIB005 </s> A survey on intrinsic motivation in reinforcement learning <s> Review of tasks <s> Rewards are sparse in the real world and most today's reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself - thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. Such bonus is summed up with the real task reward - making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is done based on how many environment steps it takes to reach the current observation from those in memory - which incorporates rich information about environment dynamics. This allows us to overcome the known"couch-potato"issues of prior work - when the agent finds a way to instantly gratify itself by exploiting actions which lead to hardly predictable consequences. We test our approach in visually rich 3D environments in ViZDoom, DMLab and MuJoCo. In navigational tasks from ViZDoom and DMLab, our agent outperforms the state-of-the-art curiosity method ICM. 
In MuJoCo, an ant equipped with our curiosity module learns locomotion out of the first-person-view curiosity only. <s> BIB006 </s> A survey on intrinsic motivation in reinforcement learning <s> Review of tasks <s> Autonomous multiple tasks learning is a fundamental capability to develop versatile artificial agents that can act in complex environments. In real-world scenarios, tasks may be interrelated (or "hierarchical") so that a robot has to first learn to achieve some of them to set the preconditions for learning other ones. Even though different strategies have been used in robotics to tackle the acquisition of interrelated tasks, in particular within the developmental robotics framework, autonomous learning in this kind of scenarios is still an open question. Building on previous research in the framework of intrinsically motivated open-ended learning, in this work we describe how this question can be addressed working on the level of task selection, in particular considering the multiple interrelated tasks scenario as an MDP where the system is trying to maximise its competence over all the tasks. <s> BIB007
We identified four fundamentally different types of tasks on which intrinsic motivation methods are tested. In this subsection we emphasize their particularities and the algorithms proposed in the literature to solve them. Locomotion. Locomotion tasks are mostly related to MuJoCo environments such as the ant or humanoid, where the goal is to move an agent BIB001 . Most related work considers exploration and skill acquisition methods. Exploration methods only solve easy locomotion tasks, e.g. Half-Cheetah, with a 20-dimensional observation space and a 6-dimensional action space BIB002 . On the other side, skill acquisition methods manage to learn to move forward (by crawling or walking) with harder morphologies, e.g. Ant, with a 125-dimensional observation space and an 8-dimensional action space BIB004 . Interestingly, a diversity heuristic without extrinsic reward is sufficient for the learning process. This suggests that a diversity heuristic could be enough to handle proprioceptive incoming data. However, currently, too many useless skills are learned BIB004 . Manipulation. Manipulation tasks can be about moving, pushing, or reaching objects with a movable robotic arm. Few exploration methods have been tested, and they only manage to touch and move some objects. Manipulation is particularly interesting for skill acquisition BIB005 , but this is not actually a major focus since it lacks object-oriented objectives (as argued in §6.1). It is a standard task for curriculum learning algorithms [Colas et al., 2019; BIB007 since, for example, an agent has to learn to reach an item before moving it; these methods, however, rely on a hand-made goal space. Navigation. Navigation tasks concern moving an agent in a maze. This is the most broadly tested kind of task and includes every kind of method we presented. It can consist of moving a MuJoCo ant or swimmer in order to pick up food or to reach a target area. In the same way, Atari games generally consist of moving an agent in a rich environment, but with a simpler discrete action space.
Similarly to manipulation tasks, navigation requires target-oriented behaviors and favors bringing forward skills as states rather than a diversity heuristic (despite a lot of progress in this direction made by ). Exploration methods are particularly efficient in discovering new areas and make sense here, but they are brute force and could be considerably improved, as discussed in section 6.1. First-person view navigation. First-person-view maze environments (Vizdoom, DMLab) are particularly challenging since the agent only receives a partial first-person visual view of its state and must infer its true state (e.g. its position). There is little work addressing these environments, mostly for exploration BIB003 BIB006 , but these methods manage to explore the environment efficiently BIB006 . An application of count-based methods is still missing that would show whether partial observability is a hindrance to the method. Nevertheless, standard RL methods could take advantage of breaking down the partial observability into a long-term one at the higher level of the hierarchy and a short-term one at the lower level. This could make the training of a recurrent neural network easier by shortening the gap between a notable event and the moment it must be retrieved from memory to get a reward. For example, in a 3D maze where the agent tries to reach an exit, a long-term memory could memorize the large areas the agent went into, whereas the short-term memory could focus on short, temporally coherent behaviors.
A survey on intrinsic motivation in reinforcement learning <s> Mutual information as a common tool <s> Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Mutual information as a common tool <s> Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as epsilon-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics. 
We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards. <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> Mutual information as a common tool <s> We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across observations. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use sequential density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary sequential density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into intrinsic rewards and obtain significantly improved exploration in a number of hard games, including the infamously difficult Montezuma's Revenge. <s> BIB003 </s> A survey on intrinsic motivation in reinforcement learning <s> Mutual information as a common tool <s> Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as $\epsilon$-greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. 
Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent's surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the $k$-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques. <s> BIB004 </s> A survey on intrinsic motivation in reinforcement learning <s> Mutual information as a common tool <s> In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. 
Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch. Demo video and code available at this https URL <s> BIB005 </s> A survey on intrinsic motivation in reinforcement learning <s> Mutual information as a common tool <s> We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning. In such hierarchical structures, a higher-level controller solves tasks by iteratively communicating goals which a lower-level policy is trained to reach. Accordingly, the choice of representation -- the mapping of observation space to goal space -- is crucial. To study this problem, we develop a notion of sub-optimality of a representation, defined in terms of expected reward of the optimal hierarchical policy using this representation. We derive expressions which bound the sub-optimality and show how these expressions can be translated to representation learning objectives which may be optimized in practice. Results on a number of difficult continuous-control tasks show that our approach to representation learning yields qualitatively better representations as well as quantitatively better hierarchical policies, compared to existing methods (see videos at this https URL). <s> BIB006 </s> A survey on intrinsic motivation in reinforcement learning <s> Mutual information as a common tool <s> We explore methods for option discovery based on variational inference and make two algorithmic contributions. 
First: we highlight a tight connection between variational option discovery methods and variational autoencoders, and introduce Variational Autoencoding Learning of Options by Reinforcement (VALOR), a new method derived from the connection. In VALOR, the policy encodes contexts from a noise distribution into trajectories, and the decoder recovers the contexts from the complete trajectories. Second: we propose a curriculum learning approach where the number of contexts seen by the agent increases whenever the agent's performance is strong enough (as measured by the decoder) on the current set of contexts. We show that this simple trick stabilizes training for VALOR and prior variational option discovery methods, allowing a single agent to learn many more modes of behavior than it could with a fixed context distribution. Finally, we investigate other topics related to variational option discovery, including fundamental limitations of the general approach and the applicability of learned options to downstream tasks. <s> BIB007 </s> A survey on intrinsic motivation in reinforcement learning <s> Mutual information as a common tool <s> In this work, we take a representation learning perspective on hierarchical reinforcement learning, where the problem of learning lower layers in a hierarchy is transformed into the problem of learning trajectory-level generative models. We show that we can learn continuous latent representations of trajectories, which are effective in solving temporally extended and multi-stage problems. Our proposed model, SeCTAR, draws inspiration from variational autoencoders, and learns latent representations of trajectories. A key component of this method is to learn both a latent-conditioned policy and a latent-conditioned model which are consistent with each other. Given the same latent, the policy generates a trajectory which should match the trajectory predicted by the model. 
This model provides a built-in prediction mechanism, by predicting the outcome of closed loop policy behavior. We propose a novel algorithm for performing hierarchical RL with this model, combining model-based planning in the learned latent space with an unsupervised exploration objective. We show that our model is effective at reasoning over long horizons with sparse rewards for several simulated tasks, outperforming standard reinforcement learning methods and prior methods for hierarchical reasoning, model-based planning, and exploration. <s> BIB008
A recurring theme appears throughout this whole study, whether it concerns knowledge acquisition or skill learning: mutual information seems to be central to expanding an agent's abilities. Direct use of mutual information. We have first seen that empowerment is entirely defined through mutual information (cf. §4.2). Similarly, a whole body of work in §5.1 is based on the mutual information between the goal and the path resulting from that goal. VIME BIB002 and AKL BIB004 maximize the information gain, i.e. the information contained in the next state about the environment model, I(s_{t+1}; Θ | s_t, a_t), where Θ are the parameters of the forward model. Lastly, EMI and CB make use of mutual information to compute the state representation. Although it is not a work on intrinsic motivation, Still and Precup (2012) suggest that the agent has to maximize the mutual information between its actions and the next states to improve its exploration policy. Functions equivalent to mutual information. Prediction error BIB006 Nachum et al., 2018; BIB005 is also related to mutual information [de Abril and Kanai, 2018], since it is very close to information gain methods. In the same way, these methods try to maximize the information a forward model contains about the environment, but they are limited by the model's inability to encode stochasticity. In addition, BIB006 explain that their method learns a state representation maximizing the mutual information between the state in question and the next states. Lastly, it has been shown that rewards which come from pseudo-counts BIB003 BIB001 are close to those derived from information gain. Finally, as noted by BIB007, the VAE objective is a specific case of the variational information bottleneck, which fully relies on two mutual information terms. It follows that most work using this type of autoencoder relies on the same mutual information tool (e.g. BIB008).
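To make the link between mutual information and control concrete, the sketch below estimates I(A; S') from an empirical joint count table of actions and next states: the estimate is high when each action reliably produces its own next state (high empowerment) and zero when the next state is independent of the action. The function name and the toy tables are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

def mutual_information(joint):
    """Empirical mutual information I(X; Y) in nats from a joint count table.

    `joint` is a 2-D array of co-occurrence counts, e.g. joint[a, s] counting
    how often action a was followed by next state s.
    """
    p = joint / joint.sum()            # joint distribution p(x, y)
    px = p.sum(axis=1, keepdims=True)  # marginal p(x)
    py = p.sum(axis=0, keepdims=True)  # marginal p(y)
    nz = p > 0                         # skip empty cells to avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

# Full control: each of 3 actions always leads to its own next state.
high = mutual_information(np.eye(3) * 10)   # equals log(3)
# No control: the next state is independent of the action.
low = mutual_information(np.ones((3, 3)))   # equals 0
```

The first table corresponds to a maximally empowering region (actions fully determine outcomes), the second to a region the agent cannot influence at all.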
A survey on intrinsic motivation in reinforcement learning <s> Intrinsic motivation as information compression <s> I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively simpler and more beautiful. Curiosity is the desire to create or discover more non-random, non-arbitrary, regular data that is novel and surprising not in the traditional sense of Boltzmann and Shannon but in the sense that it allows for compression progress because its regularity was not yet known. This drive maximizes interestingness, the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and (since 1990) artificial systems. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Intrinsic motivation as information compression <s> Non-player characters (NPCs) in games are traditionally hard-coded or dependent on pre-specified goals, and consequently struggle to behave sensibly in ever-changing and possibly unpredictable game worlds. To make them fit for new developments in procedural content generation, we introduce the principle of Coupled Empowerment Maximisation as an intrinsic motivation for game NPCs. We focus on the development of a general game companion, designed to support the player in achieving their goals. We evaluate our approach against three intuitive and abstract companion duties. We develop dedicated scenarios for each duty in a dungeon-crawler game testbed, and provide qualitative evidence that the emergent NPC behaviour fulfils these duties. We argue that this generic approach can speed up NPC AI development, improve automatic game evolution and introduce NPCs to full game-generation systems. 
<s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> Intrinsic motivation as information compression <s> In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch. Demo video and code available at this https URL <s> BIB003 </s> A survey on intrinsic motivation in reinforcement learning <s> Intrinsic motivation as information compression <s> Rewards are sparse in the real world and most today's reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself - thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. 
Such bonus is summed up with the real task reward - making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is done based on how many environment steps it takes to reach the current observation from those in memory - which incorporates rich information about environment dynamics. This allows us to overcome the known "couch-potato" issues of prior work - when the agent finds a way to instantly gratify itself by exploiting actions which lead to hardly predictable consequences. We test our approach in visually rich 3D environments in ViZDoom, DMLab and MuJoCo. In navigational tasks from ViZDoom and DMLab, our agent outperforms the state-of-the-art curiosity method ICM. In MuJoCo, an ant equipped with our curiosity module learns locomotion out of the first-person-view curiosity only. <s> BIB004 </s> A survey on intrinsic motivation in reinforcement learning <s> Intrinsic motivation as information compression <s> Reinforcement learning algorithms rely on carefully engineering environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent. Curiosity is a type of intrinsic reward function which uses prediction error as reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, and a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many game environments.
(b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. Game-play videos and code are at https://pathak22.github.io/large-scale-curiosity/ <s> BIB005 </s> A survey on intrinsic motivation in reinforcement learning <s> Intrinsic motivation as information compression <s> We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning. In such hierarchical structures, a higher-level controller solves tasks by iteratively communicating goals which a lower-level policy is trained to reach. Accordingly, the choice of representation -- the mapping of observation space to goal space -- is crucial. To study this problem, we develop a notion of sub-optimality of a representation, defined in terms of expected reward of the optimal hierarchical policy using this representation. We derive expressions which bound the sub-optimality and show how these expressions can be translated to representation learning objectives which may be optimized in practice. Results on a number of difficult continuous-control tasks show that our approach to representation learning yields qualitatively better representations as well as quantitatively better hierarchical policies, compared to existing methods (see videos at this https URL). <s> BIB006 </s> A survey on intrinsic motivation in reinforcement learning <s> Intrinsic motivation as information compression <s> Goal-conditioned policies are used in order to break down complex reinforcement learning (RL) problems by using subgoals, which can be defined either in state space or in a latent feature space. 
This can increase the efficiency of learning by using a curriculum, and also enables simultaneous learning and generalization across goals. A crucial requirement of goal-conditioned policies is to be able to determine whether the goal has been achieved. Having a notion of distance to a goal is thus a crucial component of this approach. However, it is not straightforward to come up with an appropriate distance, and in some tasks, the goal space may not even be known a priori. In this work we learn a distance-to-goal estimate which is computed in terms of the number of actions that would need to be carried out in a self-supervised approach. Our method solves complex tasks without prior domain knowledge in the online setting in three different scenarios in the context of goal-conditioned policies a) the goal space is the same as the state space b) the goal space is given but an appropriate distance is unknown and c) the state space is accessible, but only a subset of the state space represents desired goals, and this subset is known a priori. We also propose a goal-generation mechanism as a secondary contribution. <s> BIB007
Schmidhuber (2008) postulates that the organism is guided by the desire to compress the information it receives: the better the agent manages to compress the data received from the environment, the higher the intrinsic reward. Nevertheless, he stresses that what matters is the improvement in compression, not the degree of compression itself, otherwise an agent could decide to stay inactive in front of noise or a uniform darkness. As noticed by Schmidhuber (2007), a breakthrough in compression progress is called a discovery. Data compression is strongly linked to the observation of regularities in these very same data. For example, what we call a face is, in our environment, an ensemble appearing on a recurrent basis and composed of an oval shape containing two eyes, a nose and a mouth. Likewise, a state of the environment can be described with only some of its most pertinent features. Emphasizing this aspect makes this paradigm close to the minimum description length principle BIB002, which considers learning as finding the shortest description of data. In our case, it implies that intrinsic motivation results in a search for new regularities in the environment. It has been shown that methods based on information gain are directly linked to information compression progress BIB001. ECO BIB004 tries to encode the environment by storing states that are as diverse as possible, and predictive models BIB005 encode the environment dynamics in a parameterized model (often a neural network). Empowerment is similar: recall that it directs an agent towards areas in which it has control, i.e. in which states are determined by the agent's actions. It is thus possible to reformulate empowerment as the interest of an agent for areas where its actions are a compression of the next states.
Indeed, empowerment is maximal if every trajectory leads to its own states (always the same ones, in the same order), distinct from those of other trajectories, whereas it is minimal if all trajectories lead to the same state. Some work on skill abstraction explicitly tries to compress trajectories into a goal space. When the state space is used as the goal space, we saw that the challenge is to correctly compress that space into a usable one. This leads to a body of work that relies on the quality of the compression of the state space BIB007 BIB006 BIB003.
A survey on intrinsic motivation in reinforcement learning <s> Prior knowledge <s> Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Prior knowledge <s> Achieving efficient and scalable exploration in complex domains poses a major challenge in reinforcement learning. While Bayesian and PAC-MDP approaches to the exploration problem offer strong formal guarantees, they are often impractical in higher dimensions due to their reliance on enumerating the state-action space. Hence, exploration in complex domains is often performed with simple epsilon-greedy methods. In this paper, we consider the challenging Atari games domain, which requires processing raw pixel inputs and delayed rewards. 
We evaluate several more sophisticated exploration strategies, including Thompson sampling and Boltzman exploration, and propose a new exploration method based on assigning exploration bonuses from a concurrently learned model of the system dynamics. By parameterizing our learned model with a neural network, we are able to develop a scalable and efficient approach to exploration bonuses that can be applied to tasks with complex, high-dimensional state spaces. In the Atari domain, our method provides the most consistent improvement across a range of games that pose a major challenge for prior methods. In addition to raw game-scores, we also develop an AUC-100 metric for the Atari Learning domain to evaluate the impact of exploration on this benchmark. <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> Prior knowledge <s> Robot learning is critically enabled by the availability of appropriate state representations. We propose a robotics-specific approach to learning such state representations. As robots accomplish tasks by interacting with the physical world, we can facilitate representation learning by considering the structure imposed by physics; this structure is reflected in the changes that occur in the world and in the way a robot can effect them. By exploiting this structure in learning, robots can obtain state representations consistent with the aspects of physics relevant to the learning task. We name this prior knowledge about the structure of interactions with the physical world robotic priors. We identify five robotic priors and explain how they can be used to learn pertinent state representations. We demonstrate the effectiveness of this approach in simulated and real robotic experiments with distracting moving objects. We show that our method extracts task-relevant state representations from high-dimensional observations, even in the presence of task-irrelevant distractions. 
We also show that the state representations learned by our method greatly improve generalization in reinforcement learning. <s> BIB003 </s> A survey on intrinsic motivation in reinforcement learning <s> Prior knowledge <s> We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across observations. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use sequential density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary sequential density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into intrinsic rewards and obtain significantly improved exploration in a number of hard games, including the infamously difficult Montezuma's Revenge. <s> BIB004 </s> A survey on intrinsic motivation in reinforcement learning <s> Prior knowledge <s> In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. 
Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch. Demo video and code available at this https URL <s> BIB005 </s> A survey on intrinsic motivation in reinforcement learning <s> Prior knowledge <s> We propose position-velocity encoders (PVEs) which learn---without supervision---to encode images to positions and velocities of task-relevant objects. PVEs encode a single image into a low-dimensional position state and compute the velocity state from finite differences in position. In contrast to autoencoders, position-velocity encoders are not trained by image reconstruction, but by making the position-velocity representation consistent with priors about interacting with the physical world. We applied PVEs to several simulated control tasks from pixels and achieved promising preliminary results. <s> BIB006 </s> A survey on intrinsic motivation in reinforcement learning <s> Prior knowledge <s> We explore methods for option discovery based on variational inference and make two algorithmic contributions. First: we highlight a tight connection between variational option discovery methods and variational autoencoders, and introduce Variational Autoencoding Learning of Options by Reinforcement (VALOR), a new method derived from the connection. In VALOR, the policy encodes contexts from a noise distribution into trajectories, and the decoder recovers the contexts from the complete trajectories. 
Second: we propose a curriculum learning approach where the number of contexts seen by the agent increases whenever the agent's performance is strong enough (as measured by the decoder) on the current set of contexts. We show that this simple trick stabilizes training for VALOR and prior variational option discovery methods, allowing a single agent to learn many more modes of behavior than it could with a fixed context distribution. Finally, we investigate other topics related to variational option discovery, including fundamental limitations of the general approach and the applicability of learned options to downstream tasks. <s> BIB007 </s> A survey on intrinsic motivation in reinforcement learning <s> Prior knowledge <s> Reinforcement learning algorithms rely on carefully engineering environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent. Curiosity is a type of intrinsic reward function which uses prediction error as reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, and a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many game environments. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. 
Game-play videos and code are at https://pathak22.github.io/large-scale-curiosity/ <s> BIB008 </s> A survey on intrinsic motivation in reinforcement learning <s> Prior knowledge <s> Rewards are sparse in the real world and most today's reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself - thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. Such bonus is summed up with the real task reward - making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is done based on how many environment steps it takes to reach the current observation from those in memory - which incorporates rich information about environment dynamics. This allows us to overcome the known "couch-potato" issues of prior work - when the agent finds a way to instantly gratify itself by exploiting actions which lead to hardly predictable consequences. We test our approach in visually rich 3D environments in ViZDoom, DMLab and MuJoCo. In navigational tasks from ViZDoom and DMLab, our agent outperforms the state-of-the-art curiosity method ICM. In MuJoCo, an ant equipped with our curiosity module learns locomotion out of the first-person-view curiosity only. <s> BIB009
Globally, the models investigated often share a common structure composed of two modules: 1. The first module computes the intrinsic reward through an evaluation function relating the actions and states covered by the agent to another source of data. The actions and states covered by the agent can be its last trajectory BIB007, its last action BIB002 BIB005 BIB008, the number of times each state has been visited BIB004 BIB001, or the last trajectories BIB009. The other source of data can be a goal BIB007, the next states BIB005 [Mohamed and Rezende, 2015], etc. This evaluation function captures a causal relation and is often implemented with neural networks in order to generalize across large state spaces. 2. The second module is a policy maximizing the intrinsic reward coming from the evaluation function. This study of the causality between data is possible because these works use prior knowledge about the structure of the data, i.e. the structure of the world. We have identified several types of prior knowledge: • the environment is not entirely stochastic; • the environment is fully deterministic (see §6.1); • an observation is composed of several independent features (state representation); • actions can act as a metric in the state space (state representation); • there is a hierarchical structure among the available tasks or accessible states (curriculum learning). In fact, there is a strong analogy here with work on state representation [Lesort et al., 2018], which often incorporates reasonable assumptions to build a usable representation (e.g. BIB003, BIB006). Using this knowledge is not necessarily a drawback, since it concerns the structure of the world itself. To briefly summarize, a top-level view shows that one module takes advantage of very global, task-independent knowledge about the structure of the world, through tools such as information theory, to compress incoming data.
A measure of this compression then serves as an intrinsic reward that guides the reinforcement learning algorithm.
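The two-module pattern described above can be sketched as follows: a learned forward model plays the role of the evaluation function, and its prediction error is the intrinsic reward handed to the policy module. The linear model, the SGD update and all names are simplifying assumptions for illustration, not the implementation of any cited method.

```python
import numpy as np

class PredictionErrorBonus:
    """Module 1 of the two-module pattern: a forward model scores
    transitions, and its prediction error becomes the intrinsic reward
    that a policy (module 2) would maximize alongside the task reward."""

    def __init__(self, state_dim, action_dim, lr=0.1):
        # Linear forward model s' ≈ W [s; a] (a deliberate simplification).
        self.W = np.zeros((state_dim, state_dim + action_dim))
        self.lr = lr

    def reward(self, s, a, s_next):
        x = np.concatenate([s, a])
        err = s_next - self.W @ x
        # Intrinsic reward: squared prediction error, high where the
        # dynamics model is still poor.
        r_int = float(err @ err)
        # One SGD step, so the bonus decays as the transition becomes familiar.
        self.W += self.lr * np.outer(err, x)
        return r_int

bonus = PredictionErrorBonus(state_dim=2, action_dim=1)
s, a, s_next = np.array([1.0, 0.0]), np.array([1.0]), np.array([0.0, 1.0])
# Revisiting the same deterministic transition drives the bonus to zero.
rewards = [bonus.reward(s, a, s_next) for _ in range(50)]
```

The decay of `rewards` over repeated visits is exactly why such bonuses push the agent toward transitions its model does not yet capture, and why they fail on stochastic transitions the model can never fit.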
A survey on intrinsic motivation in reinforcement learning <s> Free-energy principle <s> Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. The novel algorithmic framework POWERPLAY (2011) continually searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Wow-effects are achieved by continually making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. POWERPLAY's search orders candidate pairs of tasks and solver modifications by their conditional computational (time & space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. The computational costs of validating new tasks need not grow with task repertoire size. POWERPLAY's ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Goedel's sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. POWERPLAY may be viewed as a greedy but practical implementation of basic principles of creativity. A first experimental analysis can be found in separate papers [53,54]. 
<s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Free-energy principle <s> We present a model for the autonomous and simultaneous learning of smooth pursuit and vergence eye movements based on principles of efficient coding. The model accounts for the joint development of visual encoding and eye movement control. Sparse coding models encode the incoming data and capture the statistics of the input in spatio-temporal basis functions while a reinforcement learner generates eye movements to optimise the efficiency of the encoding. We consider the embodiment of the approach in the iCub simulator and demonstrate the emergence of a self-calibrating smooth pursuit and vergence behaviour. Unlike standard computer vision approaches, it is driven by the interaction between sensory encoding and eye movements. Interestingly, our analysis shows that the emerging representations learned by this model are in line with results on velocity and disparity tuning properties of neurons in visual cortex. <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> Free-energy principle <s> In this work, we take a representation learning perspective on hierarchical reinforcement learning, where the problem of learning lower layers in a hierarchy is transformed into the problem of learning trajectory-level generative models. We show that we can learn continuous latent representations of trajectories, which are effective in solving temporally extended and multi-stage problems. Our proposed model, SeCTAR, draws inspiration from variational autoencoders, and learns latent representations of trajectories. A key component of this method is to learn both a latent-conditioned policy and a latent-conditioned model which are consistent with each other. Given the same latent, the policy generates a trajectory which should match the trajectory predicted by the model. 
This model provides a built-in prediction mechanism, by predicting the outcome of closed loop policy behavior. We propose a novel algorithm for performing hierarchical RL with this model, combining model-based planning in the learned latent space with an unsupervised exploration objective. We show that our model is effective at reasoning over long horizons with sparse rewards for several simulated tasks, outperforming standard reinforcement learning methods and prior methods for hierarchical reasoning, model-based planning, and exploration. <s> BIB003 </s> A survey on intrinsic motivation in reinforcement learning <s> Free-energy principle <s> Extensive studies have shown that many animals’ capability of forming spatial representations for self-localization, path planning, and navigation relies on the functionalities of place and head-direction (HD) cells in the hippocampus. Although there are numerous hippocampal modeling approaches, only a few span the wide functionalities ranging from processing raw sensory signals to planning and action generation. This paper presents a vision-based navigation system that involves generating place and HD cells through learning from visual images, building topological maps based on learned cell representations and performing navigation using hierarchical reinforcement learning. First, place and HD cells are trained from sequences of visual stimuli in an unsupervised learning fashion. A modified Slow Feature Analysis (SFA) algorithm is proposed to learn different cell types in an intentional way by restricting their learning to separate phases of the spatial exploration. Then, to extract the encoded metric information from these unsupervised learning representations, a self-organized learning algorithm is adopted to learn over the emerged cell activities and to generate topological maps that reveal the topology of the environment and information about a robot’s head direction, respectively. 
This enables the robot to perform self-localization and orientation detection based on the generated maps. Finally, goal-directed navigation is performed using reinforcement learning in continuous state spaces which are represented by the population activities of place cells. In particular, considering that the topological map provides a natural hierarchical representation of the environment, hierarchical reinforcement learning (HRL) is used to exploit this hierarchy to accelerate learning. The HRL works on different spatial scales, where a high-level policy learns to select subgoals and a low-level policy learns over primitive actions to specialize on the selected subgoals. Experimental results demonstrate that our system is able to navigate a robot to the desired position effectively, and the HRL shows a much better learning performance than the standard RL in solving our navigation tasks. <s> BIB004
Unlike previous methods, the free-energy principle BIB003 posits that a common principle governs both the learning of a predictive model and the choice of actions: the agent tries to reduce its surprise. Actions are thus chosen so as to avoid prediction errors. Typically, this can explain some social behaviors in infants, such as imitative behaviors [Nagai, 2019; Triesch, 2013]. A similar idea is exploited under the name of active efficient coding BIB004: the agent acts in order to obtain compressed sensory experiences. This framework has proven effective for modeling eye movements such as vergence eye movements and stereo disparity perception, or smooth pursuit eye behaviors BIB002. A side effect is that an agent staying motionless in the dark would minimize its prediction error; a priori, it is difficult to see how this could be made compatible with current methods. In fact, what lies behind most of the works we studied in part 4.1 on exploration is an adversarial perspective: a module learns to decrease the evaluation function while a reinforcement algorithm pushes the agent towards difficult areas that challenge this module. The picture is more ambivalent in §5.1 on skill learning: goals are learned by maximizing the information conveyed by the trajectory about the goal, yet it is precisely choosing goals uniformly, against this principle, that enables learning (see section 5.1). On the other hand, the learning module and the reinforced agent maximize the same objective, namely the probability of being in the chosen option given the states covered by the intra-option policy. BIB001 differentiates two types of ambiguity: the first is uncertainty about the hidden state of the environment, the second is uncertainty about the model parameters. In other words, an agent can be certain about the uncertainty of the world. An agent should try to disambiguate this hidden state via active inference, i.e.
seek out the observations that matter for deciding what to do. For example, an agent should check whether there is a hole in front of it when it knows there is a chance there might be one. In contrast, active learning pushes the agent towards regions where it can gain information about the world; for example, the agent will be incited to push a button when it does not know what this button does. To our knowledge, reinforcement learning has not yet been applied to such a setting.
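The adversarial loop described above can be illustrated with a toy sketch (all names hypothetical): a tiny linear forward model is trained to reduce its prediction error, while that same error serves as the curiosity reward, so a repeatedly visited transition becomes progressively less rewarding:

```python
import numpy as np

class ForwardModel:
    """Toy linear forward model; its prediction error doubles as the
    intrinsic (curiosity) reward of the reinforcement learner."""

    def __init__(self, state_dim: int, action_dim: int, lr: float = 0.1):
        self.W = np.zeros((state_dim, state_dim + action_dim))
        self.lr = lr

    def predict(self, state, action):
        return self.W @ np.concatenate([state, action])

    def intrinsic_reward(self, state, action, next_state) -> float:
        # Reward side of the adversarial game: hard-to-predict
        # transitions yield a large exploration bonus.
        return float(np.sum((self.predict(state, action) - next_state) ** 2))

    def update(self, state, action, next_state):
        # Model side of the adversarial game: gradient step that
        # decreases the very error the agent is rewarded for finding.
        x = np.concatenate([state, action])
        err = self.predict(state, action) - next_state
        self.W -= self.lr * np.outer(err, x)

model = ForwardModel(state_dim=2, action_dim=1)
s, a, s_next = np.array([1.0, 0.0]), np.array([0.5]), np.array([0.9, 0.1])

rewards = []
for _ in range(50):   # revisiting the same transition makes it "boring"
    rewards.append(model.intrinsic_reward(s, a, s_next))
    model.update(s, a, s_next)
```

The reward sequence decays towards zero on this fixed transition, which is exactly the dynamic that pushes the agent towards regions its model has not yet mastered.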
A survey on intrinsic motivation in reinforcement learning <s> Towards developmental learning <s> Infants' own activities create and actively select their learning experiences. Here we review recent models of embodied information seeking and curiosity-driven learning and show that these mechanisms have deep implications for development and evolution. We discuss how these mechanisms yield self-organized epigenesis with emergent ordered behavioral and cognitive developmental stages. We describe a robotic experiment that explored the hypothesis that progress in learning, in and for itself, generates intrinsic rewards: The robot learners probabilistically selected experiences according to their potential for reducing uncertainty. In these experiments, curiosity-driven learning led the robot learner to successively discover object affordances and vocal interaction with its peers. We explain how a learning curriculum adapted to the current constraints of the learning system automatically formed, constraining learning and shaping the developmental trajectory. The observed trajectories in the robot experiment share many properties with those in infant development, including a mixture of regularities and diversities in the developmental patterns. Finally, we argue that such emergent developmental structures can guide and constrain evolution, in particular with regard to the origins of language. <s> BIB001 </s> A survey on intrinsic motivation in reinforcement learning <s> Towards developmental learning <s> Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. 
Consequently, lifelong learning capabilities are crucial for autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration. <s> BIB002 </s> A survey on intrinsic motivation in reinforcement learning <s> Towards developmental learning <s> Extensive studies have shown that many animals’ capability of forming spatial representations for self-localization, path planning, and navigation relies on the functionalities of place and head-direction (HD) cells in the hippocampus. Although there are numerous hippocampal modeling approaches, only a few span the wide functionalities ranging from processing raw sensory signals to planning and action generation. This paper presents a vision-based navigation system that involves generating place and HD cells through learning from visual images, building topological maps based on learned cell representations and performing navigation using hierarchical reinforcement learning. 
First, place and HD cells are trained from sequences of visual stimuli in an unsupervised learning fashion. A modified Slow Feature Analysis (SFA) algorithm is proposed to learn different cell types in an intentional way by restricting their learning to separate phases of the spatial exploration. Then, to extract the encoded metric information from these unsupervised learning representations, a self-organized learning algorithm is adopted to learn over the emerged cell activities and to generate topological maps that reveal the topology of the environment and information about a robot’s head direction, respectively. This enables the robot to perform self-localization and orientation detection based on the generated maps. Finally, goal-directed navigation is performed using reinforcement learning in continuous state spaces which are represented by the population activities of place cells. In particular, considering that the topological map provides a natural hierarchical representation of the environment, hierarchical reinforcement learning (HRL) is used to exploit this hierarchy to accelerate learning. The HRL works on different spatial scales, where a high-level policy learns to select subgoals and a low-level policy learns over primitive actions to specialize on the selected subgoals. Experimental results demonstrate that our system is able to navigate a robot to the desired position effectively, and the HRL shows a much better learning performance than the standard RL in solving our navigation tasks. <s> BIB003
We have seen in sections 4 and 5 how intrinsic motivation makes it possible to overcome multiple issues. Until now, we have mostly analyzed reinforcement learning problems individually. However, we would like a more general guideline making our agent more intelligent and more efficient at solving the tasks presented to it. As noticed in Guerin (2011), a safe path towards building intelligence is to follow human development; this is what we call a developmental approach. A developmental architecture is based on the agent's embodiment, which postulates that an agent must be grounded in its environment through sensorimotor capacities BIB003 . The model we described in §3.2 is in line with this principle. According to , everything is grounded in primitive sensorimotor patterns of activation. This "everything" refers to the structure of the world and the agent's affordances; this is exactly what our first module ( §7.3) strives to discover by compressing the data it receives ( §7.2). In fact, we can notice that all the challenges of DRL tackled by intrinsic motivation are the ones addressed by developmental learning. More precisely, developmental learning refers to the ability of an agent to develop itself in an open-ended manner BIB001 ; it especially relates to the autonomous learning of increasingly complex and abstract skills by interacting with the environment . There are several key components in such a developmental process: the agent has to form concept representations and abstract reusable skills , use them as a basis for new skills [Prince et al., 2005] , explore the environment to find new interesting skills, and autonomously self-generate goals in accordance with its own level and morphology. All these key components of a developmental process find a match with the RL issues we reviewed, which intrinsic motivation manages to solve, at least partially. We will now exhibit how a developmental architecture could emerge from this work.
Figure 2 exhibits how different works mixing DRL and intrinsic motivation could be integrated into a developmental architecture. We now detail the five intertwined components. The core of the potential developmental architecture could be based on skill abstraction (section 5.1), since it encourages the agent to hierarchically build skills and represent them from scratch. It provides a goal space, which can either come from the state space or have a subjective meaning, and an intra-option policy through an intrinsic reward function. It is particularly complementary with the work on curriculum learning (section 5.2), which can accelerate the learning process but has so far mostly relied on a hand-defined goal space with only a few different tasks. As a result, integrating both could enable the accelerated autonomous creation of skills in an open-ended way. Ideally, both methods should be integrated in a continual learning framework BIB002 (see also section 6.1). Obtaining new interesting skills is not obvious, depending on the task environment (see §6.2). That is why some works mix exploration and skill embedding to improve the quality of skills. We think that this idea should be explored further. Typically, one could use exploration methods to find new goals, such as moving an object or reaching an area, as illustrated by navigation and manipulation tasks (see §6.2). Last, state representation is a critical component for all methods, since it is primordial both to explore an abstract space and to obtain abstract goal-oriented behaviors. For example, to obtain object-oriented behaviors (e.g., moving toward an object), the agent must have a notion of objects in one way or another. It can make exploration and skill acquisition much easier and more meaningful ( §6.1). Piaget, in his theory of cognitive development, argues that humans progress through four developmental stages. The first is the sensorimotor stage, which lasts from birth until about the second year.
Guerin (2011) points out that the child learns to use knowledge about the world to modify his skills. As an example, once a child has understood spatial movements, he can take advantage of this knowledge to shift an object with a stick. This kind of adaptation is mainly unused in current works: for instance, some exploration methods learn a large predictive model without ever using this accumulated knowledge. This emphasizes the current under-exploitation of exploration methods and the lack of guidance from knowledge when the agent chooses skills and trains on them. A detailed study of the elements of such knowledge (intuitive physics, causality, ...) can be found in .
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> Copyright © 2007 by D. Jung and P. Tsiotras. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> Unmanned aerial vehicles (UAVs) are increasingly used in military and scientific research. Some miniaturized UAVs rely entirely on the Global Positioning System (GPS) for navigation. GPS is vulnerable to accidental or deliberate interference that can cause it to fail. It is not unusual, even in a benign environment, for a GPS outage to occur for periods of seconds to minutes. For UAVs relying solely on GPS for navigation such an event can be catastrophic. This paper proposes an extended Kalman filter approach to estimate the location of a UAV when its GPS connection is lost, using inter-UAV distance measurements. The results from a recent trial conducted by DSTO in Australia with three UAVs are presented. It is shown that the location of a manoeuvering UAV that has lost the GPS signal can be determined to an accuracy of within 40m of its true location simply by measuring the range to two other UAVs at known location, where the range measurement error has a zero mean and a standard deviation of 10m. <s> BIB002 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> This paper presents a method for control of formations of Unmanned Aerial Vehicles (UAVs) in urban environments with several obstacles. Therefore the trajectories for each UAV are planned using mixed integer quadratic programming (MIQP) to describe a minimization problem. The result of this minimization problem then characterizes a collision free trajectory for each UAV using the commanded formations to fulfil the missions. 
The description of the UAVs, the inter UAV collision avoidance, the collision avoidance with obstacles as well as the description of formations will be shown in detail together with some simulation results in this paper. In addition the introduction explains the fields of interest in such formations of UAVs and what kind of advantage they can bring in comparison to today's solutions. The novelty in the approach in this paper is the description of formations of UAVs used in combination with MIQP to change formations, to add additional UAVs into an existing formation and to split formations, simply by changing some parameters in the description of the formation. <s> BIB003 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> This paper introduces the usage and development of mobile ad hoc UAV (unmanned aerial vehicles) network firstly, then, brings forward some problems of ad hoc UAV network presently, and gives some related research topics about mobile ad hoc UAV fleet network, especially routing protocol and date integrity technologies. Finally, give a conclusion about our preliminary studies and this paper. <s> BIB004 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> We address the problem of multi-UAV-based surveillance in complex urban environments with occlusions. The problem consists of controlling the flight of UAVs with on-board cameras so that the coverage and recency of the information about the designated area is maximized. In contrast to the existing work, sensing constraints due to occlusions and UAV motion constraints are modeled realistically and taken into account. We propose a novel \emph{occlusion-aware} surveillance algorithm based on a decomposition of the surveillance problem into a variant of the 3D art gallery problem and an instance of traveling salesman problem for Dubins vehicles. 
The algorithm is evaluated on the high-fidelity \textsc{AgentFly} UAV simulation testbed which accurately models all constraints and effects involved. The results confirm the importance of occlusion-aware flight path planning, in particular in the case of narrow-street areas and low UAV flight altitudes. <s> BIB005 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> This paper addresses differential global positioning system (DGPS)/inertial measurement unit (IMU) integration-based geolocation system developed for airborne remote sensing cameras. First, we provide a brief review on sensor calibration, alignment and sensor fusion as background material of this research. After presenting those background material, as a main part of this paper we present a geolocation algorithm designed for an airborne imaging system. The geolocation system developed is tested through actual airborne experiments. For the verification of the geolocation system developed, we compare initial stationary states of the airplane before-taking off with states after-landing. From the actual test results, we find that it is critical to do an accurate time synchronization between IMU, DGPS, and airborne images, and to compensate for the data delay occurred during the network transfer. <s> BIB006 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> The design of an experimental AGPS-based (Assisted Global Positioning System) elderly tracking system is described. The system includes: a wearable AGPS terminal with HSPA two-way communication capability and designed for 10 days of continuous battery operation, a GPS assistance data server with reference GPS stations, location database and server, application server, and web server and client. Assistance data is retrieved by the wearable AGPS terminal using the SUPL protocol (Secured User Plane Location). 
This paper describes the design of each component based on key considerations such as accuracy, availability, battery life-time, and user behavior. <s> BIB007 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> Forest fires are a challenging problem for many countries. They often cause economical lost and ecological damage, and they can sometimes even cost human lives. Finding hot spots immediately after a fire is an important part of fighting forest fires. The main objective is to obtain a temperature map of the burned area, to locate the most critical embers. This information can help firefighter managers make the correct decisions about ground crew movements. The pervasive application described in this article lets firefighters obtain images of hot spots directly from an unmanned aircraft and receive commands from their manager through a communication network. Every firefighter holds a personal electronic device (PED), which includes a touch screen, Wi-Fi connectivity, a GPS receiver, and temperature sensors. Because terrain conditions such as abrupt ravines, rocks, and dense vegetation can introduce obstacles to connectivity, a balloon with a Wi-Fi device is tethered to every firefighter's truck to improve communication. In addition, a fixed-wing unmanned aircraft augments the number of communication layers to three. This article studies the quality of this three-layered network in maintaining the application's bandwidth requirements. <s> BIB008 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> In this paper we consider a collection of mobile single-antenna ground nodes communicating with a multi-antenna unmanned aerial vehicle (UAV) on a multiple-access ground-to-air wireless communications link. The UAV uses beamforming to mitigate the inter-user interference and achieve spatial division multiple access (SDMA). 
In addition, the UAV dynamically adjusts its heading in order to maximize a lower bound on either the ergodic sum rate of the uplink channel or the minimum ergodic rate of the worst-case user, using a Kalman filter to track the positions of the mobile ground nodes. We present simulation results that demonstrate the benefits of adapting the UAV heading in order to optimize the uplink communications performance. <s> BIB009 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> Abstract In this paper we investigate new schemes to improve the operability of heterogeneous Unmanned Aircraft Systems (UAS) teams through the exploitation of inter-vehicular communications. Releasing ground links from unnecessary data exchanges saves resources and alleviates the inherent scalability problem resulting from the increase in the number of UAS to be controlled simultaneously. An architecture based on an aerial Mobile Ad-hoc Network (MANET) is proposed for the management of the data exchange among all the vehicles in the team. A performance evaluation of the two most relevant MANET approaches for path discovery (namely, reactive and proactive) has been carried out by means of simulation of two well-known routing protocols: Ad-hoc On-demand Distance Vector (AODV) and Destination Sequenced Distance Vector (DSDV). Several network configurations are generated to emulate different possible contingencies that might occur in real UAS team operations. Network topology evolution, vehicle flight dynamics and data traffic patterns are considered as input parameters to the simulation model. The analysis of the system behavior for each possible network configuration is used to evaluate the appropriateness of both approaches in different mission scenarios. Finally, an assessment of the simulation results is presented along with a discussion about further research challenges. 
<s> BIB010 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> This paper proposes a method that uses an aircraft with a single-antenna GPS receiver and Pitot tube to estimate wind speed and direction and to calibrate the airspeed. This sensor combination alone does not determine the true attitude of the aircraft, so the wind parameters cannot be obtained directly from the measurements. However, if the aircraft flies at different headings, such as in banking turns or circle maneuvers, the wind magnitude and direction can be estimated from the geometrical relation between the wind and the measurements. An extended Kalman filter (EKF) is applied to estimate wind parameters. The EKF can also estimate the scaling factor used to convert dynamic pressure to airspeed. This is useful for the operation of small unmanned aerial vehicles (UAVs) because of difficulty in determining the airspeed scaling factor of a low-cost UAV. Simulations are performed for a constant 2-D wind. To test the effectiveness of the proposed method, flight tests of a small UAV are conducted. Simulations and flight test results show that the proposed method is effective. <s> BIB011 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> To provide and improved remote sensing a system based on an autonomous UAV was developed. The system was based on an easily transportable helicopter platform weighing less than 14 kg. Equipped with a multi-spectral camera and autonomous system, the UAV system was capable of acquiring multi-spectral images at the desired locations and times. An extended Kalman filter (EKF) based UAV navigation system was designed and implemented using sensor fusion techniques. A ground station was designed to be the interface between a human operator and the UAV to carry out mission planning, flight command activation, and real-time flight monitoring. 
Based on the navigation data, and the waypoints generated by the ground station, the UAV could be automatically navigated to the desired waypoints and hover around each waypoint to collect field image data. An experiment using the UAV system to monitor turf grass glyphosate application demonstrated the system, which indicated the UAV system provides a flexible and reliable method of sensing agricultural field with high spatial and temporal resolution of image data. <s> BIB012 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> A new approach to route planning for joint search and track missions by coordinated unmanned aerial vehicles (UAVs) is presented. The cornerstone is a novel objective function that integrates naturally and coherently the conflicting objectives of target detection, target tracking, and vehicle survivability into a single scalar index for path optimization. This objective function is the value of information gained by the mission on average in terms of a summation, where the number of terms reflects the number of targets detected while how large each term is reflects how well each detected target is tracked. The UAV following the path that maximizes this objective function is expected to gain the most valuable information by detecting the most important targets and tracking them during the most critical times. Although many optimization algorithms exist, we use a modified particle swarm optimization algorithm along with our proposed objective function to determine which trajectory is the best on the average at detecting and tracking targets. For simplicity, perfect communication with centralized fusion is assumed and the problems of false alarm, data association, and model mismatch are not considered. For analysis, we provide several simplified examples along with a more realistic simulation. 
Simulation results show that by adjusting the parameters of the objective function, solutions can be optimized according to the desired tradeoff between the conflicting objectives of detecting new targets and tracking previously detected targets. Our approach can also be used to update plans in real time by incorporating the information obtained up to the time (and then reusing our approach). <s> BIB013 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> With the advances in computation, sensor, communication and networking technologies, utilization of Unmanned Aerial Vehicles (UAVs) for military and civilian areas has become extremely popular for the last two decades. Since small UAVs are relatively cheap, the focus is changing, and usage of several small UAVs is preferred rather than one large UAV. This change in orientation is dramatic, and it is resulting to develop new networking technologies between UAVs, which can constitute swarm UAV teams for executing specific tasks with different levels of intra and inter vehicle communication especially for coordination and control of the system. Setting up a UAV network not only extends operational scope and range but also enables quick and reliable response time. Because UAVs are highly mobile nodes for networking, setting up an ad-hoc network is a challenging issue, and this networking has some requirements, which differ from traditional networks, mobile ad-hoc networks (MANETs) and vehicular ad-hoc networks (VANETs) in terms of connectivity, routing process, services, applications, etc. In this paper, it is aimed to point out the challenges in the usage of UAVs as mobile nodes in an ad-hoc network and to depict open research issues with analyzing the opportunities and future works. <s> BIB014 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. 
INTRODUCTION <s> The most common approach for border patrol operations is the use of human personnel and manned ground vehicles, which is expensive, at times inefficient and sometimes even hazardous to people involved. A better approach would be using Unmanned Aerial Vehicles (UAVs) in combination with such ground sensors. This would help improve the overall effectiveness of the surveillance system as a UAV could first scan the alert area before sending in personnel and vehicles, if deemed necessary. We propose border surveillance using multiple Unmanned Aerial Vehicles (UAVs) in combination with alert stations consisting of Unattended Ground Sensors (UGSs) along the border line/fence. Upon detecting an event, an alert would be triggered by any UGS. We simulate this process by reading probability data for different timestamps from a text file. And, based on utility values of each station, two UAVs decide on which alert station to service. <s> BIB015 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> Drone-based delivery of goods could become a reality in the near future, as witnessed by the increasing successful experiences in both research and commercial fields. In this paper, a prototype system exploiting a do it yourself quad copter drone for delivering products is proposed. On the one hand, the hardware choices made in order to limit risks arising from autonomous delivery are presented. On the other hand, a framework for orders placement and shipment is shown. The advantages of a system like the one described in this paper are mainly related to an increased delivery speed, especially in urban contexts with traffic, to the possibility to make deliveries in areas usually difficult to be reached, and to the drone's ability to autonomously carry out consignments. 
A practical use case, in which the proposed system is used for delivering drugs (an application in which the need to quickly receive the good might be particularly important) is shown. Nevertheless, the proposed prototype could be employed in other contexts, such as take-away deliveries, product shipments, registered mail consignments, etc. <s> BIB016 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> Recently, unmanned aerial vehicles (UAVs), or drones, have attracted a lot of attention, since they represent a new potential market. Along with the maturity of the technology and relevant regulations, a worldwide deployment of these UAVs is expected. Thanks to the high mobility of drones, they can be used to provide a lot of applications, such as service delivery, pollution mitigation, farming, and in the rescue operations. Due to its ubiquitous usability, the UAV will play an important role in the Internet of Things (IoT) vision, and it may become the main key enabler of this vision. While these UAVs would be deployed for specific objectives (e.g., service delivery), they can be, at the same time, used to offer new IoT value-added services when they are equipped with suitable and remotely controllable machine type communications (MTCs) devices (i.e., sensors, cameras, and actuators). However, deploying UAVs for the envisioned purposes cannot be done before overcoming the relevant challenging issues. These challenges comprise not only technical issues, such as physical collision, but also regulation issues as this nascent technology could be associated with problems like breaking the privacy of people or even use it for illegal operations like drug smuggling. Providing the communication to UAVs is another challenging issue facing the deployment of this technology. In this paper, a comprehensive survey on the UAVs and the related issues will be introduced. 
In addition, our envisioned UAV-based architecture for the delivery of UAV-based value-added IoT services from the sky will be introduced, and the relevant key challenges and requirements will be presented. <s> BIB017 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> Unmanned aerial vehicles (UAVs) have enormous potential in the public and civil domains. These are particularly useful in applications, where human lives would otherwise be endangered. Multi-UAV systems can collaboratively complete missions more efficiently and economically as compared to single UAV systems. However, there are many issues to be resolved before effective use of UAVs can be made to provide stable and reliable context-specific networks. Much of the work carried out in the areas of mobile ad hoc networks (MANETs), and vehicular ad hoc networks (VANETs) does not address the unique characteristics of the UAV networks. UAV networks may vary from slow dynamic to dynamic and have intermittent links and fluid topology. While it is believed that ad hoc mesh network would be most suitable for UAV networks yet the architecture of multi-UAV networks has been an understudied area. Software defined networking (SDN) could facilitate flexible deployment and management of new services and help reduce cost, increase security and availability in networks. Routing demands of UAV networks go beyond the needs of MANETS and VANETS. Protocols are required that would adapt to high mobility, dynamic topology, intermittent links, power constraints, and changing link quality. UAVs may fail and the network may get partitioned making delay and disruption tolerance an important design consideration. Limited life of the node and dynamicity of the network lead to the requirement of seamless handovers, where researchers are looking at the work done in the areas of MANETs and VANETs, but the jury is still out. 
As energy supply on UAVs is limited, protocols in various layers should contribute toward greening of the network. This paper surveys the work done toward all of these outstanding issues, relating to this new class of networks, so as to spur further research in these areas. <s> BIB018 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> Abstract The last decade has seen a growing interest in the use of Unmanned Aerial Vehicles (UAVs) for various applications and services. UAVs, or drones as referred to, have shown to be efficient in completing complex tasks when organized as ad hoc connected groups, thus forming a Flying Ad hoc Network (FANET). Although similar to Mobile Ad hoc Network (MANET) and Vehicular Ad hoc Network (VANET), FANETs have their own characteristics. One of the main difference is the fact that UAVs in general, but particularly when organized are FANETs, are mission-based, and their mobility models are often dictated by the purpose of their mission and the nature of the task they plan to accomplish. Therefore, routing protocols for FANETs should take into consideration the nature of the applications and services that the UAVs are deployed for, and factor in the mobility models. However, designing routing protocols for FANETs is not an easy task given the highly dynamic topology of FANETs and the flying constraints they are subjected to. Compared to topology-based routing, position-based routing demonstrated high efficiency and resilience to handle the high mobility of FANET nodes. To this end, in this paper, we propose a comprehensive survey of position-based routing protocols for FANETs with their various categories. We propose a classification and a taxonomy of these protocols, including a detailed description of the routing schemes used in each category. We propose a comparative study based on various criteria, and discuss the advantages and weaknesses of each protocol. 
Furthermore, new challenges for future research are presented, which introduce a new kind of coordination between UAVs and existing VANETs on the ground. The originality of this survey is that it complements the existing surveys on the same theme by providing more details on some aspects that have been addressed only ostensibly by other surveys in the literature. <s> BIB019 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> Due to the high mobility of vehicles, the frequent path failures caused by dynamic network topology, and a variety of obstructions, efficient data dissemination with delay constraint in vehicular ad hoc networks (VANETs) is a challenging issue. To address these problems, a novel mobile relaying technique by employing unmanned aerial vehicles (UAVs) is considered to facilitate data dissemination in vehicular environments where the communication infrastructures are not available or the network connectivity is poor. This paper studies and formulates the throughput maximization problem in UAV-assisted VANETs, which aims to achieve high throughput while guarantee the delay constraint of data flows to the vehicles in the area. To maximize the network throughput, the maximization problem tries to find an optimal delivery strategy for data dissemination by optimizing the transmission rate. To solve the problem, the knapsack problem can be reduced to the maximization problem, which is proved NP-hard. A polynomial time approximation scheme is proposed to achieve an approximate solution. Detailed theoretical analysis including time complexity and approximation ratio of the proposed algorithm is presented. Simulation results demonstrate the effectiveness of the proposed algorithm. <s> BIB020 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. 
INTRODUCTION <s> Unmanned Aerial Vehicles (UAVs or commonly known as drones) have the potential benefits in improving communications in wireless network environment. Besides their utilization in military operations, the potential use and applicability of UAVs in civilian applications is becoming a fast growing phenomenon. Routing is one of the vital aspects while conducting wireless communication among UAVs. It is extremely challenging because of dynamic topology changes resulted from frequent mobility of UAVs. In this paper, we analyse the performance of different Mobile Ad-hoc Network (MANET) routing protocols for the communication of UAVs. Using Riverbed (OPNET) Modeler, we evaluate the performance of four MANET routing protocols (AODV, DSR, GRP and OLSR) for UAV communications based on scenarios of various data rates supported by IEEE 802.11p (WAVE) standard. <s> BIB021 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> Nowadays, mini-drones, officially called unmanned aerial vehicles, are widely used in many military and civilian fields. Compared to traditional ad hoc networks, the mobile ad hoc networks established by UAVs are more efficient in completing complex tasks in harsh environments. However, due to the unique characteristics of UAVs (e.g., high mobility and sparse deployment), existing protocols or algorithms cannot be directly used for UAVs. In this article, we focus on the routes designed for UAVs, and aim to present a somewhat complete survey of the routing protocols. Moreover, the performance of existing routing protocols is compared in detail, which naturally leads to a great number of open research problems that are outlined afterward. <s> BIB022 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> Abstract UAV networks have been in the spotlight of the research community on the last decade. 
One of the civil applications in which UAV networks may have more potential is in emergency response operations. Having a UAV network that is able to deploy autonomously and provide communication services in a disaster scenario would be very helpful for both victims and first responders. However, generating exploratory trajectories for these networks is one of the main issues when dealing with complex scenarios. We propose an algorithm based on the well-known Particle Swarm Optimization algorithm, in which the UAV team follows the networking approach known as Delay Tolerant Network. We pursue two main goals, the first is exploring a disaster scenario area, and the second is making the UAVs converge to several victims groups discovered during the exploration phase. We have run extensive simulations for performing a characterization of the proposed algorithm. Both goals of the mission are successfully achieved with the proposed algorithm. Besides, in comparison to an optimal trajectory planning algorithm that sweeps the entire disaster scenario, our algorithm is able to discover faster the 25%, 50% and 75% of the scenario victims and it converges faster. In addition, in terms of connections events between a victim and a UAV, our algorithm shows more frequent connections and less time between consecutive connections. <s> BIB023 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> I. INTRODUCTION <s> The location-routing problem (LRP) of unmanned aerial vehicles (UAV) in border patrol for Intelligence, Surveillance, and Reconnaissance is investigated, where the locations of UAV base stations and the UAV flying routes for visiting the targets in border area are jointly optimized. The capacity of the base station and the endurance of the UAV are considered. A binary integer programming model is developed to formulate the problem, and two heuristic algorithms combined with local search strategies are designed for solving the problem. 
The experiment design for simulating the distribution of stations and targets in border is proposed for generating random test instances. Also, an example based on the practical border in Guangxi is presented to illustrate the problem and the solution approach. The performance of the two algorithms is analysed and compared through randomly generated instances. <s> BIB024
The rapid deployment of low-cost Wi-Fi radio interfaces, global positioning system (GPS), sensors, and embedded microcomputers has enabled unmanned aerial vehicles (UAVs) to be extensively used for various applications in the military and civilian domains. Examples of military applications are public protection and disaster relief operations BIB023, surveillance and reconnaissance BIB003, border surveillance BIB015, autonomous tracking BIB013, wildfire management BIB008, homeland security BIB024, wind estimation BIB011, remote sensing BIB012, traffic monitoring BIB005, and relays for ad hoc networks BIB009. In addition to military and public domains, there are also numerous commercial applications for UAVs, such as film making, farming, Internet delivery BIB016, goods transportation, and architecture surveillance. Nokia deployed a 2 kg ultra-mini 4G mobile base station, which was successfully mounted on a commercial quadcopter to provide coverage over a remote area in Scotland. Similarly, Amazon designed a small drone named Amazon Prime Air [17] to deliver customer parcels safely within 30 minutes. The associate editor coordinating the review of this manuscript and approving it for publication was Dongxiao Yu. Deploying a large number of drones introduces challenges, such as ensuring collision-free and seamless operation of the drones. UAVs can be categorized into four types based on their cruise durations and action radii: high-range UAVs operating at high altitude, with long endurance; medium-range UAVs with action radii between 700 and 1000 km; low-cost, small short-range UAVs with action radii less than 350 km and flight spans less than 3 km; and mini drones with limited cruising speeds of 10 to 30 km/h and cruising durations of less than 1 h. For proper cooperation and collaboration between multiple UAVs, inter-UAV wireless communication is necessary for forming a UAV network, or a flying ad hoc network (FANET).
A UAV is also called a drone, and thus the three terms, UAV network, FANET, and drone ad hoc network, are used interchangeably. There are two types of UAV networks, as shown in Fig. 1. In a single-UAV network, the UAV is linked to a ground base station, or to a satellite. In a multi-UAV network, multiple UAV devices are linked to each other in addition to the ground base station, or satellite. The UAVs in a multi-UAV network can be configured dynamically in different topologies from time to time. The UAV-to-UAV connection and the UAV-to-ground-base-station connection are called the U2U link and the U2G link, respectively. A routing protocol is essential for the transmission of packets between UAV nodes; however, there are challenges associated with developing it. One of them is the highly dynamic topology of UAV networks, which means that UAV links may be frequently disconnected. Another challenge is the range restriction between the UAVs and the base station. Therefore, high mobility, dynamic topology, and uneven UAV distributions make the development of a routing protocol ensuring reliable communication difficult in UAV networks BIB014, BIB004. In some situations, UAV networks may be regarded as a special form of mobile ad hoc networks (MANETs) and vehicular ad hoc networks (VANETs). Recently, significant research has been done on applying protocols from existing ground networks, such as VANETs, to UAV networks BIB020. However, the rapid mobility and highly dynamic topology of UAV networks make such adaptation challenging, thus limiting network performance and dependability. Several approaches and contributions have been proposed in the last few years, particularly those based on geographical position. These routing protocols have been designed to cope with the frequent disconnections between UAV nodes caused by their high mobility.
In addition, the routing protocols BIB010, BIB021 designed for MANETs can be improved by incorporating the unique functions and characteristics of UAV networks. The diverse mobility in a multi-UAV environment requires highly accurate localization with short update intervals. GPS provides node position information at one-second intervals, which may not be satisfactory for UAV routing protocols. In such cases, an inertial measurement unit can be used, which is calibrated by the GPS signal to provide the UAV position at a faster rate BIB001, BIB002. Some researchers proposed differential GPS (DGPS) or assisted GPS, using ground-based reference methods for range corrections with an accuracy of about 10 m BIB006, BIB007. In the last few years, comprehensive surveys on UAV routing protocols have been reported BIB017 - BIB022, as summarized in Table 1, that provide information on general issues in UAV networks, such as applications, communication protocols, and routing techniques. Of them, only a few survey articles BIB018, BIB019 provide details on UAV routing techniques; e.g., BIB018 reviews a few well-known topology-based routing protocols, and BIB019 surveys a few position-based routing protocols; i.e., they do not cover all categories of routing protocols, such as topology-based and cluster-based routing protocols. The objective of this study is to survey the routing protocols applicable to UAV networks, where position-based, topology-based, cluster-based, deterministic, stochastic, and social-network-based routing protocols are extensively reviewed. In our work, we introduce a comprehensive survey of 21 topology-based routing protocols, 22 position-based routing protocols, 5 cluster-based routing protocols, 6 different data-forwarding-based routing protocols, and 6 field experiments of routing protocols in UAV networks and FANETs, with their various categories.
After discussing network architecture, various routing techniques, and the taxonomy of routing protocols in UAV networks, we compare the routing protocols qualitatively in terms of their major features, characteristics, and performance. Then, we address important open issues and research challenges in designing routing protocols for UAV networks. To the best of our knowledge, this is the first article that studies all categories of routing protocols reported in the literature. Recently, several survey papers have reviewed routing protocols in UAV networks, as summarized in Table 1. However, none of them has focused on all categories of routing protocols in UAV networks. Most of the survey papers in the literature focus neither on position-based routing nor on topology-based routing. In our work, we describe and compare in depth all categories of routing protocols, including position-based, topology-based, cluster-based, deterministic, stochastic, and social-network-based routing in UAV networks. Each category of routing protocols is also classified into different sub-categories. In addition, we include recent routing protocols from the literature. The rest of this paper is organized as follows. In the following section, the design considerations for UAV networks are summarized. In Section III, UAV network architectures and communication issues are addressed. In Section IV, the routing protocols for UAV networks are presented, in which topology-based, position-based, hierarchical, deterministic, stochastic, and social-network-based routing protocols are extensively surveyed. In Section V, the different routing protocols are qualitatively compared in terms of their characteristics and performance. In Section VI, important open issues and research challenges are discussed. Finally, the paper is concluded in Section VII.
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> B. MOBILITY <s> Clustering is an important research topic for mobile ad hoc networks (MANETs) because clustering makes it possible to guarantee basic levels of system performance, such as throughput and delay, in the presence of both mobility and a large number of mobile terminals. A large variety of approaches for ad hoc clustering have been presented, whereby different approaches typically focus on different performance metrics. This article presents a comprehensive survey of recently proposed clustering algorithms, which we classify based on their objectives. This survey provides descriptions of the mechanisms, evaluations of their performance and cost, and discussions of advantages and disadvantages of each clustering scheme. With this article, readers can have a more thorough and delicate understanding of ad hoc clustering and the research trends in this area. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> B. MOBILITY <s> Copyright © 2007 by D. Jung and P. Tsiotras. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission. <s> BIB002 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> B. MOBILITY <s> Unmanned air vehicles (UAVs) can provide important communication advantages to ground-based wireless ad hoc networks. In this paper, the location and movement of UAVs are optimized to improve the connectivity of a wireless network. Four types of network connectivity are quantified: global message connectivity, worst-case connectivity, network bisection connectivity, and k-connectivity. The problems of UAV deployment and movement are formulated to improve the different types of connectivity. Both problems are NP-hard. For the deployment case, some heuristic adaptive schemes are proposed to yield simple but effective solutions. In addition, a closed-form solution for the two-node one-UAV case is provided. 
For k-connectivity, we propose an algorithm that improves connectivity using Delaunay triangulation. To optimize the UAV movement, an algorithm that tracks changes in the network topology is constructed. The simulation results show that by only deploying a single UAV, the global message network connectivity and the worst-case network connectivity can be improved by up to 109% and 60%, respectively. The network bisection connectivity and the k -connectivity can also be significantly improved. <s> BIB003 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> B. MOBILITY <s> The security of private information of users online is a critical topic, particularly since social networking applications became popular. According to Cutillo et al. [1], beyond the usual vulnerabilities that threaten any distributed application over the Internet, online social networks raise specific privacy concerns due to their inherent handling of personal data. <s> BIB004 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> B. MOBILITY <s> An unmanned aerial ad hoc network (UAANET) is a special type of mobile ad hoc network (MANET). For these networks, researchers rely mostly on simulations to evaluate their proposed networking protocols. Hence, it is of great importance that the simulation environment of a UAANET replicates as much as possible the reality of UAVs. One major component of that environment is the movement pattern of the UAVs. This means that the mobility model used in simulations has to be thoroughly understood in terms of its impact on the performance of the network. In this paper, we investigate how mobility models affect the performance of UAANET in simulations in order to come up with conclusions/recommendations that provide a benchmark for future UAANET simulations. To that end, we first propose a few metrics to evaluate the mobility models. 
Then, we present five random entity mobility models that allow nodes to move almost freely and independently from one another and evaluate four carefully-chosen MANET/UAANET routing protocols: ad hoc on-demand distance vector (AODV), optimized link state routing (OLSR), reactive-geographic hybrid routing (RGR) and geographic routing protocol (GRP). In addition, flooding is also evaluated. The results show a wide variation of the protocol performance over different mobility models. These performance differences can be explained by the mobility model characteristics, and we discuss these effects. The results of our analysis show that: (i) the enhanced Gauss–Markov (EGM) mobility model is best suited for UAANET; (ii) OLSR, a table-driven proactive routing protocol, and GRP, a position-based geographic protocol, are the protocols most sensitive to the change of mobility models; (iii) RGR, a reactive-geographic hybrid routing protocol, is best suited for UAANET. <s> BIB005 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> B. MOBILITY <s> Flying ad-hoc networks are becoming a promising solution for different application scenarios involving unmanned aerial vehicles, like urban surveillance or search and rescue missions. However, such... <s> BIB006
A mobility model is necessary for efficient communication between UAVs; it captures their trajectories and speed variations. In UAV networks, mobility models are application-dependent. For some multi-UAV systems, global path plans are preferred; in this case, the movement of the UAVs is predefined, and the mobility model is regular. However, multi-UAV systems also work autonomously, where the path is not predefined. Mobility models also depend on the type of UAV considered. UAVs are categorized as large UAVs, small UAVs, and mini-UAVs BIB005. For UAVs performing autonomous military operations in groups without centralized control, the reference point group mobility model (RPGMM) can be the best choice. The Manhattan grid mobility model can be used to emulate a map-based approach while considering the geographic restrictions of the UAVs BIB004. For patrolling applications, where UAVs can adopt flexible trajectories, other models, such as the random waypoint mobility model, can be used BIB006. In the Gauss-Markov mobility model, the movement of a UAV depends on its previous speed and direction, which makes this model well suited to UAV relay networks BIB004. Node mobility is a significant issue in UAV networks, as well as in other ad hoc networks. Compared to VANET nodes, MANET nodes are relatively slow. The mobility of FANET nodes is higher than that of both VANET and MANET nodes BIB002. All UAV nodes are highly mobile, with speeds ranging from 30 to 460 km/h BIB003. This results in fluctuations in the wireless link, and, as such, the efficiency of routing techniques varies with the speed of the UAVs. A routing layer ensures end-to-end delivery, and medium access control (MAC) ensures the quality of service (QoS) for one-hop transmission BIB001.
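As a concrete illustration of the memory-based motion just described, one step of the Gauss-Markov mobility model can be sketched as follows. This is a minimal 2-D sketch; the memory parameter `alpha`, the noise deviations, and the function names are illustrative assumptions, not values taken from any cited protocol.

```python
import math
import random

def gauss_markov_step(speed, direction, mean_speed, mean_dir,
                      alpha=0.75, sigma_s=1.0, sigma_d=0.2):
    """One Gauss-Markov update: the new speed and direction are weighted
    combinations of their previous values (memory level alpha), the long-run
    means, and Gaussian noise, producing smooth rather than jittery motion."""
    noise = math.sqrt(1.0 - alpha ** 2)
    speed = alpha * speed + (1 - alpha) * mean_speed + noise * random.gauss(0.0, sigma_s)
    direction = alpha * direction + (1 - alpha) * mean_dir + noise * random.gauss(0.0, sigma_d)
    return speed, direction

def move(x, y, speed, direction, dt=1.0):
    """Advance a UAV position by one time step along the current heading."""
    return (x + speed * math.cos(direction) * dt,
            y + speed * math.sin(direction) * dt)
```

With `alpha` close to 1, the trajectory retains strong memory of its past heading, which loosely matches fixed-wing UAVs that cannot turn sharply; `alpha` close to 0 degenerates toward a random walk.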
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> C. LATENCY <s> The unmanned aerial system (UAS), as a typical aeronautical Ad-hoc network (AANET) system, which is composed of unmanned aerial vehicle clusters has a lot of advantages such as flexible deployment, excellent cost-benefit ratio and free from the effect of environment. It will be an important part of the space-sky information network in the future. Due to the fast-moving of nodes in AANET/UAS, the topology changes frequently. The routing overhead becomes huge and inefficient, and connectivity keeping becomes difficult. In this paper, a shortest path routing algorithm based on grid position no center(GPNC-SP algorithm) is proposed, which uses the logical grid distance to replace the original Euclidean distance to reduce the sensitivity of fast-moving nodes. This algorithm automatically computes and maintains the adjacency relationship and topology structure by perception and updating algorithm and adopts Dijkstra algorithm to achieve the shortest routing path. Also, a regional reconstruction strategy(RSS) is designed to optimize the routing path dynamically. At the same time, two metrics, i.e., the percentage of the effective communication area(Peca%) and the sensitivity with logical grid size (Sg) are used to determine the optional scope of logical grid width. Comparing with the two traditional typical routing protocols (DREAM and DSDV algorithms), this algorithm sacrifices a certain degree of communication distance to achieve better performances of network overhead, link stability, and calculating speed. The final simulation experiments under the Matlab environment demonstrate the effectiveness and practicality of this algorithm. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> C. LATENCY <s> This paper aims at solving the end-to-end delay-constrained routing problem in a local way for flying ad hoc networks (FANETs). 
Due to the high mobility, it is difficult for each node in FANETs to obtain the global information. To solve this issue, we propose an adaptive delay-constrained routing with the aid of a stochastic model, which allows the senders to deliver the packets with only local information. We represent the problem in a mathematical form, where the effective transmission rate is viewed as the optimization objective and the link quality and end-to-end delay as the constraints. And, some mathematical tools are used to obtain the approximate solutions for the optimization problem. Before designing the routing scheme, the senders calculate the transition probability for its relay node by jointly considering local delay estimation and expected one-hop delay. Then, the sender transmits the packets to their relay node with transition probability. Finally, we prove the convergence of the proposed routing algorithm and analyse its performances. The simulation results show that the proposed routing policy can improve the network performance effectively in terms of throughput, loss rate, and end-to-end delay. <s> BIB002
Disaster monitoring and search-and-destroy operations require minimal latency, as the information needs to be transmitted at very high rates. It is almost impossible to have a network without delay; however, latency in a network can be minimized and controlled within certain limits. Priority schemes may also be used in UAV networks to control and minimize latency BIB001. In addition, priority-based routing protocols can be used to achieve QoS for various message types. Coordination among UAVs, efficient collision control, and congestion control protocols also play vital roles. Therefore, choosing the most suitable routing protocol is essential for controlling latency and improving the QoS of UAV networks. In BIB002, the authors aim to solve the end-to-end delay-constrained routing problem in a local way for FANETs. Owing to the high mobility, obtaining global information at each node is difficult. To resolve this issue, the authors designed an adaptive delay-constrained routing scheme with the aid of a stochastic model, which allows senders to deliver packets using only local information.
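A local, stochastic relay choice of the kind described in BIB002 can be illustrated with a short sketch. The scoring rule below (link quality divided by estimated one-hop delay, normalized into a probability distribution) and all names are our own illustrative assumptions rather than the exact formulation of the cited work.

```python
import random

def transition_probabilities(neighbors):
    """neighbors: list of (node_id, link_quality in (0, 1], est_delay_s).
    Score each candidate relay by link quality per unit expected delay,
    then normalize the scores into a probability distribution."""
    scores = [(nid, quality / delay) for nid, quality, delay in neighbors]
    total = sum(score for _, score in scores)
    return [(nid, score / total) for nid, score in scores]

def pick_relay(neighbors):
    """Roulette-wheel selection of the next hop from local information only."""
    probs = transition_probabilities(neighbors)
    r, acc = random.random(), 0.0
    for nid, p in probs:
        acc += p
        if r < acc:
            return nid
    return probs[-1][0]  # guard against floating-point rounding at the tail
```

A sender thus favors low-delay, high-quality links without needing global topology knowledge, while the residual randomness spreads load across alternative relays.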
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> F. FLIGHT FORMATION <s> Three dimensional formation control problem of multi-UAV system with communication constraints of non-uniform time delays and jointly-connected topologies is investigated. No explicit leader exists in the formation team, and, therefore, a consensus-based distributed formation control protocol which requires only the local neighbor-to-neighbor information between the UAVs is proposed for the system. The stability analysis of the proposed formation control protocol is also performed. The research suggests that, when the time delay, communication topology, and control protocol satisfy the stability condition, the formation control protocol will guide the multi-UAV system to asymptotically converge to the desired velocity and shape the expected formation team, respectively. Numerical simulations verify the effectiveness of the formation control system. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> F. FLIGHT FORMATION <s> During high-speed cooperative formation flight, when a single UAV fails, it must exit the formation, and the remaining UAVs need to form a new close or super-close formation rapidly. A fast close cooperative formation controller is proposed to compensate for the reduced robustness, overcome the drawback of poor timeliness, and analyze the dynamic characteristics of UAV formation flight. Taking known external factors into consideration, a longitudinal maneuver model with nonlinear thrust vectoring and unsteady aerodynamics is set up, with the formation velocity, flat tail rudder angle, thrust vector, and pitch angle velocity as the corresponding input command signals for studying the dynamic characteristics of UAV formation flight. Meanwhile, analysis shows that the formation flight distance error converges to a fixed value, indicating good cooperative flight stability.
The simulation results show that the controller can effectively improve the speed of forming the close or super-close formation and maintain the stability of formation flight, which provides a method for designing close or super-close formation flight controllers. <s> BIB002
In BIB001 , a new formation flight control protocol for multi-UAV systems is introduced, which accounts for diverse, asymmetric delays and dynamically changing topologies. The authors in BIB001 proposed a consensus-based distributed formation control protocol to address the stability problem in multi-UAV formation, which needs only the local neighbor-to-neighbor information between UAVs. The simulation results indicate that if the communication topology is jointly connected and the non-uniform delays satisfy the design requirements, then the multi-UAV system can shape the desired formation and maintain the expected velocity, heading angle, and flight path angle. However, the size and collision avoidance of UAVs are not considered in BIB001 . In BIB002 , the authors presented multi-UAV cooperative formation during high-speed flight. When a single UAV needs to exit from, or quickly join, the flight formation during high-speed flight, the controller design faces a colossal challenge. Both the inner-loop controller and the outer-loop controller are based on nonlinear dynamic inversion control.
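A consensus update of the kind used in such leaderless formation protocols can be illustrated with a minimal discrete-time sketch (one spatial dimension, synchronous updates, no delays). The function and the offset/neighbor structure are assumptions for illustration only; the protocol in BIB001 additionally handles non-uniform delays and jointly-connected switching topologies.

```python
def consensus_step(positions, offsets, neighbors, eps=0.2):
    """One synchronous consensus iteration: each UAV moves its state toward
    its neighbors' states, corrected by the desired formation offsets, using
    only local neighbor-to-neighbor information."""
    new = {}
    for i, x in positions.items():
        disagreement = sum((positions[j] - offsets[j]) - (x - offsets[i])
                           for j in neighbors[i])
        new[i] = x + eps * disagreement
    return new

# Three UAVs on a line topology 0 -- 1 -- 2 with desired formation offsets.
positions = {0: 0.0, 1: 5.0, 2: -3.0}
offsets = {0: 0.0, 1: 2.0, 2: 4.0}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(300):
    positions = consensus_step(positions, offsets, neighbors)
# After convergence, positions[i] - offsets[i] agree across all UAVs,
# i.e. the team holds the prescribed shape.
```

For a connected (or jointly connected) topology and a small enough step size `eps`, the offset-corrected states converge to a common value, which is exactly the "shape the expected formation" behavior the stability analysis above establishes.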
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> A. UAV NETWORK ARCHITECTURE <s> Unmanned air vehicles (UAVs) can provide important communication advantages to ground-based wireless ad hoc networks. In this paper, the location and movement of UAVs are optimized to improve the connectivity of a wireless network. Four types of network connectivity are quantified: global message connectivity, worst-case connectivity, network bisection connectivity, and k-connectivity. The problems of UAV deployment and movement are formulated to improve the different types of connectivity. Both problems are NP-hard. For the deployment case, some heuristic adaptive schemes are proposed to yield simple but effective solutions. In addition, a closed-form solution for the two-node one-UAV case is provided. For k-connectivity, we propose an algorithm that improves connectivity using Delaunay triangulation. To optimize the UAV movement, an algorithm that tracks changes in the network topology is constructed. The simulation results show that by only deploying a single UAV, the global message network connectivity and the worst-case network connectivity can be improved by up to 109% and 60%, respectively. The network bisection connectivity and the k -connectivity can also be significantly improved. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> A. UAV NETWORK ARCHITECTURE <s> The cooperation between the primary and the secondary users has attracted a lot of attention in cognitive radio networks. However, most existing research mainly focuses on the single-hop relay selection for a primary transmitter-receiver pair, which might not be able to fully explore the benefit brought by cooperative transmissions. In this paper, we study the problem of multi-hop relay selection by applying the network formation game. 
In order to mitigate interference and reduce delay, we propose a cooperation framework FTCO by considering the spectrum sharing in both the time and the frequency domain. Then we formulate the multi-hop relay selection problem as a network formation game, in which the multi-hop relay path is computed via performing the primary player's strategies in the form of link operations. We also devise a distributed dynamic algorithm PRADA to obtain a global-path stable network. Finally, we conduct extensive numerical experiments and our results indicate that cooperative multi-hop relaying can significantly benefit both the primary and the secondary network, and that the network graph resulted from our PRADA algorithm can achieve the global-path stability. <s> BIB002 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> A. UAV NETWORK ARCHITECTURE <s> Summary ::: Integrated frameworks have extended the applications of networks beyond a simple data sharing unit. Simultaneously, operating networks can form a layered structure that can operate as homogeneous as well as dissociated units. Networks using unmanned aerial vehicles follow similar criteria in their operability. Unmanned aerial vehicles can act as single searching unit controlled by human or can form an aerial swarm that can fly autonomously with the capability of forming an aerial network. Such aerial swarms are categorized as aerial ad hoc networks. Cooperation amongst different networks can be realized using various frameworks, models, architectures and middlewares. Several solutions have been developed that can provide easy network deployment of aerial nodes. However, a combined literature is not present that provides a comparison between these approaches. Keeping this in view, various cooperative approaches for similar formation using aerial vehicles have been discussed in this paper. The detailed study and comparative analysis of these approaches have been included. 
Further, the paper also includes various software solutions and their comparisons based on common parameters. Finally, various open issues have been discussed that can provide insight into ongoing research and problems that are yet to be resolved in these networks. Copyright © 2016 John Wiley & Sons, Ltd. <s> BIB003
Nowadays, many public and civilian applications can be performed using multi-UAV networks. Most of these multi-UAV networks are small, and the UAVs work in coordination. Multi-UAV networks comprise several components, such as UAVs and ground control systems; hence, the networks may have several layers, such as UAV-to-UAV communications and UAV-to-ground communications. The key features of multi-UAV networks are reliability and survivability through redundancy: the failure of a single UAV causes the network to reorganize and maintain communication via other nodes. UAV networks can be categorized based on their applications, which require certain specifications, such as the degree of node mobility, network architecture, routing, and control. Depending on the scenario, UAVs may be used to form different networks. Latency, scalability, and adaptability are fundamental design issues in implementing UAV networks. Moreover, for mission-critical network designs, packet delay is a significant issue BIB002 , BIB003 . Furthermore, for large area coverage, the number of nodes is a key issue BIB001 . Fig. 2 shows the taxonomy of UAV networks.
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> V. ROUTING PROTOCOLS FOR UAV NETWORKS <s> Unmanned Aerial Vehicles (UAVs or commonly known as drones) have the potential benefits in improving communications in wireless network environment. Besides their utilization in military operations, the potential use and applicability of UAVs in civilian applications is becoming a fast growing phenomenon. Routing is one of the vital aspects while conducting wireless communication among UAVs. It is extremely challenging because of dynamic topology changes resulted from frequent mobility of UAVs. In this paper, we analyse the performance of different Mobile Ad-hoc Network (MANET) routing protocols for the communication of UAVs. Using Riverbed (OPNET) Modeler, we evaluate the performance of four MANET routing protocols (AODV, DSR, GRP and OLSR) for UAV communications based on scenarios of various data rates supported by IEEE 802.11p (WAVE) standard. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> V. ROUTING PROTOCOLS FOR UAV NETWORKS <s> The ever-increasing demand for flexible and portable communications has led to a rapid evolution in networking between unmanned aerial vehicles (UAVs) often referred to as flying ad-hoc networks (FANETs). However, due to the exclusive characteristics of UAVs such as high mobility, frequent topology change and 3D space movement, make routing a challenging task in FANETs. Due to these characteristics, designing new routing protocols for FANETs is quite difficult. In the literature study of FANETs, a variety of traditional ad-hoc networking protocols have been suggested and tested for FANETs to establish an efficient and robust communication among the UAVs. In this context, topology-based routing is considered the most significant approach for solving the routing issues in FANETs. 
Therefore, in this article we specifically focus on topology-based routing protocols with the aim of improving the efficiency of the network in terms of throughput, end-to-end delay, and network load. We present a brief review of the most important topology-based routing protocols in the context of FANETs. We describe their working features for exchanging information, along with the pros and cons of each protocol. Moreover, simulation analyses of some of the topology-based routing protocols are also evaluated in terms of end-to-end delay, throughput, and network load using the optimized network engineering tools (OPNET) simulator. Furthermore, this work can be used as a source of reference for researchers and network engineers who seek literature that is relevant to routing in FANETs. <s> BIB002
Several routing protocols have been proposed for UAV networks BIB001 , BIB002 - . Initially, MANET and VANET routing protocols were chosen for testbeds in UAV networks. Owing to the unique characteristics of UAVs, such as rapid topology changes and high mobility, however, the routing protocols designed for MANETs and VANETs turned out not to be suitable for UAV networks. Therefore, routing protocols specific to UAV networks have been proposed recently. UAV routing protocols fall into two broad groups: network-architecture-based routing protocols and data-forwarding-based routing protocols. Network-architecture-based routing protocols are further divided into three subcategories: topology-based, position-based, and hierarchical routing protocols. Topology-based routing comprises three types: mesh-based, tree-based, and hybrid. Tree-based routing can be further classified into source-rooted and core-rooted routing. Source-rooted tree routing is a multicast routing approach in which the source node is the root of the multicast tree and maintains the tree construction and distribution. In core-rooted tree routing, cores are nodes with special functions such as multicast data distribution and membership management. In mesh-based routing, packets are distributed among all the interconnected nodes in the mesh structure. Mesh building and route discovery can be performed in two ways: broadcasting is used to discover routes, and a core point is used for mesh building. Mesh-based routing performs better than tree-based routing in high-mobility networks, and it provides alternative paths to forward data packets from the source to the destination. However, to maintain and manage the routing topology, mesh-based routing needs control packets, which creates routing overhead and results in power inefficiency. In source-based routing, intermediate nodes need not maintain up-to-date routing information to forward packets.
The major limitation of source routing is overhead: in a large network the listed route becomes long, and every packet must carry the entire route in its header, which wastes network bandwidth. In hop-by-hop routing, the route to the destination is distributed across next-hop entries: when a node receives a packet for a destination, it forwards the packet to the next hop recorded for that destination. Hybrid routing protocols combine tree-based and mesh-based routing; the availability of multiple routing paths is their major advantage. In this section, we review the routing protocols applicable to UAV networks, in which topology-based, position-based, hierarchical, deterministic, stochastic, and social-network-based routing protocols are extensively surveyed. Fig. 12 shows the taxonomy of routing protocols for UAV networks.
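The contrast between source routing and hop-by-hop forwarding described above can be made concrete with a small sketch. The node names, packet layout, and tables are illustrative assumptions, not any specific protocol's format.

```python
def next_hop_source_routed(packet, current):
    """Source routing: the sender embeds the entire path in the packet
    header, so an intermediate node only reads its successor from that
    list. The per-packet header grows with path length, which is the
    bandwidth overhead noted above."""
    route = packet["route"]
    i = route.index(current)
    return route[i + 1] if i + 1 < len(route) else None

def next_hop_table_routed(tables, current, destination):
    """Hop-by-hop routing: each node keeps only a destination -> next-hop
    table, so packets carry no path and headers stay constant-size."""
    return tables[current].get(destination)

packet = {"route": ["S", "A", "B", "D"]}                      # full path in header
tables = {"S": {"D": "A"}, "A": {"D": "B"}, "B": {"D": "D"}}  # per-node state
```

Either style delivers the packet along S → A → B → D; the difference is purely where the routing state lives: in every packet (source routing) or in every node (hop-by-hop).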
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> a: OPTIMIZED LINK ROUTING (OLSR) <s> Wireless mesh networks (WMNs) are a type of radio-based network systems which require minimal configuration and infrastructure. One of the factors that influence the performance of WMNs is the underlying routing protocol used. Here, we propose one such routing protocol, called M-OLSR, which is a variant of OLSR, a traditional link-state protocol developed to meet the requirements of mobile adhoc networks MANETs. Our approach improves the throughput and packet delivery ratio, while minimizing routing overhead and delay, by choosing a stable path for packet forwarding through static routers. Simulation results demonstrate that, for WMNs, the M-OLSR outperforms OLSR in terms of throughput, packet delivery ratio, and routing overhead. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> a: OPTIMIZED LINK ROUTING (OLSR) <s> A Mobile Ad-Hoc Network (MANET) is a popular type of wireless network that is formed by a collection of mobile nodes. Each node in such network has the capability to communicate with its neighbors and non-neighbors through a wireless medium without using any existing network infrastructure. Due to the lack of infrastructure, all nodes in Ad-Hoc network are designed to act as an end system and a router for other nodes. Traditionally, the dominant design methodology for the network protocols is based on an Open Systems Interconnection (OSI) reference model. This methodology divided the stack into seven layers where each layer operates independently. Due to the dynamics of the Unmanned Aerial Vehicle (UAV) Ad-Hoc network, the layered architecture is not flexible enough to achieve certain quality of services (QoS) required by some applications. 
To overcome the limitations of the layering technique, cross-layering approach is used in this paper so to adjust some key parameters in the first three layers based on the aircraft attitude variations (pitch, roll and yaw), hence we can enhance the performance of such networks. To that respect, directional antenna is going to be used by the UAVs to extend the coverage area and reduce the number of hops. Meanwhile, we designed a new Medium Access Control (MAC) scheme that adapts its parameters based on the channel bit error rate which is indeed affected by the new antenna system and aircraft attitude. we called this scheme Intelligent Medium Access Control Protocol for Unmanned Aerial Vehicle (IMAC UAV). As for the routing protocol, We developed the Optimized Link State Routing Protocol (OLSR) so that the decision for selecting the route will be based on a local profile that holds the gathered information from the first three layers. Our new scheme is called Directional Optimized Link State Routing Protocol (DOLSR). We proved by OPNET simulator that our proposed techniques gave better end-to-end delay than the IEEE802.11 standard. <s> BIB002 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> a: OPTIMIZED LINK ROUTING (OLSR) <s> In this paper, we propose the integration of a cartography gathering scheme to enhance the capacity of the Optimized Link State Routing Protocol (OLSR) to properly track node movements in dynamic networks. We propose an improved version of OLSR called the Cartography Enhanced Optimized Link State Routing Protocol (CE-OLSR), a novel routing protocol designed for mobile multi-hop ad hoc networks. Our contribution is three fold. First, we propose an efficient network cartography collection scheme solely based on OLSR signaling traffic. We show that this cartography is much richer than the mere topology gathered by the seminal OLSR. Second, we designed an enhanced version of OLSR based on the collected cartography. 
We show that CE-OLSR insures a much better responsiveness and copes appropriately with the mobility of nodes. Third, we conduct an extensive set of simulations to compare the performance of our proposal against that of OLSR. Simulations results show that the proposed CE-OLSR outperforms greatly OLSR in terms of a much better route validity, a much higher throughput and a much lower average delay. For instance, at a speed of 20 m/s, CE-OLSR achieves a route validity beyond 93% while that provided by OLSR barely attains 30%. At high speeds, CE-OLSR delivers more than 3 times the throughput of OLSR with an average end to end delay 21 times smaller. As such, CE-OLSR stands out not only as an appropriate routing protocol for mobile multi-hop ad hoc networks, but also a viable protocol for the transport of time critical data. <s> BIB003 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> a: OPTIMIZED LINK ROUTING (OLSR) <s> The usage of Unmanned Aerial Vehicles (UAVs) is increasing day by day. In recent years, UAVs are being used in increasing number of civil applications, such as policing, fire-fighting, etc in addition to military applications. Instead of using one large UAV, multiple UAVs are nowadays used for higher coverage area and accuracy. Therefore, networking models are required to allow two or more UAV nodes to communicate directly or via relay node(s). Flying Ad-Hoc Networks (FANETs) are formed which is basically an ad hoc network for UAVs. This is relatively a new technology in network family where requirements vary largely from traditional networking model, such as Mobile Ad-hoc Networks and Vehicular Ad-hoc Networks. In this paper, Flying Ad-Hoc Networks are surveyed along with its challenges compared to traditional ad hoc networks. The existing routing protocols for FANETs are then classified into six major categories which are critically analyzed and compared based on various performance criteria. 
Our comparative analysis will help network engineers in choosing appropriate routing protocols based on the specific scenario where the FANET will be deployed. <s> BIB004 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> a: OPTIMIZED LINK ROUTING (OLSR) <s> Abstract Recently, Flying Ad-hoc Networks (FANETs), enabling ad-hoc networking between Unmanned Aerial Vehicles (UAVs) is gaining importance in several military and civilian applications. The sensitivity of the applications requires adaptive; efficient; delay bounded and scalable communication network among UAVs for data transmission. Due to communication protocol complexity; rigidity; cost of commercial-off-the-shelf (COT) components; limited radio bandwidth; high mobility and computational resources; maintaining the desired level of Quality of Service (QoS) becomes a daunting task. For the first time in this research we propose multicluster FANETs for efficient network management; the proposed scheme considerably reduces communication cost and optimizes network performance as well as exploit low power; less complex and low cost IEEE 802.15.4 (MAC) protocol for intercluster and intracluster communication. In this research both beacon enabled mode and beaconless modes have been investigated with Guaranteed Time Slots (GTS) and virtual Time Division Multiple Access (TDMA) respectively. The methodology plays a key role towards reserving bandwidth for latency critical applications; eliminate collisions and medium access delays. Moreover analysis ad-hoc routing protocols including two proactive (OLSR, DSDV) and one reactive (AODV) is also presented. The results shows that the proposed scheme guarantees high packet delivery ratios while maintaining acceptable levels of latency requirements comparable with more complex and dedicatedly designed protocols in literature. 
<s> BIB005 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> a: OPTIMIZED LINK ROUTING (OLSR) <s> With small UAVs (unmanned aerial vehicles) or drones having become widely accessible, new prospects were opened for applying them in event tracking, search and rescue operations, disaster recovery, and goods delivery. Flying Ad hoc Networks (FANETs) is a subdivision of a prominent Mobile Ad hoc Network (MANET) paradigm. This paradigm deals with the multiple mini-UAVs deployment. FANETs are characterized by several specific features, when compared with other ad hoc network types. Moreover, they present some major challenges to the research community. One of them is the FANET routing protocol design. The protocol must be able to create an effective route between UAVs as well as to adjust it to the promptly changing topology in real time. In the present work, possible FANET application scenarios are described together with the existing mobility models for simulators, with special reference being made to flying devices and relevant realistic motion. Monitoring traffic is a potential field where FANETs may replace intensive labor and an involved observational infrastructure. All results were obtained experimentally. Simulation modeling based on ns-2 network simulator was applied as a research method. The study was made of AODV and OLSR routing protocols to choose the best one for mini-UAVs FANET. <s> BIB006 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> a: OPTIMIZED LINK ROUTING (OLSR) <s> The article examines node connectivity as one of the most significant quality characteristics of any information and telecommunication network. The article is aimed to determine FANET parameters providing PDR (Packet Delivery Ratio) more than 90% for the specified area. All results were received on the basis of the experimental study. Imitation modeling based on an ns-2 network simulator was applied as a research method. 
Characteristics of the network based on the simulation model utilizing the AODV and OLSR routing protocols have been evaluated. The authors studied also the connectivity dependence on such network parameters as the nodes number and the transmission range. <s> BIB007 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> a: OPTIMIZED LINK ROUTING (OLSR) <s> With increasingly popular multi-sized unmanned aerial vehicles (UAVs), also referred to as drones, UAV Ad Hoc Networks (UANETs) play an essential role in the realization of coordinating the access of drones to controlled airspace, and providing navigation services between locations in the context of Internet-of-Drones (IoD). Because of the versatility, flexibility, easy installation and relatively small operating expenses of drones, UANETs are more efficient in completing complex tasks in harsh environments, e.g., search and destroy operations, border surveillance, disaster monitoring, etc. However, due to the high mobility, drastically changing network topology, and intermittently connected communication links, existing routing protocols and communication algorithms in Mobile Ad Hoc Networks and Vehicular Ad Hoc Networks cannot be directly applied in UANETs. In this paper, we propose a link-quality and traffic-load aware optimized link state routing protocol, also called LTA-OLSR, to provide efficient and reliable communication and data transmission in UANETs. A link quality scheme is proposed to differentiate link qualities between a node and its neighbor nodes by using the statistical information of received signal strength indication (RSSI) of received packets. A traffic load scheme is also proposed to assure a light load path by taking account of MAC layer channel contention information and the number of packets stored in the buffer. 
We evaluate the proposed schemes through extensive simulation experiments using OMNeT++ and compare their performance with the original OLSR and DSR protocols. The simulation results indicate that the proposed routing protocol can be a viable approach in UAV Ad Hoc Networks. <s> BIB008
OLSR is one of the most widely used routing protocols in ad hoc networks. Multi-point relay (MPR) selection is the most important factor affecting the performance of OLSR. The sender node selects MPR nodes, and these MPR nodes together cover its two-hop neighbors BIB004 . HELLO messages are used to find one-hop and two-hop neighbors. In UAV networks, nodes change their locations and interconnection links frequently. The key feature that reduces control messages in OLSR is MPR-ing: MPR nodes are a subset of nodes responsible for forwarding link-state updates. This optimization of pure link-state routing is useful in very dense network environments, where the MPR method is best employed. When a good number of nodes change, control messages need to be exchanged, and these control messages create overhead in the network BIB005 . Based on the mechanism of OLSR, several new routing protocols have been proposed, such as D-OLSR BIB002 , M-OLSR BIB001 , and CE-OLSR BIB003 . In BIB006 , the authors used OLSR routing for traffic monitoring in FANETs. The simulation results show that the OLSR routing protocol is not suited to highly dynamic, low-density FANETs because of its large overhead. However, OLSR provides fast connection setup and low delay because routing information is readily available in the routing table. In BIB007 , the simulation results show that OLSR outperforms AODV in terms of data delivery in UAV networks. In BIB008 , the authors proposed a link-quality and traffic-load aware optimized link state routing protocol, called LTA-OLSR, to provide efficient and reliable data transmission in UAV networks. A link-quality scheme was designed to differentiate link qualities between a node and its neighbor nodes using the statistical information of the received signal strength indication (RSSI) of received packets.
The authors also proposed a traffic-load scheme to guarantee a lightly loaded path by considering channel contention at the MAC layer and the number of packets stored in the buffer.
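The greedy MPR selection at the heart of OLSR can be sketched as follows. This is a simplified version of the heuristic (it ignores the willingness values and degree-based tie-breaking of RFC 3626), with illustrative neighbor data:

```python
def select_mprs(one_hop, two_hop_via):
    """Greedily pick one-hop neighbors as MPRs until every two-hop
    neighbor is covered. `two_hop_via[n]` is the set of two-hop
    neighbors reachable through one-hop neighbor `n`."""
    uncovered = set()
    for reachable in two_hop_via.values():
        uncovered |= reachable
    mprs = set()
    while uncovered:
        # Pick the neighbor covering the most still-uncovered two-hop nodes.
        best = max(one_hop,
                   key=lambda n: len(two_hop_via.get(n, set()) & uncovered))
        gained = two_hop_via.get(best, set()) & uncovered
        if not gained:
            break  # remaining two-hop nodes unreachable via any neighbor
        mprs.add(best)
        uncovered -= gained
    return mprs

# Neighbors A, B, C; A reaches {x, y}, B reaches {y, z}, C reaches {z}.
mprs = select_mprs(["A", "B", "C"],
                   {"A": {"x", "y"}, "B": {"y", "z"}, "C": {"z"}})
```

Only the selected MPRs ({A, B} here) retransmit link-state updates, which is how OLSR cuts flooding overhead in dense networks: C never needs to forward anything.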
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> a: AD-HOC ON-DEMAND DISTANCE VECTOR (AODV) <s> Conventional ad hoc routing protocols face challenges in airborne network due to aircraft movement, which often results in intermittent links and can cause dramatic topology changes. In this paper, we propose a cluster-based reactive routing protocol to alleviate these problems. Our solution takes advantage of mesh routers installed in unmanned aerial vehicles (UAVs) or aircraft capable of hovering, when such airborne assets are available. As those mesh points usually have relatively stable connections among themselves, they play the role of cluster heads, forming a hierarchical routing structure. A simple self-organizing rule is introduced in cluster management to limit the cluster control overhead and route discovery flooding. In addition, a disruption tolerant mechanism (DTM) can be deployed in the routing protocol to increase resilience to temporary link or node failure. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> a: AD-HOC ON-DEMAND DISTANCE VECTOR (AODV) <s> Node connectivity is examined in the article as one of the most important characteristics of any information and telecommunication network quality. The article is devoted to determining the FANET parameters PDR (Packet Delivery Ratio) more than 90% at the specified territory. All results were obtained in the course of the experimental study. Simulation modeling was as a research method and it was based on ns-2 network simulator. Characteristics of the network developed according to the simulation model using the AODV routing protocol have been specified. The connectivity dependence on such network parameters as the number of nodes and the transmission range has been examined. Analysis results of the routes lengths depending on the parameters are presented. <s> BIB002
AODV is a hop-by-hop reactive routing protocol, as shown in Fig. 15 . It determines the route from source to destination only when the source initiates a transmission, and maintains the route as long as the source needs it. To discover the destination, the source node broadcasts a route request (RREQ) packet BIB001 . Intermediate nodes not only forward the RREQ but also update themselves with the source information. A route reply (RREP) packet contains the number of hops required to reach the destination. A route error (RERR) message is generated when a route becomes invalid, informing the source node about the link failure so that the source can restart route discovery. AODV adapts to dynamic link conditions with low memory overhead and low network utilization. Because it must search for a route to each new destination on demand, it suffers from latency. In BIB002 , the authors used AODV routing in FANETs to examine network connectivity. The simulation results show that the AODV protocol adapts quickly to changing network connections with low overhead, and it achieves good network connectivity with a high packet delivery ratio.
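The RREQ/RREP mechanics can be sketched as a breadth-first flood in which each node remembers the neighbor it first heard the RREQ from (its reverse route), and the RREP then retraces those reverse pointers. This is an illustrative simplification of AODV that omits sequence numbers, route lifetimes, and timeouts.

```python
from collections import deque

def aodv_route_discovery(adjacency, source, destination):
    """Sketch of AODV discovery: an RREQ floods outward breadth-first;
    each node records which neighbor it first received the RREQ from
    (its reverse route). The RREP then follows those reverse pointers
    back to the source, yielding the discovered path."""
    reverse = {source: None}
    frontier = deque([source])
    while frontier:
        node = frontier.popleft()
        if node == destination:
            break
        for nbr in adjacency[node]:
            if nbr not in reverse:        # first copy of the RREQ wins
                reverse[nbr] = node
                frontier.append(nbr)
    if destination not in reverse:
        return None                       # no route: source learns via RERR
    path, node = [], destination
    while node is not None:
        path.append(node)
        node = reverse[node]
    return path[::-1]                     # hop count = len(path) - 1
```

Because the first RREQ copy to arrive wins, the discovered path is a fewest-hop path under uniform link costs, matching the hop count carried in the RREP.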
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> b: CONNECTED-BASED TRAFFIC DENSITY AWARE ROUTING PROTOCOL (CRUV) <s> Routing in vehicular Ad hoc networks is a challenging task due to the high mobility of nodes and the network fragmentations. The challenge is most important in urban environment where many constraints are added like multipath and the presence of obstacles. In this way, many protocols are proposed. In this paper, we introduce a new position-based routing scheme called IRTIV, conceived exclusively for inter-vehicle communication in a city environment, the proposed protocol aims to find the shortest connected path to the destination by taking into account the real time traffic variation which is estimated by a completely distributed manner based on the periodic exchange of Hello messages. Simulation results show that the proposed protocol increases the packet delivery ratio and reduces the end to end delay. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> b: CONNECTED-BASED TRAFFIC DENSITY AWARE ROUTING PROTOCOL (CRUV) <s> The advent of commercial drones or unmanned areal vehicles (UAV) is expected to facilitate the deployment of a plethora of UAV-based applications. As a result, we expect UAVs to cooperate in ad hoc fashion and joint existing vehicular ad hoc networks. In this paper, we propose a novel routing scheme for Vehicular Ad hoc Networks (VANets) by using unmanned aerial vehicles (UAVs) through cooperative and collaborative communication. Our approach is based on information exchange between vehicles and UAVs to help vehicles in the ground find the best multi-hop path by selecting the most appropriate next intersection to deliver the data packets successfully to their destinations. We use the real time traffic variation which is estimated with a completely distributed manner based on the periodic exchange of Hello messages between all vehicles and UAVs in the system. 
In this work, we aim at integrating UAV-to-Vehicle (U2V) and Vehicle-to-UAV (V2U) communications to make routing in the presence of UAVs more efficient and more reliable, and to ensure packet delivery with a minimum of packet losses. This protocol is beneficial to develop more intelligent connected nodes in the future. <s> BIB002 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> b: CONNECTED-BASED TRAFFIC DENSITY AWARE ROUTING PROTOCOL (CRUV) <s> Routing in Vehicular Ad hoc Networks (VANETs) is an important factor to ensure a reliable and efficient delivery of data packets. In urban environments, routing protocols must efficiently handle the constantly changing network topology and frequent disconnections due to the high mobility and direction changes of vehicles. The challenge is greater when there are traffic lights fixed along intersections which affect directly the mobility and therefore can greatly impact routing in urban areas. In our previous work [1] we have proposed IRTIV (Intelligent Routing protocol using real time Traffic Information in urban Vehicular environment) that takes into account the real time traffic variation without any use of pre-installed infrastructures or additional messages. However, IRTIV does not take into consideration the traffic lights impact. In this paper, we propose ETAR (Efficient Traffic Light Aware Routing Protocol for Vehicular Networks). This protocol's objective is to find the most stable path for delivering data packets based on traffic lights and traffic density of vehicles using the periodical exchange of Hello messages. We present simulation-based performance results, which show that the proposed protocol increases the packet delivery ratio and reduces the end-to-end delay. <s> BIB003
CRUV, proposed by Oubbati et al. BIB002 , performed better than BIB001 and BIB003 . Vehicles periodically exchange HELLO packets, which are used to find the most connected segments among the neighboring segments. For efficient routing decisions, UAVs exchange this information with the other nodes. If there is no connected segment, the source vehicle selects a UAV to which to deliver the data. Fig. 18 shows the source-to-destination vehicular data packet-sending scenario. The major advantage of CRUV is that when the current vehicle cannot find a connected segment, UAVs assist it by finding one.
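The segment-selection step can be illustrated as below. This is a sketch under assumed inputs, not CRUV's actual metric: `segment_density` stands for per-segment vehicle counts learned from the periodic HELLO exchanges, and `threshold` is a hypothetical connectivity cutoff below which the packet is handed to a UAV instead.

```python
def most_connected_segment(segment_density, threshold):
    """Pick the neighboring road segment with the highest vehicle density
    (as estimated from HELLO messages); return None if no segment meets
    the connectivity threshold, i.e. the UAV-assisted delivery case."""
    candidates = {seg: d for seg, d in segment_density.items() if d >= threshold}
    if not candidates:
        return None                       # no connected segment -> use a UAV
    return max(candidates, key=candidates.get)

# Illustrative densities from a HELLO round.
densities = {"seg1": 2, "seg2": 7, "seg3": 4}
print(most_connected_segment(densities, 3))   # -> seg2
```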
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> 2) SINGLE-PATH DELAY-TOLERANT NETWORK ROUTING (DTN) <s> With the increasing importance of networked systems for telemetry, there is a need for efficient routing algorithms in aeronautical environments. Unlike traditional mobile networks, the highly dynamic nature of airborne networks results in extremely short-lived paths, especially for multi-hop scenarios thereby necessitating domain-specific protocols. In this paper, we present the detailed design and evaluation of AeroRP, a cross-layered routing protocol designed specifically for airborne telemetry applications. AeroRP exploits the broadcast nature of the wireless medium along with the physical node location and trajectory to improve the data delivery in Mach-speed mobile scenarios. We present a multi-modal protocol that addresses various operational scenarios of test and telemetry networks. Preliminary simulation results show that AeroRP significantly outperforms traditional MANET routing protocols while limiting the overhead. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> 2) SINGLE-PATH DELAY-TOLERANT NETWORK ROUTING (DTN) <s> Aircraft ad hoc networks are special network models of mobile ad hoc networks where nodes can move anywhere and at anytime, and can construct networks in a self-organizing way. However, these networks show different features from typical mobile ad hoc networks, which represent the mobility pattern of vehicle or people. Thus, it is required to develop a new routing protocol to support mobility of aircraft in several aspects. In this paper, we propose a new geographic routing protocol that can cope with dynamic topology changes adaptively. The revised protocol makes use of mobility information, which is updated frequently by the base station on the ground. 
Another contribution is to develop a forwarding mechanism, which makes decisions for the next hop by using three-dimensional geographic information. Finally, simulation model and analysis are provided to validate the high packet delivery ratio with help from the information on infrastructure networks and intelligent forwarding scheme. <s> BIB002 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> 2) SINGLE-PATH DELAY-TOLERANT NETWORK ROUTING (DTN) <s> Combining mobile platforms such as manned or unmanned vehicles and peer-assisted wireless communication is an enabler for a vast number of applications. A key enabler for the applications is the routing protocol that directs the packets in the network. Routing packets in fully connected mobile ad hoc networks (MANETs) has been studied to a great extent, but the assumption on full connectivity is generally not valid in a real system. This case means that a practical routing protocol must handle intermittent connectivity and the absence of end-to-end connections. In this paper, we propose a geographical routing algorithm called location-aware routing for delay-tolerant networks (LAROD), enhanced with a location service, location dissemination service (LoDiS), which together are shown to suit an intermittently connected MANET (IC-MANET). Because location dissemination takes time in IC-MANETs, LAROD is designed to route packets with only partial knowledge of geographic position. To achieve low overhead, LAROD uses a beaconless strategy combined with a position-based resolution of bids when forwarding packets. LoDiS maintains a local database of node locations, which is updated using broadcast gossip combined with routing overhearing. The algorithms are evaluated under a realistic application, i.e., unmanned aerial vehicles deployed in a reconnaissance scenario, using the low-level packet simulator ns-2. 
The novelty of this paper is the illustration of sound design choices in a realistic application, with holistic choices in routing, location management, and the mobility model. This holistic approach justifies that the choice of maintaining a local database of node locations is both essential and feasible. The LAROD-LoDiS scheme is compared with a leading delay-tolerant routing algorithm (spray and wait) and is shown to have a competitive edge, both in terms of delivery ratio and overhead. For spray and wait, this case involved a new packet-level implementation in ns-2 as opposed to the original connection-level custom simulator. <s> BIB003
Owing to the high degree of node mobility, some nodes may become disconnected. DTN protocols handle technical issues such as disconnections in the communication network. When connectivity is lost, these protocols use the store-carry-and-forward technique: a node stores data packets until it encounters other nodes. This method decreases overhead because it does not use any additional control packets. The most widely used protocols are location-aware routing for delay-tolerant networks (LAROD) BIB003 ; the aeronautical routing protocol (AeroRP) BIB001 , proposed by Jabbar et al., which is a geographical delay-tolerant routing protocol; and geographic routing for aircraft ad hoc networks (GRAA) BIB002 , proposed by Hyon et al., which is a geographic routing protocol based on GPS.
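The store-carry-and-forward behavior can be captured in a few lines. This is an illustrative sketch (class and method names are assumptions, not from any cited protocol): packets are buffered while no neighbor is in range, then flushed on the next contact.

```python
class ScfNode:
    """Minimal store-carry-and-forward node: buffer packets while
    disconnected, forward the whole buffer on the next contact."""

    def __init__(self):
        self.buffer = []

    def send(self, packet, neighbor_in_range):
        """Return the (neighbor, packet) transmissions made this step."""
        if neighbor_in_range is None:
            self.buffer.append(packet)      # store and carry
            return []
        pending, self.buffer = self.buffer + [packet], []
        return [(neighbor_in_range, p) for p in pending]   # forward on contact

node = ScfNode()
node.send("p1", None)            # disconnected: p1 is stored
print(node.send("p2", "uav7"))   # contact: both packets are forwarded
```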
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> 4) SINGLE-PATH -GREEDY-BASED NON-DELAY TOLERANT ROUTING a: GEOGRAPHIC LOAD SHARE ROUTING (GLSR) <s> The Airborne Internet is a vision of a large-scale multihop wireless mesh network consisting of commercial passenger aircraft connected via long-range highly directional air-to-air radio links. We propose a geographic load sharing strategy to fully exploit the total air-to-ground capacity available at any given time. When forwarding packets for a given destination, a node considers not one but a set of next-hop candidates and spreads traffic among them based on queue dynamics. In addition, load balancing is performed among Internet Gateways by using a congestion-aware handover strategy. Our simulations using realistic North Atlantic air traffic demonstrate the ability of such a load sharing mechanism to approach the maximum theoretical throughput in the network. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> 4) SINGLE-PATH -GREEDY-BASED NON-DELAY TOLERANT ROUTING a: GEOGRAPHIC LOAD SHARE ROUTING (GLSR) <s> Unmanned Aerial Vehicles (UAVs) play more and more important roles in modern warfare. However, the data routing for communication among UAVs faces several challenges, such as packet loss or routing path failure etc. The main problem of UAVs data routing is caused by the high mobility of UAVs. In this paper, an effective geographic mobility prediction routing protocol is proposed to improve the performance of routing among UAVs. First, a Gaussian distribution of UAVs movement probability density function is deduced to reduce the impact of high mobility. Then, two-hop perimeter forwarding is proposed to reduce the impact of routing void. The experiment results show that the proposed approach can provide effective and reliable data routing with acceptable communication overhead in the highly dynamic environment of Ad Hoc UAV Network. <s> BIB002
Geographic load share routing (GLSR) BIB001 , proposed by Medina et al., is another geographic routing protocol for UAVs, which can use multiple paths between the source and the destination simultaneously. In addition, mobility prediction-based geographic routing (MPGR) BIB002 , proposed by Lin et al. for inter-UAV networks, uses a Gaussian distribution function to predict the mobility of UAVs.
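A prediction-weighted next-hop choice in the spirit of MPGR can be sketched as below. The scoring formula is an assumption for illustration only (MPGR's actual Gaussian movement model is more elaborate): each neighbor's position is extrapolated from its velocity, and its progress toward the destination is discounted by a Gaussian confidence term that decays with the prediction horizon.

```python
import math

def neighbor_score(pos, vel, dest, dt, sigma):
    """Illustrative MPGR-style score: predicted progress toward `dest`
    weighted by a Gaussian confidence in the `dt`-ahead prediction."""
    px, py = pos[0] + vel[0] * dt, pos[1] + vel[1] * dt   # predicted position
    dist = math.hypot(dest[0] - px, dest[1] - py)
    confidence = math.exp(-((sigma * dt) ** 2) / 2)       # decays with horizon
    return confidence / (1.0 + dist)                      # closer is better

def pick_next_hop(neighbors, dest, dt=1.0, sigma=0.5):
    """Choose the neighbor maximizing the predicted-progress score."""
    return max(neighbors,
               key=lambda n: neighbor_score(n["pos"], n["vel"], dest, dt, sigma))

uavs = [{"id": "u1", "pos": (0.0, 0.0), "vel": (1.0, 0.0)},
        {"id": "u2", "pos": (5.0, 0.0), "vel": (1.0, 0.0)}]
print(pick_next_hop(uavs, (10.0, 0.0))["id"])   # -> u2
```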
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> 5) MULTI-PATH NON-DELAY-TOLERANT NETWORK ROUTING <s> Sustaining long link durations in highly mobile ad hoc networks presents a great challenge, mostly untreated in recent literature. In this paper we introduce a new routing algorithm based on the relative velocity of mobile nodes, which also incorporates Quality of Service (QoS), termed QoS Multipath Doppler Routing (QoS-MUDOR). The primary aim of QoS-MUDOR is to maintain long link durations, whilst meeting QoS constraints. The routing protocol proposed is based on data retrieval from nodes, where nodes act as content providers. This simulates scenarios such as downloading a file, a web page, or any form of data from other nodes which can provide it. We will show how utilizing the relative velocity of nodes using the Doppler shift subjected to packets assists in selecting stable paths, whilst maintaining the QoS requirements in highly mobile pseudo-linear systems such as an aeronautical ad hoc network. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> 5) MULTI-PATH NON-DELAY-TOLERANT NETWORK ROUTING <s> Abstract Reactive-Greedy-Reactive (RGR) has been proposed as a promising routing protocol in highly mobile density-variable Unmanned Aeronautical Ad-hoc Networks (UAANETs). In RGR, location information of Unmanned Aerial Vehicles (UAVs) as well as reactive end-to-end paths are employed in the routing process. It had already been shown that RGR outperforms existing routing protocols in terms of packet delivery ratio. In this paper, the delay performance of RGR is evaluated and compared against Ad-hoc On-demand Distance Vector (AODV) and Greedy Geographic Forwarding (GGF). We consider extensive simulation scenarios to cover both searching and tracking applications of UAANETs. The results illustrate that when the number of UAVs is high enough in a searching mission to form a connected UAANET, RGR performs well.
In sparsely connected searching scenarios or dense tracking scenarios, RGR may also slightly decrease delay compared to traditional reactive routing protocols for similar PDR. <s> BIB002
Aeronautical mobile ad hoc network routing (ARPAM), proposed by Iordanakis et al., is based on geographical position. Numerous non-DTN routing protocols have been proposed, such as reactive greedy reactive (RGR) BIB002 and multi-path Doppler routing (MUDOR) BIB001 , proposed by Sakhaee et al., which uses the most stable path with the longest lifetime.
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> a: REACTIVE GREEDY REACTIVE PROTOCOL (RGR) <s> As a result of high mobility of Unmanned Aerial Vehicles (UAVs), designing a good routing protocol is challenging for Unmanned Aeronautical Ad-hoc Networks (UAANETs). Geographic-based routing mechanisms are seen to be an interesting option for routing in UAANETs due to the fact that location information of UAVs is readily available. In this paper, a combined routing protocol, called the Reactive-Greedy-Reactive (RGR), is presented for UAANET applications, which combines the mechanisms of the Greedy Geographic Forwarding (GGF) and reactive routing. The proposed RGR employs location information of UAVs as well as reactive end-to-end paths in the routing process. Simulation results show that RGR outperforms existing protocols such as Ad-hoc On-demand Distance Vector (AODV) in search UAANET missions in terms of delay and packet delivery ratio, yet its overhead is similar to traditional mechanisms. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> a: REACTIVE GREEDY REACTIVE PROTOCOL (RGR) <s> Unmanned Aeronautical Ad Hoc Networks (UAANETs) are a type of Mobile Ad Hoc Networks (MANETs) which are infrastructureless and self-organizing networks. The specificity of UAANETs is that they are formed by small and medium sized Unmanned Aerial Vehicles (UAVs) also known as drones. In UAANETs as well as in MANETs, geographic routing is widely used. Geographic routing relies on Greedy Forwarding (GF), also called Greedy Geographic Forwarding (GGF). GGF fails when a packet arrives at a node that has no neighbor closer to the destination than it is. The node in this situation is referred to as a void node. In this paper, we propose a strategy that salvages packets in void node situations. We thereafter append this strategy to a protocol that features GGF. 
Simulations in OPNET show an increase in packet delivery ratio of about 2% at virtually no additional cost. <s> BIB002
RGR is widely used in UAV networks. If there is no route to the target destination, the source node initiates an on-demand path to continue communication with the destination node. This routing protocol was proposed by Shirani et al., and it is based on a combination of topology-based routing and greedy geographic delivery BIB001 . The topology-based part creates on-demand routing paths using AODV, and the delivery part is based on greedy geographic forwarding (GGF) BIB002 .
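RGR's combined decision rule can be sketched as follows. This is an illustrative simplification (data structures and names are assumptions): use the reactive (AODV-style) next hop when a route exists; otherwise fall back to GGF, forwarding to the neighbor geographically closest to the destination, and fail at a void node where no neighbor makes progress.

```python
def rgr_forward(routing_table, neighbors, my_pos, dest_id, dest_pos):
    """Return the next-hop id: the reactive route if known, else the
    GGF choice, else None (void-node situation where GGF fails)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    if dest_id in routing_table:
        return routing_table[dest_id]          # reactive (AODV-style) next hop
    closer = [n for n in neighbors
              if dist(n["pos"], dest_pos) < dist(my_pos, dest_pos)]
    if not closer:
        return None                            # void node: no greedy progress
    return min(closer, key=lambda n: dist(n["pos"], dest_pos))["id"]   # GGF

nbrs = [{"id": "a", "pos": (1, 0)}, {"id": "b", "pos": (4, 0)}]
print(rgr_forward({}, nbrs, (0, 0), "d", (5, 0)))   # -> b (greedy fallback)
```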
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> 7) MULTI-PATH DTN ROUTING a: LOCATION-AIDED DELAY TOLERANT ROUTING (LADTR) <s> Low-altitude unmanned aerial vehicles (UAVs) combined with WLAN mesh networks (WMNs) have facilitated the emergence of airborne network-assisted applications. In disaster relief, they are key solutions for 1) on-demand ubiquitous network access and 2) efficient exploration of sized areas. Nevertheless, these solutions still face major security challenges as WMNs are prone to routing attacks. Consequently, the network can be sabotaged, and the attacker might manipulate payload data or even hijack the UAVs. Contemporary security standards, such as the IEEE 802.11i and the security mechanisms of the IEEE 802.11s mesh standard, are vulnerable to routing attacks as we experimentally showed in previous works. Therefore, a secure routing protocol is indispensable for making feasible the deployment of UAV-WMN. As far as we know, none of the existing research approaches have gained acceptance in practice due to their high overhead or strong assumptions. Here, we present the position-aware, secure, and efficient mesh routing approach (PASER). Our proposal prevents more attacks than the IEEE 802.11s/i security mechanisms and the well-known, secure routing protocol ARAN, without making restrictive assumptions. In realistic UAV-WMN scenarios, PASER achieves similar performance results as the well-established, nonsecure routing protocol HWMP combined with the IEEE 802.11s security mechanisms. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> 7) MULTI-PATH DTN ROUTING a: LOCATION-AIDED DELAY TOLERANT ROUTING (LADTR) <s> Wireless communication is essential for search and rescue operations in the aftermath of natural disasters. 
In post-disaster scenarios, unmanned aerial vehicles (UAVs) can be used to capture image and video data from the disaster area and transfer the data to a ground station, owing to their rapid mobility. However, packet forwarding is a challenge because of unstable links and intermittent connectivity in highly dynamic UAV networks. In this paper, we propose a location-aided delay tolerant routing (LADTR) protocol for UAV networks for use in post-disaster operations, which exploits location-aided forwarding combined with a store–carry–forward (SCF) technique. Ferrying UAVs are introduced to enable an efficient SCF, and this is the first attempt at introducing and using ferrying UAVs for routing in UAV networks, to the best of our knowledge. Ferrying UAVs improve the availability of connection paths between searching UAVs and the ground station, thus reducing end-to-end delays and increasing the packet delivery ratio. Future UAV locations are estimated based on the location and speed of UAVs equipped with a global positioning system. The forwarding UAV node predicts the position of the destination UAV node and then decides where to forward. The proposed LADTR ensures that the contact rate between UAV nodes remains high, which enables a high packet delivery ratio, and ensures single-copy data forwarding to avoid replication of each message. Our performance study shows that the proposed LADTR outperforms the four typical routing protocols reported in the literature in terms of packet delivery ratio, average delay, and routing overhead. <s> BIB002
In BIB002 , the authors presented a location-aided delay-tolerant routing protocol (LADTR) for UAV networks for post-disaster operation. In LADTR, the store-carry-forward (SCF) technique is exploited for DTN forwarding. Ferrying UAVs with SCF are introduced in LADTR, which increases the performance of the routing protocol. Ferrying UAVs also increase the number of connection paths between the searching UAVs and the ground station, as illustrated in Fig. 23 . It is shown in BIB002 that the packet delivery ratio, average end-to-end delay, and overhead of LADTR are significantly improved in comparison to conventional approaches.
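LADTR's location-aided forwarding decision rests on estimating future UAV positions from GPS position and speed. A dead-reckoning sketch of that idea is below; function names, the range check, and the fall-back-to-carry rule are illustrative assumptions, not LADTR's exact algorithm.

```python
import math

def predict_position(pos, velocity, elapsed):
    """Extrapolate a GPS-equipped UAV's future location linearly."""
    return tuple(p + v * elapsed for p, v in zip(pos, velocity))

def forward_or_carry(my_pos, neighbors, dest_last_pos, dest_velocity,
                     elapsed, radius):
    """Forward to an in-range neighbor nearer the destination's *predicted*
    position; otherwise return None, i.e. store-carry-forward the packet."""
    target = predict_position(dest_last_pos, dest_velocity, elapsed)
    in_range = [n for n in neighbors
                if math.dist(my_pos, n["pos"]) <= radius]
    closer = [n for n in in_range
              if math.dist(n["pos"], target) < math.dist(my_pos, target)]
    if not closer:
        return None                     # carry until a better contact appears
    return min(closer, key=lambda n: math.dist(n["pos"], target))["id"]

print(predict_position((0.0, 0.0), (2.0, 1.0), 3.0))   # -> (6.0, 3.0)
```

(`math.dist` requires Python 3.8 or later.)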
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> E. STOCHASTIC ROUTING PROTOCOLS <s> An ad hoc network is formed by a group of mobile hosts upon a wireless network interface. Previous research in communication in ad hoc networks has concentrated on routing algorithms which are designed for fully connected networks. The traditional approach to communication in a disconnected ad hoc network is to let the mobile computer wait for network reconnection passively. This method may lead to unacceptable transmission delays. We propose an approach that guarantees message transmission in minimal time. In this approach, mobile hosts actively modify their trajectories to transmit messages. We develop algorithms that minimize the trajectory modifications under two different assumptions: (a) the movements of all the nodes in the system are known and (b) the movements of the hosts in the system are not known. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> E. STOCHASTIC ROUTING PROTOCOLS <s> We formulate the delay-tolerant networking routing problem, where messages are to be moved end-to-end across a connectivity graph that is time-varying but whose dynamics may be known in advance. The problem has the added constraints of finite buffers at each node and the general property that no contemporaneous end-to-end path may ever exist. This situation limits the applicability of traditional routing approaches that tend to treat outages as failures and seek to find an existing end-to-end path. We propose a framework for evaluating routing algorithms in such environments. We then develop several algorithms and use simulations to compare their performance with respect to the amount of knowledge they require about network topology. We find that, as expected, the algorithms using the least knowledge tend to perform poorly. 
We also find that with limited additional knowledge, far less than complete global knowledge, efficient algorithms can be constructed for routing in such environments. To the best of our knowledge this is the first such investigation of routing issues in DTNs. <s> BIB002
Stochastic routing protocols are designed for networks whose behavior is unknown and random . In such conditions, the packet delivery decision becomes important. One solution is to forward the data to the next node in the hope that it is within communication range. Historical data, mobility patterns, and other information are all considered for routing. These protocols target time-varying network topologies, and their objective is to minimize the end-to-end delay by maximizing the probability of delivery at the destination. There are a few categories of stochastic routing protocols, such as epidemic routing-based approaches BIB002 , estimation-based routing , node movement and control-based routing BIB001 , and coding-based routing .
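The epidemic routing-based category can be illustrated by its core anti-entropy step: on contact, two nodes compare summary vectors (the sets of message ids they hold) and each pulls the messages it is missing, so copies spread through the network like an infection. The sketch below is a minimal model of that exchange.

```python
def epidemic_exchange(store_a, store_b):
    """One epidemic-routing contact: exchange summary vectors and copy
    each node's missing messages from the other. Both stores end up
    holding the union; the sorted id list is returned for inspection."""
    only_a = {m: store_a[m] for m in store_a.keys() - store_b.keys()}
    only_b = {m: store_b[m] for m in store_b.keys() - store_a.keys()}
    store_a.update(only_b)     # A pulls what it was missing
    store_b.update(only_a)     # B pulls what it was missing
    return sorted(store_a)

a = {"m1": "payload-1"}
b = {"m2": "payload-2"}
print(epidemic_exchange(a, b))   # -> ['m1', 'm2']
```

The flooding that results is what gives epidemic routing high delivery probability at the cost of buffer and bandwidth overhead.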
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> VI. COMPARISON OF ROUTING PROTOCOLS <s> The majority of work in cognitive radio networks have focused on single-hop networks with mainly challenges at the physical and MAC layers. Recently, multi-hop secondary networks have gained attention as a promising design to leverage the full potential of cognitive radio networks. One of the main features of routing protocols in multi-hop networks is the routing metric used to select the best route for forwarding packets. In this paper, we survey the state-of-the-art routing metrics for cognitive radio networks. We start by listing the challenges that have to be addressed in designing a good routing metric for cognitive radio networks. We then provide a taxonomy of the different metrics and a survey of the way they have been used in different routing protocols. Then we present a case study to compare different classes of metrics. After that, we discuss how to combine individual routing metrics to obtain a global one. We end the paper with a discussion of the open issues in the design of future metrics for cognitive radio networks. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> VI. COMPARISON OF ROUTING PROTOCOLS <s> Micro aerial vehicles (MAVs) provide data such as images and videos from an aerial perspective, with data typically transferred to the ground. To establish connectivity in larger areas, a fleet of MAVs may set up an ad-hoc wireless network. Packet forwarding in aerial networks is challenged by unstable link quality and intermittent connectivity caused by MAV movement. We show that signal obstruction by the MAV frame can be alleviated by adapting the MAV platform, even for low-priced MAVs, and the aerial link can be properly characterized by its geographical distance. 
Based on this link characterization and making use of GPS and inertial sensors on-board of MAVs, we design and implement a motion-driven packet forwarding algorithm. The algorithm unites location-aware end-to-end routing and delay-tolerant forwarding, extended by two predictive heuristics. Given the current location, speed, and orientation of the MAVs, future locations are estimated and used to refine packet forwarding decisions. We study the forwarding algorithm in a field measurement campaign with quadcopters connected over Wi-Fi IEEE 802.11n, complemented by simulation. Our analysis confirms that the proposed algorithm masters intermittent connectivity well, but also discloses inefficiencies of location-aware forwarding. By anticipating motion, such inefficiencies can be counteracted and the forwarding performance can be improved. <s> BIB002 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> VI. COMPARISON OF ROUTING PROTOCOLS <s> Communication in remote locations, specially in high-latitude regions, such as the Arctic, is challenged by the lack of infrastructures and by the limited availability of resources. However, these regions have high scientific importance and require efficient ways of transferring research data from different missions and deployed equipment. For this purpose, unmanned aerial vehicles (UAVs) can be used as data mules, capable of flying over large distances and retrieving data from remote locations. Despite being a well-known concept, its performance has not been thoroughly evaluated in realistic settings. In this paper, such a solution is evaluated through a field-experiment, exploiting the obtained results to define and implement an emulator for intermittent links. This emulator was designed as a mission planning tool, where we further analyze the impact of different flight trajectories when retrieving data. 
Additionally, we study the overall performance of 4 well-known file-transferring protocols suitable for a UAV being used as a data mule. Our analysis shows that trajectories at higher altitudes, despite increasing distance between nodes, improves communication performance. Moreover, the obtained results demonstrate that DTN2, using the bundle protocol, outperforms FTP, Rsync, and SCP, and that all these protocols are affected by the size of the files being transferred. These results suggest that, in order for the scientific community to practically use UAVs as data mules, further studies are required, namely on how different UAV trajectories can be combined with efficient file-transferring network protocols and well organized data structures. <s> BIB003 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> VI. COMPARISON OF ROUTING PROTOCOLS <s> In recent years, unmanned aerial vehicle(UAV) has been developing rapidly which is also widely concerned in the military field, but the research on the communication services for the battlefield is seriously lack of development. Multi-UAV cooperative applications have better performance than single UAV, but the cooperative communication problems between UAVs need to be solved urgently. Therefore, the research of UAV Ad Hoc networks becomes very important. In this paper, we carry out a series of experimental based on the experiment of multi-UAV Ad Hoc networks, and set up a wireless multi-hop Ad Hoc network to test the performance of multi-hop network, including network responding time, packet loss rate and network bandwidth capacity. At last we analyze the network performance and make some prospects for the future UAV Ad Hoc network experiment. <s> BIB004 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> VI. 
COMPARISON OF ROUTING PROTOCOLS <s> With the recent advances in unmanned aerial vehicles (UAVs), the development of energy-efficient networking technology for mission-oriented multiple cooperative UAVs has become crucial. Routing in flying ad-hoc networks (FANETs) with UAVs is a challenging issue because of the high speed and sudden changes in direction of UAVs. Traditional routing protocols in FANETs periodically send hello messages for the establishment and maintenance of the routes. However, sending hello messages periodically after a fixed interval increases bandwidth wastage when the hello interval is excessively short or causes long delays in neighbour discovery when the hello interval is overly long. Moreover, several disconnected UAV groups have been observed in which the group members are connected among themselves but detached from the main network. By exchanging excessive hello messages inside the group, the UAVs maintain an unnecessary neighbourhood, causing wastage of energy. However, FANETs have certain advantages, such as knowledge about mission-related information. To solve the problem of unnecessary energy drain, we propose a novel adaptive hello interval scheme—energy efficient hello (EE-Hello)—based on available mission-related information, such as the volume of the allowed airspace, number of UAVs, UAV transmission range, and UAV speed. We present a method to decide the distance that a UAV needs to travel before sending a hello message. We also specify a technique to determine the number of UAVs necessary to achieve specific network requirements, such as packet delivery ratio or throughput, with the expenditure of minimum energy. We show that the proposed EE-Hello can save about 25% of the energy currently used, by suppressing unnecessary hello messages without degrading the overall network throughput. <s> BIB005 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> VI. 
COMPARISON OF ROUTING PROTOCOLS <s> The collaboration of multiple unmanned aerial vehicles (UAVs) has stimulated the emergence of a novel wireless network paradigm named UAV network. UAV network, compared with uncoordinated UAV systems could provide wider coverage, better monitoring, and understanding of the interested area, and smarter decision-making. However, realizing the full potential of UAV network in dynamic environments poses great challenges in topology/flocking control, energy conservation, and quality of service guarantee. In this backdrop, this paper proposes a swarm intelligence-inspired autonomous flocking control scheme for UAV networks. First, based on the concept of intelligent emergence of swarm agents, a swarm intelligence-inspired multi-layer flocking control scheme is built for the flocking control problem. Second, an integrated sensing and communication method is put forward to regulate how a UAV can calculate its distances to its neighbors and its deflection angle. Finally, a series of experiments are conducted on our simulator developed on OMNeT++ and the flocking prototype to evaluate the effectiveness of the proposed scheme. The simulation and experimental results have shown that the proposed scheme could realize efficient flocking control with low energy consumption and satisfied the quality of service. <s> BIB006
Routing protocols use several routing metrics as the basis for deciding on the route from the source to the destination. Thus, routing metrics play a vital role in the quality of a routing path BIB001 . A comparison of routing protocols shows that they all differ considerably from each other. UAV routing protocols are categorized via common parameters, such as packet delivery ratio (PDR), average end-to-end delay, average number of hops, overhead, latency, and throughput. In addition, other properties, such as complexity, topology size, memory size, fault tolerance, bandwidth utilization, and applications, need consideration. Some protocols support location detection via GPS, so that each node's location can be identified. UAV nodes can also communicate with ground base stations to transfer certain information. In the evolution of routing protocols, various simulation tools have been used. Simulation tools are capable of simulating scenarios based on the application and environment. Different metrics are used in the simulation of each routing protocol to analyze its behavior and compare it with other routing protocols using the same performance metrics. Based on the simulations, it is possible to analyze the strengths and limitations of the protocols. Table 4 summarizes our comparisons of the fundamental routing protocols in UAV networks. Reiterating the discussion in the previous section: static protocols store the routing information before flight, proactive protocols use routing tables to store the route log, reactive protocols use source-based routing, hybrid protocols combine proactive and reactive mechanisms, position-based protocols use GPS to find the geographical location, and hierarchical protocols are based on clustering. We also know that static and proactive routing protocols are suitable for small-area networks, whereas position-based and hierarchical protocols are more suitable for large-area networks.
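The common comparison parameters named above can be computed from raw simulation counters. The helper below is illustrative and not tied to any particular simulator; the counter names are assumptions.

```python
def routing_metrics(sent, delivered, delays, control_packets):
    """Compute the survey's comparison metrics from simulation counters:
    packet delivery ratio, average end-to-end delay, and normalized
    routing overhead (control packets per delivered data packet)."""
    pdr = delivered / sent if sent else 0.0
    avg_delay = sum(delays) / len(delays) if delays else 0.0
    overhead = control_packets / delivered if delivered else float("inf")
    return {"pdr": pdr, "avg_delay": avg_delay, "overhead": overhead}

# 100 packets sent, 90 delivered with the listed delays, 45 control packets.
print(routing_metrics(100, 90, [0.1, 0.3], 45))
```

Comparing protocols on the same counters, under the same mobility model, is what makes the figures in Tables 4-6 commensurable.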
In addition, hierarchical protocols use less bandwidth than all other protocols. Moreover, complexity is also an issue: static, proactive, and reactive protocols are less complex than position-based and hierarchical protocols. As shown in Tables 5 and 6 , UAV routing protocols are categorized according to their delivery approaches. Routing protocols use GPS to define the geographic location. Most topology-based routing protocols do not support GPS, whereas the location service is supported by all position-based protocols. Position-based protocols use geographic position information for packet-forwarding decisions, removing the need for a routing table; the location of the neighboring nodes is sufficient to forward packets. This minimizes overhead and makes the routing protocols more scalable. Simulation tools are used for performance tests of the protocols, as well as validation. Different simulators use different techniques that vary from one protocol to another. The mobility generator, and hence the mobility of UAV nodes, is different for every simulator. In addition, different routing metrics are used by different simulators, although most consider packet delivery ratio, control packets, broadcasting, disconnections of nodes, congestion, and overhead as key performance parameters. Tables 5 and 6 provide detailed comparisons of topology- and position-based routing protocols in UAV networks, respectively. As shown in Table 7 , a few field experiments of UAV routing protocols are presented. In BIB002 , the authors proposed a location-based delay-tolerant routing protocol called DTN-GEO and tested the routing algorithm in a field experiment. They used up to three quadcopters and one ground station, with all copters generating data destined for the ground station.
In BIB005 , the authors propose a novel adaptive hello interval scheme called energy efficient hello (EE-Hello) based on available mission-related information, such as the volume of the allowed airspace, number of UAVs, UAV transmission range, and UAV speed. In BIB003 , the authors used UAV-based data mules (UAV-DM); the performance was thoroughly evaluated in realistic settings. In BIB004 , the authors conducted a field experiment on multi-UAV ad hoc networks, setting up a wireless multi-hop ad hoc network (MUAV-AD HOC) to test multi-hop network performance. In BIB006 , the authors tested swarm intelligence-inspired autonomous flocking control (SIIAFC) in UAV networks, a distributed multi-layer flocking control scheme called SIMFC. SIMFC enables a follower node to autonomously follow the leader node and resolves the problem of collision avoidance among multiple F-nodes. In , the authors conducted a field test of multi-UAV routing (MUAV-R) for area coverage and remote sensing in minimum time.

Most of the routing protocols proposed so far are based on those for MANETs and VANETs. However, these are not inherently suitable as routing protocols for UAVs owing to the unique characteristics of UAVs, and they have therefore been unable to yield good performance in UAV networks. Hence, several key issues remain unsolved for existing routing protocols. Most of the existing routing protocols do not function well in UAV networks and cannot meet the security requirements. In this subsection, we discuss some open research issues and challenges of routing protocols for UAV networks. The main routing challenges for UAV networks are frequent link failures, packet losses, limited bandwidth, high routing overhead, triggered routing table updates, and low convergence rate in networks. In this section, the challenging research issue of the robustness and efficiency of routing in UAV networks is addressed. Five challenging issues are summarized here.
It is hoped that researchers in the field will be interested in UAVs as a promising future technology.
Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> D. EVALUATION TOOLS <s> In recent years, unmanned aerial vehicle(UAV) has been developing rapidly which is also widely concerned in the military field, but the research on the communication services for the battlefield is seriously lack of development. Multi-UAV cooperative applications have better performance than single UAV, but the cooperative communication problems between UAVs need to be solved urgently. Therefore, the research of UAV Ad Hoc networks becomes very important. In this paper, we carry out a series of experimental based on the experiment of multi-UAV Ad Hoc networks, and set up a wireless multi-hop Ad Hoc network to test the performance of multi-hop network, including network responding time, packet loss rate and network bandwidth capacity. At last we analyze the network performance and make some prospects for the future UAV Ad Hoc network experiment. <s> BIB001 </s> Routing Protocols for Unmanned Aerial Vehicle Networks: A Survey <s> D. EVALUATION TOOLS <s> With the recent advances in unmanned aerial vehicles (UAVs), the development of energy-efficient networking technology for mission-oriented multiple cooperative UAVs has become crucial. Routing in flying ad-hoc networks (FANETs) with UAVs is a challenging issue because of the high speed and sudden changes in direction of UAVs. Traditional routing protocols in FANETs periodically send hello messages for the establishment and maintenance of the routes. However, sending hello messages periodically after a fixed interval increases bandwidth wastage when the hello interval is excessively short or causes long delays in neighbour discovery when the hello interval is overly long. Moreover, several disconnected UAV groups have been observed in which the group members are connected among themselves but detached from the main network. 
By exchanging excessive hello messages inside the group, the UAVs maintain an unnecessary neighbourhood, causing wastage of energy. However, FANETs have certain advantages, such as knowledge about mission-related information. To solve the problem of unnecessary energy drain, we propose a novel adaptive hello interval scheme—energy efficient hello (EE-Hello)—based on available mission-related information, such as the volume of the allowed airspace, number of UAVs, UAV transmission range, and UAV speed. We present a method to decide the distance that a UAV needs to travel before sending a hello message. We also specify a technique to determine the number of UAVs necessary to achieve specific network requirements, such as packet delivery ratio or throughput, with the expenditure of minimum energy. We show that the proposed EE-Hello can save about 25% of the energy currently used, by suppressing unnecessary hello messages without degrading the overall network throughput. <s> BIB002
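The core idea in the EE-Hello abstract above, sending a hello only after the UAV has covered a meaningful distance rather than after a fixed time, can be sketched as follows. The formula and the travel-fraction parameter are illustrative assumptions, not the paper's actual derivation:

```python
def hello_interval(speed, tx_range, travel_fraction=0.25):
    """Time (s) for a UAV to traverse travel_fraction of its transmission
    range (m) at the given speed (m/s); a hello message is due only after
    that much displacement, so faster UAVs send hellos more often.
    """
    if speed <= 0:
        return float('inf')  # a hovering UAV's neighbourhood is stable
    return (travel_fraction * tx_range) / speed

# e.g. a UAV flying at 10 m/s with a 200 m radio range
interval = hello_interval(10.0, 200.0)  # 5.0 s between hellos
```

Under this sketch, a stationary UAV suppresses periodic hellos entirely, consistent with the abstract's claim that suppressing unnecessary hellos is what saves energy.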
A good number of simulation tools are used for routing protocol simulations of UAV networks and FANETs BIB002 - BIB001 . The majority of them do not produce realistic or reasonable results. OPNET, NS-2, NS-3, and OMNeT++ are the tools most used for performance measurement and evaluation of UAV routing protocols. These, however, do not support 3D communication and do not simulate any specified channels for communication between UAVs. Most simulators support only random mobility models, not predefined control-based mobility. According to review papers, most researchers use custom simulators that do not support code reuse. Therefore, a new simulation tool that supports more realistic mobility models and predefined mobility, yielding more reasonable and realistic outputs, is needed to design protocols for UAV networks.
The state of web-based research: A survey and call for inclusion in curricula <s> <s> The World-Wide Web presents survey researchers with an unprecedented tool for the collection of data. The costs in terms of both time and money for publishing a survey on the Web are low compared with costs associated with conventional surveying methods. The data entry stage is eliminated for the survey administrator, and software can ensure that the data acquired from participants is free from common entry errors. Importantly, Web surveys can interactively provide participants with customized feedback. These features come at a price—ensuring that appropriately written software manages the data collection process. Although the potential for missing data, unacceptable responses, duplicate submissions, and Web abuse exist, one can take measures when creating the survey software to minimize the frequency and negative consequences of such incidents. <s> BIB001 </s> The state of web-based research: A survey and call for inclusion in curricula <s> <s> The Internet provides a new method to obtain subjects. Arguments supporting the use of the Internet as an experimental medium include the ease of data collection, limited resources needed, and ability to gain a sample more representative of the population. However, it has not been demonstrated that the Internet provides the conditions necessary to yield valid data. This paper compares results from experiments conducted over the Internet and in a laboratory. Correlating the means from the Internet samples to the laboratory samples on important interactions yielded values near 1.0, indicating that the two data sets are driven by the same psychological variables. Further evidence of the validity of these Web studies comes from regression analyses. However, more studies of this sort are needed to further identify what types of experiments may be validly conducted over the Internet. <s> BIB002
Psychology conference in Chicago. Most of these papers were published in the following May issue of Behavior Research Methods, Instruments, & Computers, as the current journal was named until 2005. Some of the papers presented examined how to do research on the web (e.g., BIB001 ), and two papers reported the results of experiments conducted on the web in 1995 BIB002 . As such, web research has been conducted online for more than 20 years now, and it seems a good time to examine the growth and health of the use of the web as a research tool. (See also in this issue for another perspective.)
The state of web-based research: A survey and call for inclusion in curricula <s> A brief history <s> The Internet provides a new method to obtain subjects. Arguments supporting the use of the Internet as an experimental medium include the ease of data collection, limited resources needed, and ability to gain a sample more representative of the population. However, it has not been demonstrated that the Internet provides the conditions necessary to yield valid data. This paper compares results from experiments conducted over the Internet and in a laboratory. Correlating the means from the Internet samples to the laboratory samples on important interactions yielded values near 1.0, indicating that the two data sets are driven by the same psychological variables. Further evidence of the validity of these Web studies comes from regression analyses. However, more studies of this sort are needed to further identify what types of experiments may be validly conducted over the Internet. <s> BIB001 </s> The state of web-based research: A survey and call for inclusion in curricula <s> A brief history <s> Publisher Summary With each new technological advance comes a change in the methods and environments of doing psychological research. The Internet-based study adds a new tool for collecting subjects that goes well beyond using introductory psychology students, phone and mail surveys, and the exhausting effort of soliciting subjects from the community. By its very nature, the Web makes it easy to collect large samples. It allows more efficient examinations of person variables such as gender, ethnicity, country of origin, age, and language experience. For Web research, there are two primary ways to establish validity such as comparing results from a Web-based study to a laboratory based study, and examination of the research to see if the results follow theoretically predicted trends. 
However, there is a broader issue of validity that deals with the issue of the generalizability of the results. This conclusion is made because Web-based samples tend to be more diverse than most laboratory samples. With respect to issues of validity, Web-based research is even more complex than most methods. The Web allows basically any type of research design that is possible using computers as equipment and humans as subjects, including surveys, psychological tests, correlational designs, and experimental methods. In addition, the validity of Web-based research may vary for the different psychological variables studied. This chapter summarizes these results in an effort to determine whether the Internet and the World Wide Web are valid or invalid media for conducting research. <s> BIB002 </s> The state of web-based research: A survey and call for inclusion in curricula <s> A brief history <s> The World Wide Web (WWW) provides a new tool for experimental research. The Web experiment method differs in fundamental aspects from traditional laboratory and field ::: experiments; therefore it can be used to validate previous findings. Web experiments offer (1) easy access to a demographically and culturally diverse participant population, including participants from unique and previously inaccessible target populations; (2) bringing the ::: experiment to the participant instead of the opposite; (3) high statistical power by enabling access to large samples; (4) the direct assessment of motivational confounding; and (5) cost savings of lab space, person-hours, equipment, and administration. These and 13 other advantages of Web experiments are reviewed and contrasted with 7 disadvantages, such as (1) multiple submissions, (2) lack of experimental control, (3) self-selection, and (4) drop out. ::: Several techniques and other detailed solutions are described that avoid potential problems or ::: even turn them into useful features of Web experimentation. 
<s> BIB003 </s> The state of web-based research: A survey and call for inclusion in curricula <s> A brief history <s> I. BASIC COMPONENTS OF WEB EXPERIMENTS. 1. Introduction to Behavioral Research on the Internet. 2. Your First Web Page. 3. Putting the Hyper in Hypertext. 4. Additional Tricks in HTML. 5. Surveys and Experiments Using Forms. 6. Data Analysis with Excel and SPSS. 7. Images and Logical Thinking. 8. A Study of Decision Making. II. MAKING SURVEYS, TESTS, AND EXPERIMENTS. 9. Making Surveys with SurveyWiz. 10. Personality Testing. 11. Using FactorWiz for Factorial Designs. III. APPLICATIONS WITH DATA. 12. Analysis of Impression Formation. 13. Analysis of Social Balance. 14. Presenting Psychophysical Stimuli. 15. Psychological Scaling with "Ratios" and "Differences." 16. Bayesian Inference and Human Inference. IV. PROGRAMMING TECHNIQUES. 17. Introduction to JavaScript. 18. Interactive Web Pages with JavaScript. 19. JavaScript and Forms. V. METHODS AND METHODOLOGY. 20. Advanced Techniques for Experimentation on the Web. 21. Methodology, Ethics, and Publicizing Your Study. Appendix A: Creating and Using Scripts. References. Glossary of Web Terms. Index. <s> BIB004 </s> The state of web-based research: A survey and call for inclusion in curricula <s> A brief history <s> Contributors. Introduction to Psychological Experiments on the Internet. General Issues: M.H. Birnbaum, Decision Making in the Lab and on the Web. J.H. Krantz and R. Dalal, Validity of Web-Based Psychological Research. J. Musch and U.-D. Reips, A Brief History of Web Experimenting. U.-D. Reips, The Web Experiment Method: Advantages, Disadvantages, and Solutions. Individual Differences and Cross-Cultural Studies: T. Buchanan, Potential of the Internet for Personality Research. R.D. Bailey, W.E. Foote and B. Throckmorton, Human Sexual Behavior: A Comparison of College and Internet Surveys. D. Pagani and L. Lombardi, An Intercultural Examination of Facial Features Communicating Surprise. J.H. 
Mueller, D.M. Jacobsen and R. Schwarzer, What Are Computing Experiences Good For: A Case Study in On-Line Research. Computer Techniques for Internet Experimentation: K.O. McGraw, M.D. Tew and J.E. Williams, PsychExps: An On-Line Psychology Laboratory. J. Baron and M. Siepmann, Techniques for Creating and Using Web Questionnaires in Research and Teaching. G. Francis, I. Neath, and A. Surprenant, The Cognitive Psychology Online Laboratory. W.C. Schmidt, The Server Side of Psychology Web Experiments. Conclusion. References. Glossary of Web Terms. Index. <s> BIB005 </s> The state of web-based research: A survey and call for inclusion in curricula <s> A brief history <s> Publisher Summary This chapter discusses the history of Web-based research experiments. The computerized experimenting was first introduced in 1970s. It revolutionized traditional laboratory research with its attractive new features such as standardized and controlled presentation of stimuli, item-branching capabilities, immediacy of data entry, elimination of missing responses, elimination of transcription costs and errors, and accurate measurements of response times. Today, most human experimental research in psychology is aided by computer automation. Usage of extending computerized experimenting beyond single PCs, local computer networks is collecting data rapidly and programs are being written in high-level languages such as C ++ , Pascal, or Delphi, or with program packages such as Super-Lab, Psy-Scope and MEL to optimize the data collecting tools in order to get quality data. Although, at the moment, the number of Web experiments is still small, a rapid growth can be predicted on the basis of the present result. 
<s> BIB006 </s> The state of web-based research: A survey and call for inclusion in curricula <s> A brief history <s> Whereas in a classical laboratory setting participants often feel compelled to stay and finish the experiment, participants in online studies can leave the session at any time. Though, from an ethical point of view this is an advantage of online studies, it might pose methodological problems. Of course, web experimenters would like their participants to stay until the end of the experiment. To ensure this they use special techniques. One such technique is to make web pages shorter and more attractive the further the participant gets. If a web page has a long loading time at the beginning participants with small interest or little time leave right away without even starting the experiment. This “high hurdle technique” is particularly effective in combination with a warm-up phase (Reips, 1996, 1999). A second frequently used technique to prevent participants from leaving is to initially announce a lottery with prizes, in which only those who finish the experiment can take part. Whether or not this procedure is a successful method to reduce the drop-out rate has never been examined experimentally. One might argue that promising financial incentives is negligible in reducing drop-out or might even reduce the intrinsic motivation of the potential participant (Deci, 1975). A survey among 21 web experimenters recently conducted by Musch and Reips (in press) suggests that this is not the case. In contrast to the expectation of a purely intrinsic motivation to participate in online studies they found a clear link between lack of financial incentives and drop-out rate. A monetary prize might diminish dropout tendency whenever intrinsically motivating factors are not sufficient. The web experiment at hand has been conducted to further investigate the causal nature of the relationship between financial incentives and drop-out. 
The experiment was also designed to test the hypothesis that asking participants for personal information early in the experiment would lead to increased drop-out as well as different answering behavior in questions that are likely to be influenced by social desirability. Participants’ answers might be more strongly influenced by social norms, if they believe they could be identified (e. g., by their e-mail address). Or they might discontinue participation in the experiment if they realize that their behavior would force them to answer contrary to what is usually desired or accepted. The question whether personal or demographic data should be assessed at the beginning <s> BIB007 </s> The state of web-based research: A survey and call for inclusion in curricula <s> A brief history <s> In fall 1995, the worldwide-accessible Web Experimental Psychology Lab (http://www.genpsylab. unizh.ch) opened its doors to Web surfers and Web experimenters. It offers a frequently visited place at which to conduct true experiments over the Internet. Data from 5 years of laboratory running time are presented, along with recommendations for setting up and maintaining a virtual laboratory, including sections on the history of the Web laboratory and of Web experimenting, the laboratory’s structure and design, visitor demographics, the Kids’ Experimental Psychology Lab, access statistics, administration, software and hardware, marketing, other Web laboratories, data security, and data quality. It is concluded that experimental data collection via the Internet has proven to be an enrichment to science. Consequently, the Web Experimental Psychology Lab will continue and extend its services to the scientific community. 
<s> BIB008 </s> The state of web-based research: A survey and call for inclusion in curricula <s> A brief history <s> Internet-based psychological experimenting is presented as a method that needs careful consideration of a number of issues-from potential data corruption to revealing confidential information about participants. Ten issues are grouped into five areas of actions to be taken when developing an Internet experiment (dos) and five errors to be avoided (don'ts). Dos include: (a) utilizing dropout as a dependent variable, (b)the use of dropout to detect motivational confounding, (c) placement of questions for personal information, (d) using a collection of techniques, and (e) using Internet-based tools. Don'ts are about: (a) unprotected directories, (b) public access to confidential data, (c) revealing the experiment's structure, (d) ignoring the Internet's technical variance, and (e) improper use of form elements. <s> BIB009 </s> The state of web-based research: A survey and call for inclusion in curricula <s> A brief history <s> Introduction. Getting Started: A Step-by-Step Guide to Using a Web Server. HTML: How to Make a Web Page from Scratch. HTML Forms: Collecting Research Data from Participants via the Internet. An Introduction to CGI Scripting: Using Perl to Automatically Save Response Data to a File. Providing Customized Feedback to Research Participants. Randomizing the Order of Stimuli. Random Assignment of Participants to Conditions. Using Multiple Web Pages in Research: Carrying Responses Forward from One Page to the Next. Using Conditional Branching Structures: An Example of 'Skip Patterns' in Survey Research. Advanced Feedback: Summarizing Data with Bar Graphs and Two-Dimensional Plots. Tracking Participants over Multiple Sessions: PINs, Passwords, and Completion Menus. Measuring Reaction Times. Additional Applications of Perl: Discussion Forums and Scored Tests. Wrapping It Up: Server Maintenance, Ethics, Security, Sampling. 
<s> BIB010 </s> The state of web-based research: A survey and call for inclusion in curricula <s> A brief history <s> The Web Experiment List (http://genpsylab-wexlist.unizh.ch/), a free Web-based service for the recruitment of participants in Internet-based experiments, is presented. The Web Experiment List also serves as a searchable archive for the research community. It lists more than 250 links to and descriptions of current and past Web experiments. Searches can be conducted by area of research, language, type of study, date, and status (active vs. archived). Data from log file analyses reveal an increasing use of the Web Experiment List and provide a picture of the distribution of the use of the Web experiment method across disciplines. On a general theoretical note, Web services are discussed as a viable software alternative to the traditional program format. <s> BIB011 </s> Internet-based psychological experimenting is presented as a method that needs careful consideration of a number of issues—from potential data corruption to revealing confidential information about participants. Ten issues are grouped into five areas of actions to be taken when developing an Internet experiment (dos) and five errors to be avoided (don'ts). Dos include: (a) utilizing dropout as a dependent variable, (b) the use of dropout to detect motivational confounding, (c) placement of questions for personal information, (d) using a collection of techniques, and (e) using Internet-based tools. Don'ts are about: (a) unprotected directories, (b) public access to confidential data, (c) revealing the experiment's structure, (d) ignoring the Internet's technical variance, and (e) improper use of form elements. <s> BIB012
Before examining the current state of web research, it seems good to briefly review how the field got to this point. When the first researchers were using the web, they had no guide other than curiosity about this method. As such, these first researchers were careful and exploratory. The primary focus of a number of the early studies was to determine if the web yielded reliable and valid results (e.g., BIB001 BIB002 ) and if it could help in solving some of the issues that limited laboratory research, such as low power, limited external/ecological validity, and low generalizability BIB003 . It is a bit of an understatement, given the large number of online studies conducted today, to say that many of the early findings were encouraging. Still, these researchers faced issues that were thought to potentially impact the quality of the data. Some of the issues that have been studied are the high rates of drop-out of web participants, the possibility of data fraud, multiple submissions, differences in measurement, and the possibility that web samples differ from laboratory samples and, for some research purposes, from the population at large. These issues led some of these early researchers to examine the problems and, if possible, find ways to ameliorate them. Many researchers collected both laboratory and web samples to compare the results (e.g., Krantz, 1997 , among others). In this way, differences between web and laboratory samples could be observed. Sometimes the laboratory and web led to different results (see BIB002 for a summary of some of these early studies), leading to what later became known as the (non-)equivalence debate (e.g., ). Ulf-Dietrich Reips (e.g., 1996 , 1999 ) examined and developed techniques such as the high hurdle and warm-up techniques to reduce dropout during the study BIB007 .
He also pioneered the multiple-site entry technique as a way to determine whether different ways of accessing the study, or different sampling, lead to different responses in studies BIB003 BIB009 . Researchers also examined different methods to test for data fraud . The growth of the use of the web was rapid BIB008 . Data in Fig. 1 from three prominent sites that list online psychological studies show the rapid growth of psychological research in the early years of web research BIB011 . Other evidence of the growth of web research as a legitimate method can be found in the writing of textbooks on the topic. The first textbook was by BIB004 , who developed an approach he termed the lowest common denominator (or "bare bones") approach, using the simplest techniques possible to minimize the barrier between participant and experiment. Particularly in the early days of research, when bandwidth was more limited, using sophisticated methods for web research could impose a limitation on the sample collected. The method used HTML, simple JavaScript, and CGI for data collection. The NSF and APA sponsored several advanced training institutes with Birnbaum, Göritz, Krantz, McClelland, McGraw, Reips, Schmidt, and Williams (materials at http://ati.fullerton.edu/ or, in a more recent version, http://iscience.uni-konstanz.de/archive/reips/upto2005site/) to help scholars learn these techniques. After attending one of these training institutes, BIB010 developed a text with a more advanced technical approach to developing online studies. However, this text still relied on CGI for data collection, a method many modern servers do not employ.
Early edited books include Internet für Psychologen (Batinic, 1997) , Online Research (Batinic, Werner, Gräf, & Bandilla, 1999) , Psychological Experiments on the Internet BIB005 , Dimensions of Internet Science (Reips & Bosnjak, 2001) , Online Social Sciences (Batinic, Reips, & Bosnjak, 2002) , and The Oxford Handbook of Internet Psychology (Joinson, McKenna, Postmes, & Reips, 2007) , with a section on Internet-based research. Moreover, in 2010, the APA published an advanced text on web research (edited by Gosling & Johnson, 2010) . Notable are two early special journal issues, one in Experimental Psychology, edited by BIB012 , and one in Social Science Computer Review, edited by . More recently, several articles have examined the use of crowdsourcing as a method of collecting data. The most common method has been to use Amazon's Mechanical Turk (e.g., Buhrmester, Kwang, & Gosling, 2011 ; ). Participants are called workers, as they are paid and might do any number of tasks, not just psychological studies. The idea is that these workers would be better motivated to complete the studies since they are paid, small sums usually, for completion. Many studies find that data quality is comparable to studies posted on the open web, but there are issues of non-naïveté among participants, as they often repeat similar types of studies and because they organize in unions and use forums to communicate about the tasks.

Fig. 1 The rapid increase in the number of studies posted on two of the major sites for posting studies in the early years of web research (Krantz, 1996 ; Reips, 1995 ). Data from the Web Experimental Psychology Lab include data from his later site, the web experiment list BIB011 .

Our own work finds Turkers to produce lower quality data than participants from other online sources, e.g.
in a personality test development task they had faster response times (and increasingly so), and out of 64 items with different means, Turkers scored more toward the middle of the scale on 50 items .

A picture of the current state

At the current time, there is vigorous use of the web for psychological research and related fields. Experienced researchers and students use the web frequently to conduct research. For example, posted links to over 500 studies last year. Many of the studies are being conducted by student researchers, both undergraduate and graduate. However, there does not seem to be a coherent approach to educating these new researchers. Both Birnbaum (2001) and BIB010 deal with methods of data communication that many servers do not support. Moreover, these books do not cover more recent methods for data communication such as AJAX and JSON. In addition, a quick review of textbooks for undergraduate research methods classes finds that none of them cover online methodologies (e.g., Lewandowski Jr., Ciarocco, & Strohmetz, 2016 ; Nestor & Schutt, 2015 ). Several cover specialized methods such as qualitative methodologies, case studies, and single-subject designs, but no mention is made of doing research online, let alone the specialized techniques appropriate for doing research online. This lack of coverage stands in stark contrast to the number of undergraduates engaging in online research as part of their undergraduate education. Both authors can attest to posting a large number of links to undergraduate research studies on their sites BIB011 . This use of online research methodology by undergraduates has also been mentioned by colleagues (e.g., Mangan, personal communication, 17 November, 2016) and is evident from many departments licensing commercial online software and from the many invitations to teach workshops and summer schools that the authors and other pioneers of Internet-based research have received and keep receiving.
It seems, given the frequency of posting of online studies by undergraduates, that students are more likely to encounter online research methods than the important but less common methods, such as case studies and single-subject designs, that undergraduate textbooks do cover. Anecdotal evidence available to the authors points to some consequences of this lack of education in online research methods. In our experience posting links to online studies, we have continuously encountered inadequately constructed studies. For example, it is not uncommon to see studies with titles that are full of demand characteristics. One author (JK) just recently had a research supervisor e-mail him to correct a title for exactly this reason, a first. Titles are often needlessly long, written obviously for an academic audience, and will not communicate clearly to a general audience. BIB009 lists five common methodological and security issues he frequently observed in Internet-based experimenting: unprotected directories, public access to confidential data, revealing the experiment's design and/or structure, ignoring the Internet's technical variance, and, very frequently, improper use of form elements. For example, he states that about one-third of studies submitted at the time for inclusion with the Web Experimental Psychology Lab BIB008 or the web experiment list BIB011 contain dysfunctional or biasing form elements, such as selection menus with pre-selected content options that will enter the pre-selected value if the participant skips the item. Figure 2 shows several of these widespread mistakes as they appear in real examples from the web. Even more problematic, as they are ethically questionable, are studies that carelessly use materials originally intended for limited offline use (e.g., face picture databases), use deception, or address sensitive topics, which carries special issues when the researcher is not present with the participant.
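The pre-selected-menu problem described above can be caught mechanically before a study goes live. The sketch below is a hypothetical pre-flight check (the item data format is invented for illustration): it flags any selection menu whose default option carries a substantive value, since a participant who skips such an item would silently record that value.

```javascript
// Hypothetical pre-flight check over a study's form definition: flag
// selection menus whose default is a substantive option rather than a
// neutral "please choose" entry with an empty value.
function findBiasingMenus(items) {
  return items
    .filter((item) => item.type === "select")
    .filter((item) => {
      const def = item.options[item.defaultIndex];
      return def !== undefined && def.value !== ""; // "" = neutral default
    })
    .map((item) => item.name);
}

const items = [
  { type: "select", name: "age", defaultIndex: 0,
    options: [{ label: "please choose", value: "" },
              { label: "18-24", value: "18-24" }] },
  { type: "select", name: "mood", defaultIndex: 1, // biasing: "good" pre-selected
    options: [{ label: "please choose", value: "" },
              { label: "good", value: "good" }] },
];

console.log(findBiasingMenus(items)); // ["mood"]
```

A study-building tool could run a check like this at publication time, in the spirit of the built-in nudges discussed later in this article.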
There are even studies submitted for posting that lack basics such as browser or smartphone compatibility, contact information, or informed consent for the participant. These problems could well be the result of the lack of education in online research methods mentioned above. However, as scholars, we are all aware of the problems of anecdotal evidence. Thus, a survey of current researchers using the web was conducted. The survey serves as an update of the survey by BIB006 conducted on early experimental researchers on the web. Many of the questions were copied over, with some new questions added and a few modifications to deal with the changes since the original survey was conducted. While this survey asks some of the same questions as Gureckis et al. (2015), there are a few differences. The questions, while overlapping, are not the same. The present survey is more comparable to Musch and Reips and asks a broader array of questions. Moreover, the present survey examines those who have done online research, while Gureckis et al. also included participants who had not conducted an online study. Where there is overlap, the similarities and differences will be noted.
The state of web-based research: A survey and call for inclusion in curricula <s> Results <s> A portable warning device is mounted on the deck of a boat at anchorage in a harbor and provides an audible warning signal in response to changes in the wind direction relative to the boat. The warning device comprises a pivotable wind vane for sensing the wind direction and a movable electric contact is connected to the wind vane to undergo pivotal movement therewith. A pair of annular electric contact segments are disposed stationary relative to the boat and positioned in spaced-apart relationship along the path of travel of the movable contact to alternatively engage with the movable contact whenever the sensed wind direction relative to the boat exceeds a certain value. A normally open electric circuit is electrically connected to the movable contact and the contact segments such that movement of the movable contact into engagement with either one of the electric contact segments closes the electric circuit. The electric circuit includes a warning device operative when actuated to emit the audible warning signal, and a time delay device for delaying the actuation of the warning device for a predetermined time period after the electric circuit is closed. <s> BIB001 </s> The state of web-based research: A survey and call for inclusion in curricula <s> Results <s> This article reviews recent findings that violate a broad class of descriptive theories of decision making. A new study compared 1,224 participants tested via the Internet and 124 undergraduates tested in the laboratory. Both samples confirmed systematic violations of stochastic dominance and cumulative independence; new tests also found violations of coalescing. The Internet sample was older, more highly educated, more likely male, and also more demographically diverse than the lab sample. 
Internet participants were more likely than undergraduates to choose the gamble with higher expected value, but no one conformed exactly to expected value. Violations of stochastic dominance decreased as education increased, but violations of stochastic dominance and coalescing were still substantial in persons with doctoral degrees who had read a scientific work on decision making. In their implications, Internet research and lab findings agree: Descriptive decision theories cannot assume that identical consequences c... <s> BIB002 </s> The state of web-based research: A survey and call for inclusion in curricula <s> Results <s> Publisher Summary This chapter discusses the history of Web-based research experiments. The computerized experimenting was first introduced in 1970s. It revolutionized traditional laboratory research with its attractive new features such as standardized and controlled presentation of stimuli, item-branching capabilities, immediacy of data entry, elimination of missing responses, elimination of transcription costs and errors, and accurate measurements of response times. Today, most human experimental research in psychology is aided by computer automation. Usage of extending computerized experimenting beyond single PCs, local computer networks is collecting data rapidly and programs are being written in high-level languages such as C ++ , Pascal, or Delphi, or with program packages such as Super-Lab, Psy-Scope and MEL to optimize the data collecting tools in order to get quality data. Although, at the moment, the number of Web experiments is still small, a rapid growth can be predicted on the basis of the present result. <s> BIB003 </s> The state of web-based research: A survey and call for inclusion in curricula <s> Results <s> One major advantage of Web-based research lies in its ability to reach and study people who have rare conditions of interest. 
Another advantage is that, due to the anonymity of the survey situation, the Internet is particularly suited for surveys on sensitive topics. Sexsomnia is a newly identified medical condition whose sufferers engage in sexual behavior during their sleep. Problematic cases are highly distressing and have forensic implications. The consensus among opinion leaders in sleep medicine is that sexsomnia may be quite common but that it often goes unreported because of shame and embarrassment. Thus, little is known about this condition’s demographics and clinical features. This article reports findings from a sample analysis of 20 years of research on sexsomnia and discusses the results, strengths, and weaknesses of a recent Web-based survey conducted on the difficult-to-reach clinical population that suffers from sexsomnia. <s> BIB004 </s> The state of web-based research: A survey and call for inclusion in curricula <s> Results <s> The present article describes VAS Generator (www.vasgenerator.net), a free Web service for creating a wide range of visual analogue scales that can be used as measurement devices in Web surveys and Web experimentation, as well as for local computerized assessment. A step-by-step example for creating and implementing a visual analogue scale with visual feedback is given. VAS Generator and the scales it generates work independently of platforms and use the underlying languages HTML and JavaScript. Results from a validation study with 355 participants are reported and show that the scales generated with VAS Generator approximate an interval-scale level. In light of previous research on visual analogue versus categorical (e.g., radio button) scales in Internet-based research, we conclude that categorical scales only reach ordinal-scale level, and thus visual analogue scales are to be preferred whenever possible. <s> BIB005
One of the first questions examined by BIB003 is a set of questions asking "How important were the following factors for your decision to conduct your research on the web?" The questions were asked on a 7-point Likert scale scored from 0 to 6, with higher ratings indicating greater importance. The answers ranged from "Not important at all" to "Very important." The same questions were asked in the current study, with an additional question to rate the ease of doing the study. Figure 3 shows the averages and 95% confidence intervals for the ratings on these questions. The dark bars are from the BIB003 study and the lighter bars are from the current participants. Overall, it seems that similar factors were important to the early and current researchers. Number of participants and statistical power are very important to both the original and current participants. Large sample size also turned up as important to almost all of the respondents in Gureckis et al. (2015). It is interesting to note that this desire for larger sample sizes does not seem to translate into overall greater statistical power. One apparent difference between the present results and those of Gureckis et al. is that almost all of their participants reported fast data collection as important, the most often reported benefit in that study, while speed is important but not the most important item in either the present survey or BIB003. The difference may lie in the way participants responded: in Gureckis et al., the number of participants that selected each option was recorded, whereas in the present study and that by Musch and Reips, the participants rank ordered the importance of speed of data collection. These combined studies suggest that speed is important to almost all researchers but perhaps rarely the most important criterion in doing a web study. The ability to replicate lab studies and reach special populations remains less important.
The lack of interest in reaching special populations is somewhat perplexing, as this is one of the unique abilities of online research to greatly extend the boundaries of psychological knowledge BIB002 BIB004. The cost of the study is more important in the current sample. This factor was mentioned by about 75% of the respondents in Gureckis et al. (2015). The change may reflect the larger number of undergraduate students, changes in the ease of funding research, or a change in the population that does online research; a combination of reasons is also possible. The newer question about the ease of doing the study is also very highly rated. The current data suggest that cost and ease of study are the two most important factors in doing online research currently, though number of participants is nearly as highly rated. The next question asked by BIB003 had researchers rank responses to a series of issues related to the question, "How problematic do you think were the following potential problems in your study?" The same scale and anchors were used. The results from both BIB003 and the current sample are shown in Fig. 4. The pattern of concerns is very similar for both groups. Most of the concerns are at the midpoint of the rankings of importance or below. Two issues that trend toward being more important in the current sample are manipulation/fraud and ethical problems, but these issues are still not seen as very important. It is possible that the original sample was more concerned with hardware issues, but that study focused on experiments, which might be more affected by hardware, and there was great variation over that concern in that small sample. The next two sets of questions were asked only of the current sample.
The first unique question to be examined deals with how researchers "determine the quality of your study design and instruments." Rigorous testing of a study is particularly important on the web, since a participant might use a wide range of devices and be in a wide range of environments (Krantz & Dalal, 2001). The importance has only grown given the increased use of mobile devices, particularly phones, to run studies; for example, Reips (e.g., 2002, 2010) regularly emphasizes to undergraduates in his courses the various stages of pre-testing online study materials with different types of pre-testers (experimenter, experts, friends, a sample from the sampled population), because lack of pre-testing is one of the largest predictors of failures in Internet-based research. Figure 5 shows the percentage of respondents from the current sample who indicated that they used each of these testing methods. They could select any number of options. As can be seen, the most common way to test a study is to use "pre-existing and tested materials." No other testing method reaches 50% of the participants. Two researchers indicated an "other" method of testing. In the comments section for this question, one indicated having a non-researcher run through the survey and comment, and the other indicated using the method of survey testing outlined by de BIB001. The final questions to be examined here regard the familiarity of the participants with research in general and online research in particular. The participants ranked their experience as a researcher, as a web researcher, and with the literature on web research on a visual analogue scale BIB005. The anchors were "Novice," scored 0, and "Highly Skilled," scored 200. The results of their responses to these questions are shown in Fig. 6. The bars indicate mean responses, the error bars are standard deviations, and the dots are the individual responses. There are only 64 responses to these questions.
As can be seen clearly from the graph, there is a wide range of responses to these questions, with the mean capturing little of the information. The lowest average ranking is in the self-reported knowledge of the literature on web research, but participants used nearly the full range of the scale on all three questions. While the range is still large on the self-reported knowledge of the web research literature, it does suggest that many researchers are not reading extensively before performing an online study. Bolstering this conclusion, there is a troublingly strong positive correlation between web research experience and familiarity with the web literature, r(62) = 0.75, p < 0.01. The least experienced web researchers do not seem to be spending the time reading the literature and learning its content before conducting their study.
[Fig. 3 caption: The dark bars are from Musch and Reips (2008) and the light bars are from the current study. The overall pattern is similar, but it seems that cost has become much more important to current researchers. Ease of study is also important to these researchers.]
Adding this correlation to the observation of the lack of information on web research methods in textbooks and the reliance on previously used materials, it suggests a lack of preparation for doing online research among the least experienced researchers (Fig. 6).
The state of web-based research: A survey and call for inclusion in curricula <s> Discussion <s> Publisher Summary This chapter discusses the history of Web-based research experiments. The computerized experimenting was first introduced in 1970s. It revolutionized traditional laboratory research with its attractive new features such as standardized and controlled presentation of stimuli, item-branching capabilities, immediacy of data entry, elimination of missing responses, elimination of transcription costs and errors, and accurate measurements of response times. Today, most human experimental research in psychology is aided by computer automation. Usage of extending computerized experimenting beyond single PCs, local computer networks is collecting data rapidly and programs are being written in high-level languages such as C ++ , Pascal, or Delphi, or with program packages such as Super-Lab, Psy-Scope and MEL to optimize the data collecting tools in order to get quality data. Although, at the moment, the number of Web experiments is still small, a rapid growth can be predicted on the basis of the present result. <s> BIB001 </s> The state of web-based research: A survey and call for inclusion in curricula <s> Discussion <s> A short five-factor personality inventory developed from the International Personality Item Pool (IPIP) was implemented as an online questionnaire and completed by 2,448 participants. Following factor analyses, a revised version was created with acceptable reliability and factor univocal scales. As preliminary evidence of construct validity, support was found for 25 hypothesized links with self-reports of relevant behaviors and demographic variables. In a replication using a different recruiting strategy to test for differences due to motivational factors, similar results were obtained. This set of scales appears to provide acceptable measures of the Five-Factor Model for use in internet-mediated research.
<s> BIB002 </s> The state of web-based research: A survey and call for inclusion in curricula <s> Discussion <s> Online experiments are growing in popularity, and the increasing sophistication of Web technology has made it possible to run complex behavioral experiments online using only a Web browser. Unlike with offline laboratory experiments, however, few tools exist to aid in the development of browser-based experiments. This makes the process of creating an experiment slow and challenging, particularly for researchers who lack a Web development background. This article introduces jsPsych, a JavaScript library for the development of Web-based experiments. jsPsych formalizes a way of describing experiments that is much simpler than writing the entire experiment from scratch. jsPsych then executes these descriptions automatically, handling the flow from one task to another. The jsPsych library is open-source and designed to be expanded by the research community. The project is available online at www.jspsych.org . <s> BIB003 </s> The state of web-based research: A survey and call for inclusion in curricula <s> Discussion <s> Online data collection has begun to revolutionize the behavioral sciences. However, conducting carefully controlled behavioral experiments online introduces a number of new of technical and scientific challenges. The project described in this paper, psiTurk, is an open-source platform which helps researchers develop experiment designs which can be conducted over the Internet. The tool primarily interfaces with Amazon’s Mechanical Turk, a popular crowd-sourcing labor market. This paper describes the basic architecture of the system and introduces new users to the overall goals. psiTurk aims to reduce the technical hurdles for researchers developing online experiments while improving the transparency and collaborative nature of the behavioral sciences. 
<s> BIB004 </s> The state of web-based research: A survey and call for inclusion in curricula <s> Discussion <s> In recent years, Mechanical Turk (MTurk) has revolutionized social science by providing a way to collect behavioral data with unprecedented speed and efficiency. However, MTurk was not intended to be a research tool, and many common research tasks are difficult and time-consuming to implement as a result. TurkPrime was designed as a research platform that integrates with MTurk and supports tasks that are common to the social and behavioral sciences. Like MTurk, TurkPrime is an Internet-based platform that runs on any browser and does not require any downloads or installation. Tasks that can be implemented with TurkPrime include: excluding participants on the basis of previous participation, longitudinal studies, making changes to a study while it is running, automating the approval process, increasing the speed of data collection, sending bulk e-mails and bonuses, enhancing communication with participants, monitoring dropout and engagement rates, providing enhanced sampling options, and many others. This article describes how TurkPrime saves time and resources, improves data quality, and allows researchers to design and implement studies that were previously very difficult or impossible to carry out on MTurk. TurkPrime is designed as a research tool whose aim is to improve the quality of the crowdsourcing data collection process. Various features have been and continue to be implemented on the basis of feedback from the research community. TurkPrime is a free research platform. <s> BIB005
The findings of the current survey indicate that, compared to BIB001, current researchers are more concerned about the low cost of doing an online study than before. The ease of doing a study is also highly rated. In addition, current participants trend toward being more concerned about fraud and ethical issues than the original participants, but this increase is tempered by the fact that they still do not indicate great concern with either issue. Researchers seem to rely on previously validated materials for testing their studies and indicate a wide range of familiarity with the literature on web research. On the positive side, many researchers do take the time to gain experience with web research in particular and with the literature on doing web research. It is clear that some researchers are aware of the need to test their studies and validate their particular study. However, there are several signs of concern. Beginning with the lack of presentation of web research in undergraduate textbooks, there is a string of issues suggesting that many web researchers do not approach conducting an online study thinking about the unique issues raised by these online methods. First, researchers seem primarily motivated by number of participants, cost, and ease when choosing the web as a research platform (Fig. 3). It is particularly noteworthy that the ability to access special populations is not a highly ranked reason for doing web-based research. These choices suggest that the principal motivation for doing online research is convenience rather than consideration of whether this method is the best for getting the answers sought.
When these observations are combined with the modest level of concern about issues of online research, the level of testing of studies, and the fact that some researchers engage in online research with both little experience and little knowledge of web research methods, there is cause for concern that too much of the web research being conducted is being conducted inadequately. It would be interesting to see what happens as more researchers use the emerging technologies that help them develop online studies (e.g., de BIB003; BIB004; BIB005). Some of these tools are linked to crowd-sourcing, particularly Amazon Mechanical Turk BIB004 BIB005, but others seem designed to help with studies using the web at large BIB003. On the positive side, many of the best practices of web research can be incorporated into such tools, which will reduce the need for researchers to be informed about these best practices themselves. For example, Reips designed WEXTOR (http://wextor.eu) from the beginning to automatically guide and nudge study authors into using best practices (e.g., non-obvious file naming) when creating web experiments with the tool. However, it seems unwise to rely completely on the study development platform to take care of all the pertinent design principles. A useful feature of these technologies would be tutorials and queries that help researchers know what practices they ought to follow. For example, testing a study on multiple platforms is quite important; in experiments, this step can be vital. A built-in query could ask whether the study has been tested on different platforms when it is about to be published, much like the queries you get when you try to delete a file. With so much of the literature available online these days, links to pertinent papers in the tool would be helpful as well.
It should still be noted that these development tools do not help with one of the most persistent issues in all psychological research performed on computers: the reliance on consumer-grade equipment. Our survey has its limitations. For example, as a self-report it cannot tell the difference between what is claimed and what is true about behavior and knowledge. While many researchers reported that their materials had been validated, we do not know whether they were validated for use on the Internet; in fact, many researchers may not be aware of the related literature showing that an instrument needs to be tested in the mode in which it is later to be used, i.e., online study materials need to be validated for online use BIB002. Of course, the rapid development of Internet-based research methods, including the use of mobile devices for tracking throughout a period of time (e.g., Stieger, Lewetz, & Reips, manuscript submitted for publication) and novel ways of using these devices for non-reactive measurement, such as via the accelerometer (Kuhlmann, Reips, & Stieger, 2017), could make textbook authors leery of adding web-based methods, as they are in constant flux. However, most students will not be using the most advanced methods, and the development of the apps for these studies is beyond the ability of most students. Most students will be conducting either experiments or surveys over the web and as such would benefit from basic instruction in web-based research methods. Perhaps the best response to the data here is for faculty whose students conduct research on the web to contact the publishers of their research methods textbooks about adding content related to web-based research methods.
Feature Selection of Gene Expression Data for Cancer Classification: A Review☆ <s> Feature selection method <s> In this paper, we review the problem of selecting relevant features for use in machine learning. We describe this problem in terms of heuristic search through a space of feature sets, and we identify four dimensions along which approaches to the problem can vary. We consider recent work on feature selection in terms of this framework, then close with some challenges for future work in the area. 1. The Problem of Irrelevant Features accuracy) to grow slowly with the number of irrelevant attributes. Theoretical results for algorithms that search restricted hypothesis spaces are encouraging. For instance, the worst-case number of errors made by Littlestone's (1987) WINNOW method grows only logarithmically with the number of irrelevant features. Pazzani and Sarrett's (1992) average-case analysis for WHOLIST, a simple conjunctive algorithm, and Langley and Iba's (1993) treatment of the naive Bayesian classifier, suggest that their sample complexities grow at most linearly with the number of irrelevant features. However, the theoretical results are less optimistic for induction methods that search a larger space of concept descriptions. For example, Langley and Iba's (1993) average-case analysis of simple nearest neighbor indicates that its sample complexity grows exponentially with the number of irrelevant attributes, even for conjunctive target concepts. Experimental studies of nearest neighbor are consistent with this conclusion, and other experiments suggest that similar results hold even for induction algorithms that explicitly select features.
For example, the sample complexity for decision-tree methods appears to grow linearly with the number of irrelevants for conjunctive concepts, but exponentially for parity concepts, since the evaluation metric cannot distinguish relevant from irrelevant features in the latter situation (Langley & Sage, in press). Results of this sort have encouraged machine learning researchers to explore more sophisticated methods for selecting relevant features. In the sections that follow, we present a general framework for this task, and then consider some recent examples of work on this important problem. <s> BIB001 </s> Feature Selection of Gene Expression Data for Cancer Classification: A Review☆ <s> Feature selection method <s> Abstract In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a feature subset selection method should consider how the algorithm and the training set interact. We explore the relation between optimal feature subset selection and relevance. Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show a series of improved designs. We compare the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection. Significant improvement in accuracy is achieved for some datasets for the two families of induction algorithms used: decision trees and Naive-Bayes.
<s> BIB002 </s> Feature Selection of Gene Expression Data for Cancer Classification: A Review☆ <s> Feature selection method <s> One of the objectives of designing feature selection learning algorithms is to obtain classifiers that depend on a small number of attributes and have verifiable future performance guarantees. There are few, if any, approaches that successfully address the two goals simultaneously. To the best of our knowledge, such algorithms that give theoretical bounds on the future performance have not been proposed so far in the context of the classification of gene expression data. In this work, we investigate the premise of learning a conjunction (or disjunction) of decision stumps in Occam's Razor, Sample Compression, and PAC-Bayes learning settings for identifying a small subset of attributes that can be used to perform reliable classification tasks. We apply the proposed approaches for gene identification from DNA microarray data and compare our results to those of the well-known successful approaches proposed for the task. We show that our algorithm not only finds hypotheses with a much smaller number of genes while giving competitive classification accuracy but also having tight risk guarantees on future performance, unlike other approaches. The proposed approaches are general and extensible in terms of both designing novel algorithms and application to other domains. <s> BIB003
Two types of feature selection methods have been studied: filter methods BIB001 and wrapper methods BIB002. Filter methods are essentially data preprocessing or data filtering methods: features are selected based on intrinsic characteristics that determine their relevance or discriminative power with regard to the target classes. In wrapper methods, feature selection is "wrapped" around a learning method: the usefulness of a feature is directly judged by the estimated accuracy of the learning method. Wrapper methods typically require extensive computation to search for the best features. Filter methods use a discriminating criterion for feature selection; a correlation coefficient or a statistical test such as the t-test or F-test is used to filter the features. Many interesting results were obtained by researchers aiming to distinguish between two or more types of cells (e.g., diseased versus normal, or cells with different types of cancers) based on gene expression data from DNA microarrays. Since microarray data sets contain a large number of attributes, their analysis is complex for researchers. A small subset of genes is easier to analyze than the full set of genes available on DNA microarray chips. Therefore it is important to focus on very few genes that give insight into the class association for a microarray sample. This also makes it relatively easier to deduce biological relationships among them as well as to study their interactions. In BIB003 the authors obtained feature selection algorithms for classification with tight realizable guarantees on their generalization error. The proposed approaches are a step toward more general learning strategies that combine feature selection with the classification algorithm and have tight realizable guarantees.
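As a concrete illustration of the filter approach described above, each gene can be scored by the absolute two-sample t statistic between the two classes and only the top-ranked genes retained. This is a sketch with invented toy data, not code from any of the reviewed papers:

```javascript
// Illustrative filter-method sketch: rank each gene by the absolute
// two-sample (Welch) t statistic between class A and class B expression
// values, then keep the k highest-scoring genes.
function mean(xs) { return xs.reduce((a, b) => a + b, 0) / xs.length; }
function variance(xs) {
  const m = mean(xs);
  return xs.reduce((a, b) => a + (b - m) ** 2, 0) / (xs.length - 1);
}
function tStat(a, b) {
  const se = Math.sqrt(variance(a) / a.length + variance(b) / b.length);
  return (mean(a) - mean(b)) / se;
}
// genes: { name: { a: [...], b: [...] } } expression values per class
function selectTopGenes(genes, k) {
  return Object.entries(genes)
    .map(([name, g]) => ({ name, score: Math.abs(tStat(g.a, g.b)) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((g) => g.name);
}

const genes = {
  g1: { a: [5.1, 5.3, 5.0], b: [1.0, 1.2, 0.9] }, // strongly separated classes
  g2: { a: [2.0, 2.1, 1.9], b: [2.0, 2.2, 1.8] }, // uninformative gene
};
console.log(selectTopGenes(genes, 1)); // ["g1"]
```

Note that the score is computed without consulting any classifier, which is exactly what distinguishes a filter method from a wrapper method, where the candidate subset would instead be evaluated by the estimated accuracy of the learning algorithm itself.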
They considered the classification of microarray data, where the attributes of a data sample correspond to the expression-level measurements of various genes. They chose decision stumps as the learning bias, a choice motivated in part by this application. [6] In this paper an evolutionary rough feature selection algorithm for classifying microarray gene expression patterns was introduced. Microarray data typically contain a large number of redundant features; therefore an initial redundancy reduction of attributes was performed to enable faster convergence. The main aim was to retain only those genes that play a vital role in discerning between objects. Rough set theory was employed to generate reducts, which represent the minimal sets of non-redundant features capable of discerning between all objects, in a multiobjective framework.