Dataset schema: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
1908.01549
2964399413
Code obfuscation is a major tool for protecting software intellectual property from attacks such as reverse engineering or code tampering. Yet, recently proposed (automated) attacks based on Dynamic Symbolic Execution (DSE) show very promising results, hence threatening software integrity. Current defenses are not fully satisfactory: they are either inefficient against symbolic reasoning, degrade runtime performance too much, or are too easy to spot. We present and study a new class of anti-DSE protections, coined path-oriented protections, targeting the weakest spot of DSE, namely path exploration. We propose a lightweight, efficient, resistant and analytically proven class of obfuscation algorithms designed to hinder DSE-based attacks. Extensive evaluation demonstrates that these approaches critically counter symbolic deobfuscation while yielding only a very slight overhead.
We have already discussed obfuscation, symbolic execution and symbolic deobfuscation at length throughout the paper, including successful applications of DSE-related techniques to deobfuscation @cite_17 @cite_48 @cite_44 @cite_46. In addition, @cite_2 give an exhaustive survey of program analysis-based deobfuscation, while @cite_30 review DSE, tainting and their applications in security.
{ "cite_N": [ "@cite_30", "@cite_48", "@cite_44", "@cite_2", "@cite_46", "@cite_17" ], "mid": [ "2560252021", "2620363344", "2708742135", "2751013686" ], "abstract": [ "Code obfuscation is widely used by software developers to protect intellectual property, and malware writers to hamper program analysis. However, there seems to be little work on systematic evaluations of effectiveness of obfuscation techniques against automated program analysis. The result is that we have no methodical way of knowing what kinds of automated analyses an obfuscation method can withstand. This paper addresses the problem of characterizing the resilience of code obfuscation transformations against automated symbolic execution attacks, complementing existing works that measure the potency of obfuscation transformations against human-assisted attacks through user studies. We evaluated our approach over 5000 different C programs, which have each been obfuscated using existing implementations of obfuscation transformations. The results show that many existing obfuscation transformations, such as virtualization, stand little chance of withstanding symbolic-execution based deobfuscation. A crucial and perhaps surprising observation we make is that symbolic-execution based deobfuscators can easily deobfuscate transformations that preserve program semantics. On the other hand, we present new obfuscation transformations that change program behavior in subtle yet acceptable ways, and show that they can render symbolic-execution based deobfuscation analysis ineffective in practice.", "Control flow obfuscation techniques can be used to hinder software reverse-engineering. Symbolic analysis can counteract these techniques, but only if they can analyze obfuscated conditional statements. We evaluate the use of dynamic synthesis to complement symbolic analysis in the analysis of obfuscated conditionals. We test this approach on the taint-analysis-resistant Mixed Boolean Arithmetics (MBA) obfuscation method that is commonly used to obfuscate and randomly diversify statements. We experimentally ascertain the practical feasibility of MBA obfuscation. We study using SMT-based approaches with different state-of-the-art SMT solvers to counteract MBA obfuscation, and we show how targeted algebraic simplification can greatly reduce the analysis time. We show that synthesis-based deobfuscation is more effective than current SMT-based deobfuscation algorithms, thus proposing a synthesis-based attacker model to complement existing attacker models.", "Software deobfuscation is a crucial activity in security analysis and especially in malware analysis. While standard static and dynamic approaches suffer from well-known shortcomings, Dynamic Symbolic Execution (DSE) has recently been proposed as an interesting alternative, more robust than staticanalysis and more complete than dynamic analysis. Yet, DSE addresses only certain kinds of questions encountered by a reverser, namely feasibility questions. Many issues arising during reverse, e.g., detecting protection schemes such as opaque predicates, fall into the category of infeasibility questions. We present Backward-Bounded DSE, a generic, precise, efficient and robust method for solving infeasibility questions. We demonstrate the benefit of the method for opaque predicates and call stack tampering, and give some insight for its usage for some other protection schemes. 
Especially, the technique has successfully been used on state-of-the-art packers as well as on the government-grade X-Tunnel malware – allowing its entire deobfuscation. Backward-Bounded DSE does not supersede existing DSE approaches, but rather complements them by addressing infeasibility questions in a scalable and precise manner. Following this line, we proposesparse disassembly, a combination of Backward-Bounded DSE and static disassembly able to enlarge dynamic disassembly in a guaranteed way, hence getting the best of dynamic and static disassembly. This work paves the way for robust, efficient and precise disassembly tools for heavily-obfuscated binaries.", "Software obfuscation transforms code such that it is more difficult to reverse engineer. However, it is known that given enough resources, an attacker will successfully reverse engineer an obfuscated program. Therefore, an open challenge for software obfuscation is estimating the time an obfuscated program is able to withstand a given reverse engineering attack. This paper proposes a general framework for choosing the most relevant software features to estimate the effort of automated attacks. Our framework uses these software features to build regression models that can predict the resilience of different software protection transformations against automated attacks. To evaluate the effectiveness of our approach, we instantiate it in a case-study about predicting the time needed to deobfuscate a set of C programs, using an attack based on symbolic execution. To train regression models our system requires a large set of programs as input. We have therefore implemented a code generator that can generate large numbers of arbitrarily complex random C functions. Our results show that features such as the number of community structures in the graphrepresentation of symbolic path-constraints, are far more relevant for predicting deobfuscation time than other features generally used to measure the potency of controlflow obfuscation (e.g. cyclomatic complexity). Our best model is able to predict the number of seconds of symbolic execution-based deobfuscation attacks with over 90 accuracy for 80 of the programs in our dataset, which also includes several realistic hash functions." ] }
1908.01549
2964399413
Code obfuscation is a major tool for protecting software intellectual property from attacks such as reverse engineering or code tampering. Yet, recently proposed (automated) attacks based on Dynamic Symbolic Execution (DSE) show very promising results, hence threatening software integrity. Current defenses are not fully satisfactory: they are either inefficient against symbolic reasoning, degrade runtime performance too much, or are too easy to spot. We present and study a new class of anti-DSE protections, coined path-oriented protections, targeting the weakest spot of DSE, namely path exploration. We propose a lightweight, efficient, resistant and analytically proven class of obfuscation algorithms designed to hinder DSE-based attacks. Extensive evaluation demonstrates that these approaches critically counter symbolic deobfuscation while yielding only a very slight overhead.
@cite_34 describe, in the setting of automatic testing, the three major weaknesses of DSE: , and . Cadar @cite_19 shows that compiler optimizations can significantly alter the performance of a symbolic analyzer like , confirming the folklore belief that sufficiently aggressive compiler optimizations resemble code obfuscations. That said, the performance penalty is far from offering a strong defense against symbolic deobfuscation.
{ "cite_N": [ "@cite_19", "@cite_34" ], "mid": [ "2330621167", "208073541", "1866018165", "116894366" ], "abstract": [ "A large number of compiler optimizations are nowadays available to users. These optimizations interact with each other and with the input code in several and complex ways. The sequence of application of optimization passes can have a significant impact on the performance achieved. The effect of the optimizations is both platform and application dependent. The exhaustive exploration of all viable sequences of compiler optimizations for a given code fragment is not feasible. As this exploration is a complex and time-consuming task, several researchers have focused on Design Space Exploration (DSE) strategies both to select optimization sequences to improve the performance of each function of the application and to reduce the exploration time. In this article, we present a DSE scheme based on a clustering approach for grouping functions with similarities and exploration of a reduced search space resulting from the combination of optimizations previously suggested for the functions in each group. The identification of similarities between functions uses a data mining method that is applied to a symbolic code representation. The data mining process combines three algorithms to generate clusters: the Normalized Compression Distance, the Neighbor Joining, and a new ambiguity-based clustering algorithm. Our experiments for evaluating the effectiveness of the proposed approach address the exploration of optimization sequences in the context of the ReflectC compiler, considering 49 compilation passes while targeting a Xilinx MicroBlaze processor, and aiming at performance improvements for 51 functions and four applications. Experimental results reveal that the use of our clustering-based DSE approach achieves a significant reduction in the total exploration time of the search space (20× over a Genetic Algorithm approach) at the same time that considerable performance speedups (41p over the baseline) were obtained using the optimized codes. Additional experiments were performed considering the LLVM compiler, considering 124 compilation passes, and targeting a LEON3 processor. The results show that our approach achieved geometric mean speedups of 1.49 × , 1.32 × , and 1.24 × for the best 10, 20, and 30 functions, respectively, and a global improvement of 7p over the performance obtained when compiling with -O2.", "We present the first symbolic execution and automatic test generation tool for C++ programs. First we describe our effort in extending an existing symbolic execution tool for C programs to handle C++ programs. We then show how we made this tool generic, efficient and usable to handle real-life industrial applications. Novel features include extended symbolic virtual machine, library optimization for C and C++, object-level execution and reasoning, interfacing with specific type of efficient solvers, and semi-automatic unit and component testing. This tool is being used to assist the validation and testing of industrial software as well as publicly available programs written using the C++ language.", "Despite the performance potential of multicomputers, several factors have limited their widespread adoption. Of these, performance variability is among the most significant. Execution of some programs may yield only a small fraction of peak system performance, whereas others approach the system's theoretical performance peak. 
Moreover, the observed performance may change substantially as application program parameters vary. Data parallel languages, which facilitate the programming of multicomputers, increase the semantic distance between the program's source code and its observable performance, thus aggravating the performance problem. In this thesis, we propose a new methodology to predict the performance scalability of data parallel applications on multicomputers. Our technique represents the execution time of a program as a symbolic expression that is a function of the number of processors (P), problem size (N), and other system-dependent parameters. This methodology is based on information collected at compile time. By extending an existing data parallel compiler (Fortran D95), we derive, during compilation, a symbolic model that represents the cost of each high-level program section and, inductively, of the complete program. These symbolic expressions may be simplified externally with current symbolic tools. Predicting performance of the program for a given pair @math requires simply the evaluation of its corresponding cost expression. We validate our implementation by predicting scalability of a variety of loop nests, with distinct computation and communication patterns. To demonstrate the applicability of our technique, we present a series of concrete performance problems where it was successfully employed: prediction of total execution time, identification and tracking of bottlenecks, cross-system prediction, and evaluation of code transformations. These examples show that the technique would be useful both to users, in optimizing and tuning their programs, and to advanced compilers, which would have a means to evaluate the expected performance of a synthesized code. According to the results of our study, by integrating compilation, performance analysis and symbolic manipulation tools, it is possible to correctly predict, in an automated fashion, the major performance variations of a data parallel program written in a high-level language.", "In this paper, we study the problem of automatically finding program executions that reach a particular target line. This problem arises in many debugging scenarios; for example, a developer may want to confirm that a bug reported by a static analysis tool on a particular line is a true positive. We propose two new directed symbolic execution strategies that aim to solve this problem: shortest-distance symbolic execution (SDSE) uses a distance metric in an interprocedural control flow graph to guide symbolic execution toward a particular target; and call-chain-backward symbolic execution (CCBSE) iteratively runs forward symbolic execution, starting in the function containing the target line, and then jumping backward up the call chain until it finds a feasible path from the start of the program. We also propose a hybrid strategy, Mix-CCBSE, which alternates CCBSE with another (forward) search strategy. We compare these three with several existing strategies from the literature on a suite of six GNU Coreutils programs. We find that SDSE performs extremely well in many cases but may fail badly. CCBSE also performs quite well, but imposes additional overhead that sometimes makes it slower than SDSE. Considering all our benchmarks together, Mix-CCBSE performed best on average, combining to good effect the features of its constituent components." ] }
1908.01549
2964399413
Code obfuscation is a major tool for protecting software intellectual property from attacks such as reverse engineering or code tampering. Yet, recently proposed (automated) attacks based on Dynamic Symbolic Execution (DSE) show very promising results, hence threatening software integrity. Current defenses are not fully satisfactory: they are either inefficient against symbolic reasoning, degrade runtime performance too much, or are too easy to spot. We present and study a new class of anti-DSE protections, coined path-oriented protections, targeting the weakest spot of DSE, namely path exploration. We propose a lightweight, efficient, resistant and analytically proven class of obfuscation algorithms designed to hinder DSE-based attacks. Extensive evaluation demonstrates that these approaches critically counter symbolic deobfuscation while yielding only a very slight overhead.
Most anti-DSE techniques target the constraint-solving engine through hard-to-solve predicates. The impact of constraint complexification on symbolic deobfuscation has been studied by @cite_11. @cite_41 propose an obfuscation based on Mixed Boolean-Arithmetic (MBA) expressions @cite_23 to complexify point functions, making it harder for solvers to determine the trigger. @cite_28 present a similar obfuscation together with an MBA expression simplifier based on pattern matching and arithmetic simplifications. Cryptographic hash functions hinder current solvers and can replace MBA @cite_5. In general, formula hardness is difficult to predict, and solving such formulas is a hot research topic. Although cryptographic functions have resisted solvers so far, promising attempts @cite_25 exist. More importantly, private keys must also be protected against symbolic attacks, yielding a potentially easier deobfuscation subgoal -- a standard whitebox cryptography issue.
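To make the MBA idea concrete, here is a minimal, hedged Python sketch of rewriting a trigger condition with the standard MBA identity x + y == (x ^ y) + 2*(x & y). It is illustrative only and does not reproduce the constructions of the cited works; the function names and constants are made up for the example.

# Minimal sketch of MBA obfuscation of a trigger condition (illustrative only).
# The rewrite is semantically transparent at runtime but harder for an SMT
# solver to simplify once applied and nested repeatedly.

MASK = 0xFFFFFFFF  # emulate 32-bit arithmetic in Python

def add_plain(x, y):
    return (x + y) & MASK

def add_mba(x, y):
    # One rewriting step; real obfuscators nest such rewrites many times.
    return ((x ^ y) + 2 * (x & y)) & MASK

def trigger_plain(x):
    return add_plain(x, 0xDEAD) == 0xBEEF

def trigger_mba(x):
    return add_mba(x, 0xDEAD) == 0xBEEF

if __name__ == "__main__":
    for x in (0, 1, 0xE042, 12345678):
        assert add_plain(x, 0xDEAD) == add_mba(x, 0xDEAD)
        assert trigger_plain(x) == trigger_mba(x)
    print("MBA rewrite preserves semantics on sampled inputs")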
{ "cite_N": [ "@cite_41", "@cite_28", "@cite_23", "@cite_5", "@cite_25", "@cite_11" ], "mid": [ "2620363344", "2708742135", "2963245071", "2751013686" ], "abstract": [ "Control flow obfuscation techniques can be used to hinder software reverse-engineering. Symbolic analysis can counteract these techniques, but only if they can analyze obfuscated conditional statements. We evaluate the use of dynamic synthesis to complement symbolic analysis in the analysis of obfuscated conditionals. We test this approach on the taint-analysis-resistant Mixed Boolean Arithmetics (MBA) obfuscation method that is commonly used to obfuscate and randomly diversify statements. We experimentally ascertain the practical feasibility of MBA obfuscation. We study using SMT-based approaches with different state-of-the-art SMT solvers to counteract MBA obfuscation, and we show how targeted algebraic simplification can greatly reduce the analysis time. We show that synthesis-based deobfuscation is more effective than current SMT-based deobfuscation algorithms, thus proposing a synthesis-based attacker model to complement existing attacker models.", "Software deobfuscation is a crucial activity in security analysis and especially in malware analysis. While standard static and dynamic approaches suffer from well-known shortcomings, Dynamic Symbolic Execution (DSE) has recently been proposed as an interesting alternative, more robust than staticanalysis and more complete than dynamic analysis. Yet, DSE addresses only certain kinds of questions encountered by a reverser, namely feasibility questions. Many issues arising during reverse, e.g., detecting protection schemes such as opaque predicates, fall into the category of infeasibility questions. We present Backward-Bounded DSE, a generic, precise, efficient and robust method for solving infeasibility questions. We demonstrate the benefit of the method for opaque predicates and call stack tampering, and give some insight for its usage for some other protection schemes. Especially, the technique has successfully been used on state-of-the-art packers as well as on the government-grade X-Tunnel malware – allowing its entire deobfuscation. Backward-Bounded DSE does not supersede existing DSE approaches, but rather complements them by addressing infeasibility questions in a scalable and precise manner. Following this line, we proposesparse disassembly, a combination of Backward-Bounded DSE and static disassembly able to enlarge dynamic disassembly in a guaranteed way, hence getting the best of dynamic and static disassembly. This work paves the way for robust, efficient and precise disassembly tools for heavily-obfuscated binaries.", "In this paper we construct preimage attack on the truncated variant of the MD4 hash function. Specifically, we study the MD4-39 function defined by the first 39 steps of the MD4 algorithm. We suggest a new attack on MD4-39, which develops the ideas proposed by H. Dobbertin in 1998. Namely, the special relaxation constraints are introduced in order to simplify the equations corresponding to the problem of finding a preimage for an arbitrary MD4-39 hash value. The equations supplemented with the relaxation constraints are then reduced to the Boolean Satisfiability Problem (SAT) and solved using the state-of-the-art SAT solvers. We show that the effectiveness of a set of relaxation constraints can be evaluated using the black-box function of a special kind. 
Thus, we suggest automatic method of relaxation constraints generation by applying the black-box optimization to this function. The proposed method made it possible to find new relaxation constraints that contribute to a SAT-based preimage attack on MD4-39 which significantly outperforms the competition.", "Software obfuscation transforms code such that it is more difficult to reverse engineer. However, it is known that given enough resources, an attacker will successfully reverse engineer an obfuscated program. Therefore, an open challenge for software obfuscation is estimating the time an obfuscated program is able to withstand a given reverse engineering attack. This paper proposes a general framework for choosing the most relevant software features to estimate the effort of automated attacks. Our framework uses these software features to build regression models that can predict the resilience of different software protection transformations against automated attacks. To evaluate the effectiveness of our approach, we instantiate it in a case-study about predicting the time needed to deobfuscate a set of C programs, using an attack based on symbolic execution. To train regression models our system requires a large set of programs as input. We have therefore implemented a code generator that can generate large numbers of arbitrarily complex random C functions. Our results show that features such as the number of community structures in the graphrepresentation of symbolic path-constraints, are far more relevant for predicting deobfuscation time than other features generally used to measure the potency of controlflow obfuscation (e.g. cyclomatic complexity). Our best model is able to predict the number of seconds of symbolic execution-based deobfuscation attacks with over 90 accuracy for 80 of the programs in our dataset, which also includes several realistic hash functions." ] }
1908.01549
2964399413
Code obfuscation is a major tool for protecting software intellectual property from attacks such as reverse engineering or code tampering. Yet, recently proposed (automated) attacks based on Dynamic Symbolic Execution (DSE) show very promising results, hence threatening software integrity. Current defenses are not fully satisfactory: they are either inefficient against symbolic reasoning, degrade runtime performance too much, or are too easy to spot. We present and study a new class of anti-DSE protections, coined path-oriented protections, targeting the weakest spot of DSE, namely path exploration. We propose a lightweight, efficient, resistant and analytically proven class of obfuscation algorithms designed to hinder DSE-based attacks. Extensive evaluation demonstrates that these approaches critically counter symbolic deobfuscation while yielding only a very slight overhead.
Yadegari and Debray @cite_31 describe obfuscations thwarting standard byte-level taint analysis, possibly resulting in missed legitimate paths for DSE engines that rely on taint analysis ( does, and do not). This can be circumvented, in the case of taint-based DSE, by bit-level tainting @cite_31. combines this idea with input-dependent, trigger-based self-modifications. Here, the dynamic analysis part of DSE must be able to detect these input-dependent self-modifications. Solutions exist but must be carefully integrated @cite_46 @cite_29. @cite_16 propose an obfuscation based on mathematical conjectures in the vein of the Collatz conjecture. This transformation increases the number of (symbolic) paths through an input-dependent loop, while the conjecture (if true) ensures that the loop always converges to the same result. @cite_36 propose an anti-DSE technique based on encryption, proven to be highly effective, but it requires some form of secret sharing (the key) and thus falls outside the strict scope of the MATE attacks that we consider here. @cite_8 recently proposed an obfuscation based on covert channels (timing, etc.) to hide data flow within invisible states. Current tools do not handle this kind of protection correctly. However, the method ensures only probabilistic correctness and thus cannot be applied in every context.
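As an illustration of the Collatz-style path-explosion idea summarized above, here is a hedged Python sketch; it is not the exact construction of the cited work, and the function names and constants are assumptions for the example. The loop's iteration count depends on the input, so a DSE engine must enumerate a distinct path per iteration sequence, while the guarded check is concretely unchanged.

# Hedged sketch of a Collatz-style path-explosion obfuscation (illustrative only).
# The loop always exits with n == 1 (Collatz behaviour, verified for seeds this
# small), so the guard is semantically equivalent to the plain check, but each
# distinct iteration sequence is a separate symbolic path for DSE.

def check_plain(x: int) -> bool:
    return x == 42

def check_obfuscated(x: int) -> bool:
    n = (x & 0xFFFF) + 2          # positive, input-dependent seed
    while n != 1:                 # input-dependent trip count
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    # at loop exit n == 1, so this is equivalent to x == 42
    return x * n == 42

if __name__ == "__main__":
    for x in (0, 7, 42, 1000, 65535):
        assert check_plain(x) == check_obfuscated(x)
    print("obfuscated check agrees with the plain check on sampled inputs")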
{ "cite_N": [ "@cite_31", "@cite_8", "@cite_36", "@cite_29", "@cite_46", "@cite_16" ], "mid": [ "2077990181", "2211831180", "2152926062", "1991439166" ], "abstract": [ "Existing secret key extraction techniques use quantization to map wireless channel amplitudes to secret bits. This paper shows that such techniques are highly prone to environment and local noise effects: They have very high mismatch rates between the two nodes that measure the channel between them. This paper advocates using the shape of the channel instead of the size (or amplitude) of the channel. It shows that this new paradigm shift is significantly robust against environmental and local noises. We refer to this shape-based technique as Puzzle. Implementation in a software-defined radio (SDR) platform demonstrates that Puzzle has a 63 reduction in bit mismatch rate than the state-of-art frequency domain approach (CSI-2bit). Experiments also show that unlike the state-of-the-art received signal strength (RSS)-based methods like ASBG, Puzzle is robust against an attack in which an eavesdropper can predict the secret bits using planned movements.", "Suppose that Alice wishes to send messages to Bob through a communication channel C1, but her transmissions also reach an eavesdropper Eve through another channel C2. This is the wiretap channel model introduced by Wyner in 1975. The goal is to design a coding scheme that makes it possible for Alice to communicate both reliably and securely. Reliability is measured in terms of Bob's probability of error in recovering the message, while security is measured in terms of the mutual information between the message and Eve's observations. Wyner showed that the situation is characterized by a single constant Cs, called the secrecy capacity, which has the following meaning: for all e >; 0, there exist coding schemes of rate R ≥ Cs-e that asymptotically achieve the reliability and security objectives. However, his proof of this result is based upon a random-coding argument. To date, despite consider able research effort, the only case where we know how to construct coding schemes that achieve secrecy capacity is when Eve's channel C2 is an erasure channel, or a combinatorial variation thereof. Polar codes were recently invented by Arikan; they approach the capacity of symmetric binary-input discrete memoryless channels with low encoding and decoding complexity. In this paper, we use polar codes to construct a coding scheme that achieves the secrecy capacity for a wide range of wiretap channels. Our construction works for any instantiation of the wiretap channel model, as long as both C1 and C2 are symmetric and binary-input, and C2 is degraded with respect to C1. Moreover, we show how to modify our construction in order to provide strong security, in the sense defined by Maurer, while still operating at a rate that approaches the secrecy capacity. In this case, we cannot guarantee that the reliability condition will also be satisfied unless the main channel C1 is noiseless, although we believe it can be always satisfied in practice.", "We present a novel approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry's bootstrapping procedure. 
Specifically, we offer a choice of FHE schemes based on the learning with error (LWE) or ring-LWE (RLWE) problems that have 2λ security against known attacks. For RLWE, we have: • A leveled FHE scheme that can evaluate L-level arithmetic circuits with O(λ · L3) per-gate computation -- i.e., computation quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in L. This construction does not use the bootstrapping procedure. • A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is O(λ2), independent of L. Security is based on the hardness of RLWE for quasi-polynomial factors (as opposed to the sub-exponential factors needed in previous schemes). We obtain similar results to the above for LWE, but with worse performance. Based on the Ring LWE assumption, we introduce a number of further optimizations to our schemes. As an example, for circuits of large width -- e.g., where a constant fraction of levels have width at least λ -- we can reduce the per-gate computation of the bootstrapped version to O(λ), independent of L, by batching the bootstrapping operation. Previous FHE schemes all required Ω(λ3.5) computation per gate. At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011).", "We give a generic divide-and-conquer approach for constructing collusion-resistant probabilistic dynamic traitor tracing schemes with larger alphabets from schemes with smaller alphabets. This construction offers a linear tradeoff between the alphabet size and the codelength. In particular, we show that applying our results to the binary dynamic Tardos scheme of leads to schemes that are shorter by a factor equal to half the alphabet size. Asymptotically, these codelengths correspond, up to a constant factor, to the fingerprinting capacity for static probabilistic schemes. This gives a hierarchy of probabilistic dynamic traitor tracing schemes, and bridges the gap between the low bandwidth, high codelength scheme of and the high bandwidth, low codelength scheme of Fiat and Tassa." ] }
1908.01549
2964399413
Code obfuscation is a major tool for protecting software intellectual property from attacks such as reverse engineering or code tampering. Yet, recently proposed (automated) attacks based on Dynamic Symbolic Execution (DSE) show very promising results, hence threatening software integrity. Current defenses are not fully satisfactory: they are either inefficient against symbolic reasoning, degrade runtime performance too much, or are too easy to spot. We present and study a new class of anti-DSE protections, coined path-oriented protections, targeting the weakest spot of DSE, namely path exploration. We propose a lightweight, efficient, resistant and analytically proven class of obfuscation algorithms designed to hinder DSE-based attacks. Extensive evaluation demonstrates that these approaches critically counter symbolic deobfuscation while yielding only a very slight overhead.
@cite_36 lay the groundwork for the experimental evaluation of symbolic deobfuscation techniques. Our own experimental evaluation extends and refines their methodology in several ways: new metrics, different DSE settings, and larger examples. @cite_43 propose a mathematically proven obfuscation against Abstract Model Checking attacks.
{ "cite_N": [ "@cite_36", "@cite_43" ], "mid": [ "2620363344", "2560252021", "2010417554", "2751013686" ], "abstract": [ "Control flow obfuscation techniques can be used to hinder software reverse-engineering. Symbolic analysis can counteract these techniques, but only if they can analyze obfuscated conditional statements. We evaluate the use of dynamic synthesis to complement symbolic analysis in the analysis of obfuscated conditionals. We test this approach on the taint-analysis-resistant Mixed Boolean Arithmetics (MBA) obfuscation method that is commonly used to obfuscate and randomly diversify statements. We experimentally ascertain the practical feasibility of MBA obfuscation. We study using SMT-based approaches with different state-of-the-art SMT solvers to counteract MBA obfuscation, and we show how targeted algebraic simplification can greatly reduce the analysis time. We show that synthesis-based deobfuscation is more effective than current SMT-based deobfuscation algorithms, thus proposing a synthesis-based attacker model to complement existing attacker models.", "Code obfuscation is widely used by software developers to protect intellectual property, and malware writers to hamper program analysis. However, there seems to be little work on systematic evaluations of effectiveness of obfuscation techniques against automated program analysis. The result is that we have no methodical way of knowing what kinds of automated analyses an obfuscation method can withstand. This paper addresses the problem of characterizing the resilience of code obfuscation transformations against automated symbolic execution attacks, complementing existing works that measure the potency of obfuscation transformations against human-assisted attacks through user studies. We evaluated our approach over 5000 different C programs, which have each been obfuscated using existing implementations of obfuscation transformations. The results show that many existing obfuscation transformations, such as virtualization, stand little chance of withstanding symbolic-execution based deobfuscation. A crucial and perhaps surprising observation we make is that symbolic-execution based deobfuscators can easily deobfuscate transformations that preserve program semantics. On the other hand, we present new obfuscation transformations that change program behavior in subtle yet acceptable ways, and show that they can render symbolic-execution based deobfuscation analysis ineffective in practice.", "Symbolic and concolic execution nd important applications in a number of security-related program analyses, including analysis of malicious code. However, malicious code tend to very often be obfuscated, and current concolic analysis techniques have trouble dealing with some of these obfuscations, leading to imprecision and or excessive resource usage. This paper discusses three such obfuscations: two of these are already found in obfuscation tools used by malware, while the third is a simple variation on an existing obfuscation technique. We show empirically that existing symbolic analyses are not robust against such obfuscations, and propose ways in which the problems can be mitigated using a combination of ne-grained bit-level taint analysis and architecture-aware constraint generations. Experimental results indicate that our approach is eective in allowing symbolic and concolic execution to handle such obfuscations.", "Software obfuscation transforms code such that it is more difficult to reverse engineer. 
However, it is known that given enough resources, an attacker will successfully reverse engineer an obfuscated program. Therefore, an open challenge for software obfuscation is estimating the time an obfuscated program is able to withstand a given reverse engineering attack. This paper proposes a general framework for choosing the most relevant software features to estimate the effort of automated attacks. Our framework uses these software features to build regression models that can predict the resilience of different software protection transformations against automated attacks. To evaluate the effectiveness of our approach, we instantiate it in a case-study about predicting the time needed to deobfuscate a set of C programs, using an attack based on symbolic execution. To train regression models our system requires a large set of programs as input. We have therefore implemented a code generator that can generate large numbers of arbitrarily complex random C functions. Our results show that features such as the number of community structures in the graphrepresentation of symbolic path-constraints, are far more relevant for predicting deobfuscation time than other features generally used to measure the potency of controlflow obfuscation (e.g. cyclomatic complexity). Our best model is able to predict the number of seconds of symbolic execution-based deobfuscation attacks with over 90 accuracy for 80 of the programs in our dataset, which also includes several realistic hash functions." ] }
1908.01449
2966779809
Training deep neural networks typically requires large amounts of labeled data which may be scarce or expensive to obtain for a particular target domain. As an alternative, we can leverage webly-supervised data (i.e. results from a public search engine) which are relatively plentiful but may contain noisy results. In this work, we propose a novel two-stage approach to learn a video classifier using webly-supervised data. We argue that learning appearance features and then temporal features sequentially, rather than simultaneously, is an easier optimization for this task. We show this by first learning an image model from web images, which is used to initialize and train a video model. Our model applies domain adaptation to account for potential domain shift present between the source domain (webly-supervised data) and target domain and also accounts for noise by adding a novel attention component. We report results competitive with state-of-the-art for webly-supervised approaches on UCF-101 (while simplifying the training process) and also evaluate on Kinetics for comparison.
Previous work using webly-supervised data includes @cite_13 @cite_32 @cite_29 @cite_4. Gan et al. @cite_7 jointly match images and frames in a pre-processing step before using a classifier, while LeadExceed @cite_17 uses multiple steps to filter out noisy images and frames. In contrast, our model has no pre-processing steps and learns to downweight noisy images as part of model training.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_29", "@cite_32", "@cite_13", "@cite_17" ], "mid": [ "2952927437", "2577784528", "2883311563", "2892111734" ], "abstract": [ "We present an approach to effectively use millions of images with noisy annotations in conjunction with a small subset of cleanly-annotated images to learn powerful image representations. One common approach to combine clean and noisy data is to first pre-train a network using the large noisy dataset and then fine-tune with the clean dataset. We show this approach does not fully leverage the information contained in the clean set. Thus, we demonstrate how to use the clean annotations to reduce the noise in the large dataset before fine-tuning the network using both the clean set and the full set with reduced noise. The approach comprises a multi-task network that jointly learns to clean noisy annotations and to accurately classify images. We evaluate our approach on the recently released Open Images dataset, containing 9 million images, multiple annotations per image and over 6000 unique classes. For the small clean set of annotations we use a quarter of the validation set with 40k images. Our results demonstrate that the proposed approach clearly outperforms direct fine-tuning across all major categories of classes in the Open Image dataset. Further, our approach is particularly effective for a large number of classes with medium level of noise in annotations (20-80 false positive annotations).", "We present an approach to effectively use millions of images with noisy annotations in conjunction with a small subset of cleanly-annotated images to learn powerful image representations. One common approach to combine clean and noisy data is to first pre-train a network using the large noisy dataset and then fine-tune with the clean dataset. We show this approach does not fully leverage the information contained in the clean set. Thus, we demonstrate how to use the clean annotations to reduce the noise in the large dataset before fine-tuning the network using both the clean set and the full set with reduced noise. The approach comprises a multi-task network that jointly learns to clean noisy annotations and to accurately classify images. We evaluate our approach on the recently released Open Images dataset, containing 9 million images, multiple annotations per image and over 6000 unique classes. For the small clean set of annotations we use a quarter of the validation set with 40k images. Our results demonstrate that the proposed approach clearly outperforms direct fine-tuning across all major categories of classes in the Open Image dataset. Further, our approach is particularly effective for a large number of classes with wide range of noise in annotations (20-80 false positive annotations).", "Matching images and sentences demands a fine understanding of both modalities. In this paper, we propose a new system to discriminatively embed the image and text to a shared visual-textual space. In this field, most existing works apply the ranking loss to pull the positive image text pairs close and push the negative pairs apart from each other. However, directly deploying the ranking loss is hard for network learning, since it starts from the two heterogeneous features to build inter-modal relationship. To address this problem, we propose the instance loss which explicitly considers the intra-modal data distribution. It is based on an unsupervised assumption that each image text group can be viewed as a class. 
So the network can learn the fine granularity from every image text group. The experiment shows that the instance loss offers better weight initialization for the ranking loss, so that more discriminative embeddings can be learned. Besides, existing works usually apply the off-the-shelf features, i.e., word2vec and fixed visual feature. So in a minor contribution, this paper constructs an end-to-end dual-path convolutional network to learn the image and text representations. End-to-end learning allows the system to directly learn from the data and fully utilize the supervision. On two generic retrieval datasets (Flickr30k and MSCOCO), experiments demonstrate that our method yields competitive accuracy compared to state-of-the-art methods. Moreover, in language based person retrieval, we improve the state of the art by a large margin. The code has been made publicly available.", "We consider the single image super-resolution problem in a more general case that the low- high-resolution pairs and the down-sampling process are unavailable. Different from traditional super-resolution formulation, the low-resolution input is further degraded by noises and blurring. This complicated setting makes supervised learning and accurate kernel estimation impossible. To solve this problem, we resort to unsupervised learning without paired data, inspired by the recent successful image-to-image translation applications. With generative adversarial networks (GAN) as the basic component, we propose a Cycle-in-Cycle network structure to tackle the problem within three steps. First, the noisy and blurry input is mapped to a noise-free low-resolution space. Then the intermediate image is up-sampled with a pre-trained deep model. Finally, we fine-tune the two modules in an end-to-end manner to get the high-resolution output. Experiments on NTIRE2018 datasets demonstrate that the proposed unsupervised method achieves comparable results as the state-of-the-art supervised models." ] }
1908.01449
2966779809
Training deep neural networks typically requires large amounts of labeled data which may be scarce or expensive to obtain for a particular target domain. As an alternative, we can leverage webly-supervised data (i.e. results from a public search engine) which are relatively plentiful but may contain noisy results. In this work, we propose a novel two-stage approach to learn a video classifier using webly-supervised data. We argue that learning appearance features and then temporal features sequentially, rather than simultaneously, is an easier optimization for this task. We show this by first learning an image model from web images, which is used to initialize and train a video model. Our model applies domain adaptation to account for potential domain shift present between the source domain (webly-supervised data) and target domain and also accounts for noise by adding a novel attention component. We report results competitive with state-of-the-art for webly-supervised approaches on UCF-101 (while simplifying the training process) and also evaluate on Kinetics for comparison.
Li et al. @cite_21 use web images to perform domain adaptation and learn a video classifier, but they manually filter out irrelevant web images beforehand, whereas we incorporate this step into our model.
{ "cite_N": [ "@cite_21" ], "mid": [ "2122084318", "2107250100", "2031691729", "2363300041" ], "abstract": [ "Recent work has demonstrated the effectiveness of domain adaptation methods for computer vision applications. In this work, we propose a new multiple source domain adaptation method called Domain Selection Machine (DSM) for event recognition in consumer videos by leveraging a large number of loosely labeled web images from different sources (e.g., Flickr.com and Photosig.com), in which there are no labeled consumer videos. Specifically, we first train a set of SVM classifiers (referred to as source classifiers) by using the SIFT features of web images from different source domains. We propose a new parametric target decision function to effectively integrate the static SIFT features from web images video keyframes and the spacetime (ST) features from consumer videos. In order to select the most relevant source domains, we further introduce a new data-dependent regularizer into the objective of Support Vector Regression (SVR) using the ∊-insensitive loss, which enforces the target classifier shares similar decision values on the unlabeled consumer videos with the selected source classifiers. Moreover, we develop an alternating optimization algorithm to iteratively solve the target decision function and a domain selection vector which indicates the most relevant source domains. Extensive experiments on three real-world datasets demonstrate the effectiveness of our proposed method DSM over the state-of-the-art by a performance gain up to 46.41 .", "Most current image categorization methods require large collections of manually annotated training examples to learn accurate visual recognition models. The time-consuming human labeling effort effectively limits these approaches to recognition problems involving a small number of different object classes. In order to address this shortcoming, in recent years several authors have proposed to learn object classifiers from weakly-labeled Internet images, such as photos retrieved by keyword-based image search engines. While this strategy eliminates the need for human supervision, the recognition accuracies of these methods are considerably lower than those obtained with fully-supervised approaches, because of the noisy nature of the labels associated to Web data. In this paper we investigate and compare methods that learn image classifiers by combining very few manually annotated examples (e.g., 1-10 images per class) and a large number of weakly-labeled Web photos retrieved using keyword-based image search. We cast this as a domain adaptation problem: given a few strongly-labeled examples in a target domain (the manually annotated examples) and many source domain examples (the weakly-labeled Web photos), learn classifiers yielding small generalization error on the target domain. Our experiments demonstrate that, for the same number of strongly-labeled examples, our domain adaptation approach produces significant recognition rate improvements over the best published results (e.g., 65 better when using 5 labeled training examples per class) and that our classifiers are one order of magnitude faster to learn and to evaluate than the best competing method, despite our use of large weakly-labeled data sets.", "We study the use of domain adaptation and transfer learning techniques as part of a framework for adaptive object detection. 
Unlike recent applications of domain adaptation work in computer vision, which generally focus on image classification, we explore the problem of extreme class imbalance present when performing domain adaptation for object detection. The main difficulty caused by this imbalance is that test images contain millions or billions of negative image subwindows but just a few image subwindows containing positive instances, which makes it difficult to adapt to changes in the positive classes present new domains by simple techniques such as random sampling. We propose an initial approach to addressing this problem and apply our technique to vehicle detection in a challenging urban surveillance dataset, demonstrating the performance of our approach with various amounts of supervision, including the fully unsupervised case.", "One of the serious challenges in computer vision and image classification is learning an accurate classifier for a new unlabeled image dataset, considering that there is no available labeled training data. Transfer learning and domain adaptation are two outstanding solutions that tackle this challenge by employing available datasets, even with significant difference in distribution and properties, and transfer the knowledge from a related domain to the target domain. The main difference between these two solutions is their primary assumption about change in marginal and conditional distributions where transfer learning emphasizes on problems with same marginal distribution and different conditional distribution, and domain adaptation deals with opposite conditions. Most prior works have exploited these two learning strategies separately for domain shift problem where training and test sets are drawn from different distributions. In this paper, we exploit joint transfer learning and domain adaptation to cope with domain shift problem in which the distribution difference is significantly large, particularly vision datasets. We therefore put forward a novel transfer learning and domain adaptation approach, referred to as visual domain adaptation (VDA). Specifically, VDA reduces the joint marginal and conditional distributions across domains in an unsupervised manner where no label is available in test set. Moreover, VDA constructs condensed domain invariant clusters in the embedding representation to separate various classes alongside the domain transfer. In this work, we employ pseudo target labels refinement to iteratively converge to final solution. Employing an iterative procedure along with a novel optimization problem creates a robust and effective representation for adaptation across domains. Extensive experiments on 16 real vision datasets with different difficulties verify that VDA can significantly outperform state-of-the-art methods in image classification problem." ] }
1908.01449
2966779809
Training deep neural networks typically requires large amounts of labeled data which may be scarce or expensive to obtain for a particular target domain. As an alternative, we can leverage webly-supervised data (i.e. results from a public search engine) which are relatively plentiful but may contain noisy results. In this work, we propose a novel two-stage approach to learn a video classifier using webly-supervised data. We argue that learning appearance features and then temporal features sequentially, rather than simultaneously, is an easier optimization for this task. We show this by first learning an image model from web images, which is used to initialize and train a video model. Our model applies domain adaptation to account for potential domain shift present between the source domain (webly-supervised data) and target domain and also accounts for noise by adding a novel attention component. We report results competitive with state-of-the-art for webly-supervised approaches on UCF-101 (while simplifying the training process) and also evaluate on Kinetics for comparison.
There has also been related work on using attention for weakly-supervised learning. Zhuang et al. @cite_24 stack noisy web image features together, under the assumption that at least one of the images is correctly labeled, and then learn an attention model to focus on the correctly labeled images. UntrimmedNet @cite_4 generates clip proposals from untrimmed web videos and also incorporates an attention component to focus on the proposals containing the correct action. In contrast, our model learns from both images and videos and ties attention closely to domain adaptation.
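The shared mechanism behind these attention components can be sketched as attention-weighted pooling over a bag of noisy instances. The following minimal Python/numpy sketch is illustrative only; it is not the formulation of any cited model, and the function name attention_pool and the fixed scoring vector w are assumptions for the example.

# Minimal sketch of attention-weighted pooling over noisy instances
# (e.g., web images or clip proposals); illustrative only.
import numpy as np

def attention_pool(features: np.ndarray, w: np.ndarray) -> np.ndarray:
    # features: (n_instances, dim) bag of instance features
    # w: (dim,) scoring vector (learned in practice, fixed here for illustration)
    scores = features @ w                   # one relevance score per instance
    alphas = np.exp(scores - scores.max())  # numerically stable softmax
    alphas /= alphas.sum()
    return alphas @ features                # bag-level feature downweighting low-scoring instances

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bag = rng.normal(size=(8, 16))          # 8 noisy instances, 16-dim features
    w = rng.normal(size=16)
    print(attention_pool(bag, w).shape)     # (16,)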
{ "cite_N": [ "@cite_24", "@cite_4" ], "mid": [ "2951260882", "2503388974", "2776207810", "2410323755" ], "abstract": [ "We aim to model the top-down attention of a Convolutional Neural Network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. In experiments, we demonstrate the accuracy and generalizability of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images.", "We aim to model the top-down attention of a convolutional neural network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. We show a theoretic connection between the proposed contrastive attention formulation and the Class Activation Map computation. Efficient implementation of Excitation Backprop for common neural network layers is also presented. In experiments, we visualize the evidence of a model’s classification decision by computing the proposed top-down attention maps. For quantitative evaluation, we report the accuracy of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. Finally, we demonstrate applications of our method in model interpretation and data annotation assistance for facial expression analysis and medical imaging tasks.", "We aim to model the top-down attention of a convolutional neural network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. We show a theoretic connection between the proposed contrastive attention formulation and the Class Activation Map computation. Efficient implementation of Excitation Backprop for common neural network layers is also presented. In experiments, we visualize the evidence of a model’s classification decision by computing the proposed top-down attention maps. For quantitative evaluation, we report the accuracy of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. 
The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. Finally, we demonstrate applications of our method in model interpretation and data annotation assistance for facial expression analysis and medical imaging tasks.", "Attention mechanisms have recently been introduced in deep learning for various tasks in natural language processing and computer vision. But despite their popularity, the \"correctness\" of the implicitly-learned attention maps has only been assessed qualitatively by visualization of several examples. In this paper we focus on evaluating and improving the correctness of attention in neural image captioning models. Specifically, we propose a quantitative evaluation metric for the consistency between the generated attention maps and human annotations, using recently released datasets with alignment between regions in images and entities in captions. We then propose novel models with different levels of explicit supervision for learning attention maps during training. The supervision can be strong when alignment between regions and caption entities are available, or weak when only object segments and categories are provided. We show on the popular Flickr30k and COCO datasets that introducing supervision of attention maps during training solidly improves both attention correctness and caption quality, showing the promise of making machine perception more human-like." ] }
1908.01449
2966779809
Training deep neural networks typically requires large amounts of labeled data which may be scarce or expensive to obtain for a particular target domain. As an alternative, we can leverage webly-supervised data (i.e. results from a public search engine) which are relatively plentiful but may contain noisy results. In this work, we propose a novel two-stage approach to learn a video classifier using webly-supervised data. We argue that learning appearance features and then temporal features sequentially, rather than simultaneously, is an easier optimization for this task. We show this by first learning an image model from web images, which is used to initialize and train a video model. Our model applies domain adaptation to account for potential domain shift present between the source domain (webly-supervised data) and target domain and also accounts for noise by adding a novel attention component. We report results competitive with state-of-the-art for webly-supervised approaches on UCF-101 (while simplifying the training process) and also evaluate on Kinetics for comparison.
3D-CNN video models such as C3D @cite_27 , P3D @cite_3 , I3D @cite_9 , @cite_30 are appealing for video classification since they learn appearance and motion features jointly. I3D @cite_9 uses full 3D filters, while @cite_30 and P3D @cite_3 decompose the spatio-temporal convolution into a spatial convolution followed by a temporal convolution. The design of our 3D-CNN is partly inspired by these latter approaches because of this elegant decomposition, which allows us to reuse spatial filters from a conventional 2D CNN. We could potentially use the same bootstrapping technique to inflate 2D to 3D filters as in I3D @cite_9 , but initializing and fixing the 2D filters may allow for easier training (more detail in ).
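As an illustration of the decomposition mentioned above, the following is a minimal sketch (in PyTorch, assuming it is available) of a pseudo-3D block that replaces a full 3x3x3 convolution with a 1x3x3 spatial convolution followed by a 3x1x1 temporal convolution; the class name, channel sizes, and clip shape are illustrative and not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpatioTemporalConv(nn.Module):
    """Factorized 3D convolution: a 1x3x3 spatial convolution followed by a
    3x1x1 temporal convolution, instead of a full 3x3x3 kernel (P3D / (2+1)D style)."""
    def __init__(self, in_ch, out_ch, mid_ch=None):
        super().__init__()
        mid_ch = mid_ch or out_ch
        # Spatial stage: convolves over H and W only (temporal kernel size 1).
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1), bias=False)
        # Temporal stage: convolves over T only (spatial kernel size 1).
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0), bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):            # x: (batch, channels, T, H, W)
        return self.relu(self.bn(self.temporal(self.spatial(x))))

# Example: a batch of two 8-frame clips at 112x112 resolution.
clip = torch.randn(2, 3, 8, 112, 112)
block = SpatioTemporalConv(3, 64)
print(block(clip).shape)             # torch.Size([2, 64, 8, 112, 112])
```

Because the spatial stage is an ordinary 2D filter applied frame-wise, its weights could, under this factorization, be initialized from a pretrained image CNN and kept fixed while only the temporal stage is trained.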
{ "cite_N": [ "@cite_30", "@cite_9", "@cite_27", "@cite_3" ], "mid": [ "2761659801", "2963820951", "2883429621", "2772114784" ], "abstract": [ "Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating @math convolutions with @math convolutional filters on spatial domain (equivalent to 2D CNN) plus @math convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3 and 1.8 , respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques.", "Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating 3 x 3 x 3 convolutions with 1 × 3 × 3 convolutional filters on spatial domain (equivalent to 2D CNN) plus 3 × 1 × 1 convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3 and 1.8 , respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques.", "Despite the steady progress in video analysis led by the adoption of convolutional neural networks (CNNs), the relative improvement has been less drastic as that in 2D static image classification. 
Three main challenges exist including spatial (image) feature representation, temporal information representation, and model computation complexity. It was recently shown by Carreira and Zisserman that 3D CNNs, inflated from 2D networks and pretrained on ImageNet, could be a promising way for spatial and temporal representation learning. However, as for model computation complexity, 3D CNNs are much more expensive than 2D CNNs and prone to overfit. We seek a balance between speed and accuracy by building an effective and efficient video classification system through systematic exploration of critical network design choices. In particular, we show that it is possible to replace many of the 3D convolutions by low-cost 2D convolutions. Rather surprisingly, best result (in both speed and accuracy) is achieved when replacing the 3D convolutions at the bottom of the network, suggesting that temporal representation learning on high-level “semantic” features is more useful. Our conclusion generalizes to datasets with very different properties. When combined with several other cost-effective designs including separable spatial temporal convolution and feature gating, our system results in an effective video classification system that that produces very competitive results on several action classification benchmarks (Kinetics, Something-something, UCF101 and HMDB), as well as two action detection (localization) benchmarks (JHMDB and UCF101-24).", "In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significantly advantages in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block \"R(2+1)D\" which gives rise to CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101 and HMDB51." ] }
1908.01449
2966779809
Training deep neural networks typically requires large amounts of labeled data which may be scarce or expensive to obtain for a particular target domain. As an alternative, we can leverage webly-supervised data (i.e. results from a public search engine) which are relatively plentiful but may contain noisy results. In this work, we propose a novel two-stage approach to learn a video classifier using webly-supervised data. We argue that learning appearance features and then temporal features sequentially, rather than simultaneously, is an easier optimization for this task. We show this by first learning an image model from web images, which is used to initialize and train a video model. Our model applies domain adaptation to account for potential domain shift present between the source domain (webly-supervised data) and target domain and also accounts for noise by adding a novel attention component. We report results competitive with state-of-the-art for webly-supervised approaches on UCF-101 (while simplifying the training process) and also evaluate on Kinetics for comparison.
There has been much work in adapting GANs @cite_11 for domain adaptation. Models such as PixelDA @cite_1 learn to generate realistic-looking samples from the source distribution, while others such as DANN @cite_5 learn a domain-invariant feature representation. We adopt this latter approach in our work. Other related works include Adversarial Discriminative Domain Adaptation (ADDA) @cite_12 , which learns a piecewise model by pre-training a classifier on the source domain and then adding the adversarial component later. Tzeng et al. @cite_26 learn domain invariance by incorporating a domain confusion loss (similar to a discriminator loss) and transferring class correlations between domains to preserve class-specific information. Luo et al. @cite_28 propose a model similar to ours but for the supervised setting, and add a semantic-transfer loss to encourage the transfer of class-specific information.
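For concreteness, here is a minimal sketch of the gradient-reversal mechanism commonly used to learn domain-invariant features in DANN-style training, written in PyTorch; the layer sizes and head names are assumptions for illustration, not the cited implementations.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the
    backward pass, so the shared features are trained to fool the domain classifier."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. x is reversed and scaled; lam gets no gradient.
        return -ctx.lam * grad_output, None

features = nn.Sequential(nn.Linear(512, 256), nn.ReLU())   # shared feature extractor
label_clf = nn.Linear(256, 101)                            # task head (e.g., 101 actions)
domain_clf = nn.Linear(256, 2)                             # source (web) vs. target head

x = torch.randn(8, 512)
f = features(x)
class_logits = label_clf(f)                                # ordinary classification branch
domain_logits = domain_clf(GradReverse.apply(f, 1.0))      # adversarial branch
```

Minimizing the domain classifier's loss through the reversed gradient pushes the shared features toward being indistinguishable across domains, which is the domain-invariance idea referred to above.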
{ "cite_N": [ "@cite_26", "@cite_28", "@cite_1", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "2962970380", "2949212125", "2593768305", "2767382337" ], "abstract": [ "Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable. A recent approach for finding a common representation of the two domains is via domain adversarial training (Ganin & Lempitsky, 2015), which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space. However, domain adversarial training faces two critical limitations: 1) if the feature extraction function has high-capacity, then feature distribution matching is a weak constraint, 2) in non-conservative domain adaptation (where no single classifier can perform well in both the source and target domains), training the model to do well on the source domain hurts performance on the target domain. In this paper, we address these issues through the lens of the cluster assumption, i.e., decision boundaries should not cross high-density data regions. We propose two novel and related models: 1) the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes the violation the cluster assumption; 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation. Extensive empirical results demonstrate that the combination of these two models significantly improve the state-of-the-art performance on several visual domain adaptation benchmarks.", "Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images often fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that attempt to map representations between the two domains or learn to extract features that are domain-invariant. In this work, we present a new approach that learns, in an unsupervised manner, a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.", "Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. 
On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.", "We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning." ] }
1908.01449
2966779809
Training deep neural networks typically requires large amounts of labeled data which may be scarce or expensive to obtain for a particular target domain. As an alternative, we can leverage webly-supervised data (i.e. results from a public search engine) which are relatively plentiful but may contain noisy results. In this work, we propose a novel two-stage approach to learn a video classifier using webly-supervised data. We argue that learning appearance features and then temporal features sequentially, rather than simultaneously, is an easier optimization for this task. We show this by first learning an image model from web images, which is used to initialize and train a video model. Our model applies domain adaptation to account for potential domain shift present between the source domain (webly-supervised data) and target domain and also accounts for noise by adding a novel attention component. We report results competitive with state-of-the-art for webly-supervised approaches on UCF-101 (while simplifying the training process) and also evaluate on Kinetics for comparison.
The main difference between our model and these approaches is that we use webly-supervised data and assume the source and target domains may contain noisy labels, which is a considerably more difficult yet practical scenario. Lastly, there is recent work by Zhang et al. @cite_22 that is similar to our model in that they also have a domain-adversarial component and perform instance weighting to account for noise in the source data. However, they use a dual-discriminator approach for instance weighting, whereas we use an attention-based component. In addition, our model is designed specifically for image-to-video domain adaptation and classification.
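Since the attention component is only described at a high level here, the sketch below is a hypothetical illustration (not the paper's model) of how a per-sample attention weight could down-weight likely-noisy webly-supervised samples in the classification loss; all names and sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionWeights(nn.Module):
    """Hypothetical scorer mapping each sample's features to a weight in (0, 1),
    used to down-weight samples that are likely to carry noisy web labels."""
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, feats):                    # feats: (batch, feat_dim)
        return torch.sigmoid(self.score(feats)).squeeze(1)

feats = torch.randn(8, 256)
logits = torch.randn(8, 101)
labels = torch.randint(0, 101, (8,))

attn = AttentionWeights(256)
w = attn(feats)                                          # per-sample weights, shape (8,)
per_sample = F.cross_entropy(logits, labels, reduction='none')
loss = (w * per_sample).sum() / w.sum().clamp(min=1e-6)  # attention-weighted loss
```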
{ "cite_N": [ "@cite_22" ], "mid": [ "2240559667", "2767382337", "2964139811", "2786081970" ], "abstract": [ "In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the art methods. The MATLAB code of our method will be publicly available at http: www.yongxu.org lunwen.html .", "We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.", "We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. 
Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.", "While domain adaptation has been actively researched in recent years, most theoretical results and algorithms focus on the single-source-single-target adaptation setting. Naive application of such algorithms on multiple source domain adaptation problem may lead to suboptimal solutions. We propose a new generalization bound for domain adaptation when there are multiple source domains with labeled instances and one target domain with unlabeled instances. Compared with existing bounds, the new bound does not require expert knowledge about the target distribution, nor the optimal combination rule for multisource domains. Interestingly, our theory also leads to an efficient learning strategy using adversarial neural networks: we show how to interpret it as learning feature representations that are invariant to the multiple domain shifts while still being discriminative for the learning task. To this end, we propose two models, both of which we call multisource domain adversarial networks (MDANs): the first model optimizes directly our bound, while the second model is a smoothed approximation of the first one, leading to a more data-efficient and task-adaptive model. The optimization tasks of both models are minimax saddle point problems that can be optimized by adversarial training. To demonstrate the effectiveness of MDANs, we conduct extensive experiments showing superior adaptation performance on three real-world datasets: sentiment analysis, digit classification, and vehicle counting." ] }
1908.01441
2965360919
A partial edge drawing (PED) of a graph is a variation of a node-link diagram. PED draws a link, which is a partial visual representation of an edge, and reduces visual clutter of the node-link diagram. However, more time is required to read a PED to infer undrawn parts. The authors propose a morphing edge drawing (MED), which is a PED that changes with time. In MED, links morph between partial and complete drawings; thus, a reduced load for estimation of undrawn parts in a PED is expected. Herein, a formalization of MED is shown based on a formalization of PED. Then, requirements for the scheduling of morphing are specified. The requirements inhibit morphing from crossing and shorten the overall time for morphing the edges. Moreover, an algorithm for a scheduling method implemented by the authors is illustrated and the effectiveness of PED from a reading time viewpoint is shown through an experimental evaluation.
During the development of a tool called SeeNet @cite_13 , a drawing concept was conceived in which only half of each link is drawn to reduce the visual clutter of the node-link diagram. Parallel Tagcloud @cite_11 adopts a method similar to PED. Although Parallel Tagcloud is an extension of Tag Cloud, it can be regarded as a hierarchical layout of a directed graph; thus, it is useful against the visual clutter caused by link crossings. Drawing of intersections is avoided by representing links as straight lines whose middle parts are not drawn.
{ "cite_N": [ "@cite_13", "@cite_11" ], "mid": [ "2168788994", "1968305696", "2107034998", "105197999" ], "abstract": [ "We investigate the readability of node-link diagrams for directed graphs when using partially drawn links instead of showing each link explicitly in its full length. Providing the complete link information between related nodes in a graph can lead to visual clutter caused by many edge crossings. To reduce visual clutter, we draw only partial links. Then, the question arises if such diagrams are still readable, understandable, and interpretable. As a step toward answering this question, we conducted a controlled user experiment with 42 participants to uncover differences in accuracy and completion time for three different tasks: identifying the existence of a direct link, the existence of an indirect connection with one intermediate node, and the node with the largest number of outgoing edges. Furthermore, we compared tapered and traditional edge representations, three different graph sizes, and six different link lengths. In all configurations, the nodes of the graph were placed according to the force-directed layout by Fruchterman and Reingold. One result of this study is that the characteristics of completion times and error rates depend on the type of task. A general observation is that partially drawn links can lead to shorter task completion times, which occurs for nearly all graph sizes, tasks, and both tapered and traditional edge representations. In contrast, there is a tendency toward higher error rates for shorter links, which in fact is task-dependent.", "We present the H3 layout technique for drawing large directed graphs as node-link diagrams in 3D hyperbolic space. We can lay out much larger structures than can be handled using traditional techniques for drawing general graphs because we assume a hierarchical nature of the data. We impose a hierarchy on the graph by using domain-specific knowledge to find an appropriate spanning tree. Links which are not part of the spanning tree do not influence the layout but can be selectively drawn by user request. The volume of hyperbolic 3-space increases exponentially, as opposed to the familiar geometric increase of euclidean 3-space. We exploit this exponential amount of room by computing the layout according to the hyperbolic metric. We optimize the cone tree layout algorithm for 3D hyperbolic space by placing children on a hemisphere around the cone mouth instead of on its perimeter. Hyperbolic navigation affords a Focus+Context view of the structure with minimal visual clutter. We have successfully laid out hierarchies of over 20,000 nodes. Our implementation accommodates navigation through graphs too large to be rendered interactively by allowing the user to explicitly prune or expand subtrees.", "Drawing graphs as nodes connected by links is visually compelling but computationally difficult. Hyperbolic space and spanning trees can reduce visual clutter, speed up layout, and provide fluid interaction. This article briefly describes a software system that explicitly attempts to handle much larger graphs than previous systems and support dynamic exploration rather than final presentation. It then discusses the applicability of this system to goals beyond simple exploration. A software system that supports graph exploration should include both a layout and an interactive drawing component. I have developed new algorithms for both layout and drawing (H3 and H3Viewer). 
The H3Viewer drawing algorithm remains under development, so this article presents preliminary results. I have implemented a software library that uses these algorithms. It can handle graphs of more than 100,000 edges by using a spanning tree as the backbone for the layout and drawing algorithms.", "With the emergence of social tagging systems and the possibility for users to extensively annotate web resources and any content enormous amounts of unordered information and user generated metadata circulate the Web. Accordingly a viable visualisation form needs to integrate this unclassified content into meaningful visual representations. We argue that tag clouds can make the grade. We assume that the application of clustering techniques for arranging tags can be a useful method to generate meaningful units within a tag cloud. We think that clustered tag clouds can potentially help to enhance user performance. In this paper we present a description of tag clouds including a theoretical discourse on the strengths and weaknesses of using them in common Web-based contexts. Further recent methods of semantic clustering for visualizing tag clouds are reviewed. Findings from user studies that investigated the visual perception of differently arranged depictions of tags follow. The main objective consists in the exploration of characteristical aspects in perceptual phenomenons and cognitive processes during the interaction with a tag cloud. This clears the way for useful implications on the constitution and design factors of that visualisation form. Finally a new approach is proposed in order to further develop on this concept." ] }
1908.01441
2965360919
A partial edge drawing (PED) of a graph is a variation of a node-link diagram. PED draws a link, which is a partial visual representation of an edge, and reduces visual clutter of the node-link diagram. However, more time is required to read a PED to infer undrawn parts. The authors propose a morphing edge drawing (MED), which is a PED that changes with time. In MED, links morph between partial and complete drawings; thus, a reduced load for estimation of undrawn parts in a PED is expected. Herein, a formalization of MED is shown based on a formalization of PED. Then, requirements for the scheduling of morphing are specified. The requirements inhibit morphing from crossing and shorten the overall time for morphing the edges. Moreover, an algorithm for a scheduling method implemented by the authors is illustrated and the effectiveness of PED from a reading time viewpoint is shown through an experimental evaluation.
PED was formalized in @cite_6 ; our formalization of PED in this paper is a modified version of theirs. They added the continuity of the omitted part of each link as a condition, and we have incorporated this into the formalization herein. Moreover, although they focused on layouts without stub crossings in PED, this study allows stub crossings. PED has also been applied to directed graphs using tapered links @cite_1 and to weighted graphs by representing edge weights with edge colors @cite_9 .
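As a small worked example of the stub geometry underlying a symmetric homogeneous PED (δ-SHPED), the following sketch computes, for a straight-line edge, the two stubs that each cover a fraction δ of the edge from its endpoints (with δ = 1/4, half of the edge's ink is removed); the function name is illustrative.

```python
import numpy as np

def shped_stubs(p, q, delta=0.25):
    """Return the two stubs of edge (p, q) in a symmetric homogeneous PED:
    each stub starts at an endpoint and covers a fraction `delta` of the edge,
    so the middle 1 - 2*delta portion is left undrawn."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    stub_p = (p, p + delta * (q - p))   # stub attached to endpoint p
    stub_q = (q, q + delta * (p - q))   # stub attached to endpoint q
    return stub_p, stub_q

# Example: a unit-length horizontal edge with delta = 1/4.
(p0, p1), (q0, q1) = shped_stubs((0.0, 0.0), (1.0, 0.0))
print(p1, q1)   # [0.25 0.  ] [0.75 0.  ]
```

Only the middle 1 - 2δ portion of each edge is left undrawn, which is exactly the part a reader of a PED has to infer.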
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_6" ], "mid": [ "160524401", "2567569351", "2168788994", "2106268337" ], "abstract": [ "One of the main principles for the effective visualization of graphs is the avoidance of edge crossings. Around this problem, very active research has been performed with works ranging from combinatorics, to algorithmics, visualization effects, to psychological user studies. Recently, the pragmatic approach has been proposed to avoid crossings by drawing the edges only partially. Unfortunately, no formal model and efficient algorithms have been formulated to this end. We introduce the concept for drawings of graphs with partially drawn edges (PED). Therefore we consider graphs with and without given embedding and characterize PEDs with concepts like symmetry and homogeneity. For graphs without embedding we formulate a sufficient condition to guarantee a symmetric homogeneous PED, and identify a nontrivial graph class which has a symmetric homogeneous PED. For graphs with given layout we consider the variants of maximizing the shortest partially drawn edge and the total length respectively.", "Partial Edge Drawing (PED) is a popular graph drawing style aimed at reducing edge crossings and visual clutter. PEDs are straight-line drawings where the central part of each edge is erased, and the length of the two remaining segments are computed so as to preserve useful geometric information. Recent studies on this approach focus on symmetric and δ-homogeneous PEDs (δ-SHPEDs). Given a straight-line drawing, a δ-SHPED of this drawing is immediately defined for a fixed value of δ. In particular, some edge crossings may not be avoidable, although the amount of ink removed from the original drawing might be large (e.g., 50 when δ = 1 4). On the other hand, it is possible to maximize the ink and remove edge crossings by renouncing to homogeneity. We present heuristics to produce symmetric PEDs that are either crossing-free or where crossings forming large angles are allowed. We also describe a user study in which PEDs obtained via our heuristics are compared with the standard model 1 4-SHPED. Our results suggest that the benefit of homogeneity overcomes in terms of readability the benefit of fewer crossings and more ink.", "We investigate the readability of node-link diagrams for directed graphs when using partially drawn links instead of showing each link explicitly in its full length. Providing the complete link information between related nodes in a graph can lead to visual clutter caused by many edge crossings. To reduce visual clutter, we draw only partial links. Then, the question arises if such diagrams are still readable, understandable, and interpretable. As a step toward answering this question, we conducted a controlled user experiment with 42 participants to uncover differences in accuracy and completion time for three different tasks: identifying the existence of a direct link, the existence of an indirect connection with one intermediate node, and the node with the largest number of outgoing edges. Furthermore, we compared tapered and traditional edge representations, three different graph sizes, and six different link lengths. In all configurations, the nodes of the graph were placed according to the force-directed layout by Fruchterman and Reingold. One result of this study is that the characteristics of completion times and error rates depend on the type of task. 
A general observation is that partially drawn links can lead to shorter task completion times, which occurs for nearly all graph sizes, tasks, and both tapered and traditional edge representations. In contrast, there is a tendency toward higher error rates for shorter links, which in fact is task-dependent.", "We present a novel dynamic graph visualization technique based on node-link diagrams. The graphs are drawn side-byside from left to right as a sequence of narrow stripes that are placed perpendicular to the horizontal time line. The hierarchically organized vertices of the graphs are arranged on vertical, parallel lines that bound the stripes; directed edges connect these vertices from left to right. To address massive overplotting of edges in huge graphs, we employ a splatting approach that transforms the edges to a pixel-based scalar field. This field represents the edge densities in a scalable way and is depicted by non-linear color mapping. The visualization method is complemented by interaction techniques that support data exploration by aggregation, filtering, brushing, and selective data zooming. Furthermore, we formalize graph patterns so that they can be interactively highlighted on demand. A case study on software releases explores the evolution of call graphs extracted from the JUnit open source software project. In a second application, we demonstrate the scalability of our approach by applying it to a bibliography dataset containing more than 1.5 million paper titles from 60 years of research history producing a vast amount of relations between title words." ] }
1908.01441
2965360919
A partial edge drawing (PED) of a graph is a variation of a node-link diagram. PED draws a link, which is a partial visual representation of an edge, and reduces visual clutter of the node-link diagram. However, more time is required to read a PED to infer undrawn parts. The authors propose a morphing edge drawing (MED), which is a PED that changes with time. In MED, links morph between partial and complete drawings; thus, a reduced load for estimation of undrawn parts in a PED is expected. Herein, a formalization of MED is shown based on a formalization of PED. Then, requirements for the scheduling of morphing are specified. The requirements inhibit morphing from crossing and shorten the overall time for morphing the edges. Moreover, an algorithm for a scheduling method implemented by the authors is illustrated and the effectiveness of PED from a reading time viewpoint is shown through an experimental evaluation.
A comparison between CED (called the traditional straight-line model, TRA, in that study) and 1/4-SHPED with respect to graph reading performance was performed in @cite_0 . Although statistical significance was not shown, the chart visualizing their experimental results suggests that, in graph reading tasks (checking the adjacency of two nodes or searching for adjacent nodes), 1/4-SHPED is slightly more accurate than CED; however, the response time of the latter is longer. A more detailed evaluation in @cite_5 revealed that, among SPED variants, SHPED yields high accuracy in reading graphs. Burch examined the effect of stub orientation and length on graph reading accuracy @cite_10 .
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_10" ], "mid": [ "2168788994", "2116856566", "2005098805", "2567569351" ], "abstract": [ "We investigate the readability of node-link diagrams for directed graphs when using partially drawn links instead of showing each link explicitly in its full length. Providing the complete link information between related nodes in a graph can lead to visual clutter caused by many edge crossings. To reduce visual clutter, we draw only partial links. Then, the question arises if such diagrams are still readable, understandable, and interpretable. As a step toward answering this question, we conducted a controlled user experiment with 42 participants to uncover differences in accuracy and completion time for three different tasks: identifying the existence of a direct link, the existence of an indirect connection with one intermediate node, and the node with the largest number of outgoing edges. Furthermore, we compared tapered and traditional edge representations, three different graph sizes, and six different link lengths. In all configurations, the nodes of the graph were placed according to the force-directed layout by Fruchterman and Reingold. One result of this study is that the characteristics of completion times and error rates depend on the type of task. A general observation is that partially drawn links can lead to shorter task completion times, which occurs for nearly all graph sizes, tasks, and both tapered and traditional edge representations. In contrast, there is a tendency toward higher error rates for shorter links, which in fact is task-dependent.", "In this paper, we present the results of a human-computer interaction experiment that compared the performance of the animation of dynamic graphs to the presentation of small multiples and the effect that mental map preservation had on the two conditions. Questions used in the experiment were selected to test both local and global properties of graph evolution over time. The data sets used in this experiment were derived from standard benchmark data sets of the information visualization community. We found that small multiples gave significantly faster performance than animation overall and for each of our five graph comprehension tasks. In addition, small multiples had significantly more errors than animation for the tasks of determining sets of nodes or edges added to the graph during the same timeslice, although a positive time-error correlation coefficient suggests that, in this case, faster responses did not lead to more errors. This result suggests that, for these two tasks, animation is preferable if accuracy is more important than speed. Preserving the mental map under either the animation or the small multiples condition had little influence in terms of error rate and response time.", "Many recommendation and retrieval tasks can be represented as proximity queries on a labeled directed graph, with typed nodes representing documents, terms, and metadata, and labeled edges representing the relationships between them. Recent work has shown that the accuracy of the widely-used random-walk-based proximity measures can be improved by supervised learning - in particular, one especially effective learning technique is based on Path-Constrained Random Walks (PCRW), in which similarity is defined by a learned combination of constrained random walkers, each constrained to follow only a particular sequence of edge labels away from the query nodes. 
The PCRW based method significantly outperformed unsupervised random walk based queries, and models with learned edge weights. Unfortunately, PCRW query systems are expensive to evaluate. In this study we evaluate the use of approximations to the computation of the PCRW distributions, including fingerprinting, particle filtering, and truncation strategies. In experiments on several recommendation and retrieval problems using two large scientific publications corpora we show speedups of factors of 2 to 100 with little loss in accuracy.", "Partial Edge Drawing (PED) is a popular graph drawing style aimed at reducing edge crossings and visual clutter. PEDs are straight-line drawings where the central part of each edge is erased, and the length of the two remaining segments are computed so as to preserve useful geometric information. Recent studies on this approach focus on symmetric and δ-homogeneous PEDs (δ-SHPEDs). Given a straight-line drawing, a δ-SHPED of this drawing is immediately defined for a fixed value of δ. In particular, some edge crossings may not be avoidable, although the amount of ink removed from the original drawing might be large (e.g., 50 when δ = 1 4). On the other hand, it is possible to maximize the ink and remove edge crossings by renouncing to homogeneity. We present heuristics to produce symmetric PEDs that are either crossing-free or where crossings forming large angles are allowed. We also describe a user study in which PEDs obtained via our heuristics are compared with the standard model 1 4-SHPED. Our results suggest that the benefit of homogeneity overcomes in terms of readability the benefit of fewer crossings and more ink." ] }
1908.01441
2965360919
A partial edge drawing (PED) of a graph is a variation of a node-link diagram. PED draws a link, which is a partial visual representation of an edge, and reduces visual clutter of the node-link diagram. However, more time is required to read a PED to infer undrawn parts. The authors propose a morphing edge drawing (MED), which is a PED that changes with time. In MED, links morph between partial and complete drawings; thus, a reduced load for estimation of undrawn parts in a PED is expected. Herein, a formalization of MED is shown based on a formalization of PED. Then, requirements for the scheduling of morphing are specified. The requirements inhibit morphing from crossing and shorten the overall time for morphing the edges. Moreover, an algorithm for a scheduling method implemented by the authors is illustrated and the effectiveness of PED from a reading time viewpoint is shown through an experimental evaluation.
To facilitate grasping high-dimensional transitions in state transition diagrams, the authors of @cite_4 avoided arrows and instead proposed moving a dashed stroke pattern along the links with animation. The recognition accuracy of graphs under various edge drawing methods, such as tapered links and curved links, as well as animation, was compared in @cite_7 ; animated representations were shown to yield high recognition accuracy. An attempt to extend the design space using animation of edge textures was made in @cite_12 . The proposal herein can also be considered an application of animation to graph drawing, especially to the drawing of edges; however, its purpose is not to express edge orientation but to improve the accuracy and efficiency of reading graphs.
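The scheduling requirements and algorithm for MED are formalized later in the paper; purely as a rough illustration of the idea of morphing a link between its partial and complete drawing, the sketch below oscillates the drawn fraction of each edge half between δ and 1/2 over time. The triangle-wave schedule and period are assumptions for illustration only, not the authors' scheduling method.

```python
def stub_ratio(t, period=2.0, delta=0.25):
    """Illustrative morphing schedule (not the paper's algorithm): the drawn
    fraction of each edge half oscillates between `delta` (partial drawing)
    and 0.5 (the two stubs meet, i.e., the edge is drawn completely)."""
    phase = (t % period) / period            # position within the cycle, in [0, 1)
    tri = 1.0 - abs(2.0 * phase - 1.0)       # triangle wave rising 0 -> 1 -> 0
    return delta + (0.5 - delta) * tri

for t in (0.0, 0.5, 1.0, 1.5):
    print(t, round(stub_ratio(t), 3))        # 0.25, 0.375, 0.5, 0.375
```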
{ "cite_N": [ "@cite_4", "@cite_12", "@cite_7" ], "mid": [ "2108611942", "2116856566", "2168788994", "1970022459" ], "abstract": [ "This paper presents a novel recognition framework which is based on matching shock graphs of 2D shape outlines, where the distance between two shapes is defined to be the cost of the least action path deforming one shape to another. Three key ideas render the implementation of this framework practical. First, the shape space is partitioned by defining an equivalence class on shapes, where two shapes with the same shock graph topology are considered to be equivalent. Second, the space of deformations is discretized by defining all deformations with the same sequence of shock graph transitions as equivalent. Shock transitions are points along the deformation where the shock graph topology changes. Third, we employ a graph edit distance algorithm that searches in the space of all possible transition sequences and finds the globally optimal sequence in polynomial time. The effectiveness of the proposed technique in the presence of a variety of visual transformations including occlusion, articulation and deformation of parts, shadow and highlights, viewpoint variation, and boundary perturbations is demonstrated. Indexing into two separate databases of roughly 100 shapes results in accuracy for top three matches and for the next three matches.", "In this paper, we present the results of a human-computer interaction experiment that compared the performance of the animation of dynamic graphs to the presentation of small multiples and the effect that mental map preservation had on the two conditions. Questions used in the experiment were selected to test both local and global properties of graph evolution over time. The data sets used in this experiment were derived from standard benchmark data sets of the information visualization community. We found that small multiples gave significantly faster performance than animation overall and for each of our five graph comprehension tasks. In addition, small multiples had significantly more errors than animation for the tasks of determining sets of nodes or edges added to the graph during the same timeslice, although a positive time-error correlation coefficient suggests that, in this case, faster responses did not lead to more errors. This result suggests that, for these two tasks, animation is preferable if accuracy is more important than speed. Preserving the mental map under either the animation or the small multiples condition had little influence in terms of error rate and response time.", "We investigate the readability of node-link diagrams for directed graphs when using partially drawn links instead of showing each link explicitly in its full length. Providing the complete link information between related nodes in a graph can lead to visual clutter caused by many edge crossings. To reduce visual clutter, we draw only partial links. Then, the question arises if such diagrams are still readable, understandable, and interpretable. As a step toward answering this question, we conducted a controlled user experiment with 42 participants to uncover differences in accuracy and completion time for three different tasks: identifying the existence of a direct link, the existence of an indirect connection with one intermediate node, and the node with the largest number of outgoing edges. Furthermore, we compared tapered and traditional edge representations, three different graph sizes, and six different link lengths. 
In all configurations, the nodes of the graph were placed according to the force-directed layout by Fruchterman and Reingold. One result of this study is that the characteristics of completion times and error rates depend on the type of task. A general observation is that partially drawn links can lead to shorter task completion times, which occurs for nearly all graph sizes, tasks, and both tapered and traditional edge representations. In contrast, there is a tendency toward higher error rates for shorter links, which in fact is task-dependent.", "It has been known for some time that larger graphs can be interpreted if laid out in 3D and displayed with stereo and or motion depth cues to support spatial perception. However, prior studies were carried out using displays that provided a level of detail far short of what the human visual system is capable of resolving. Therefore, we undertook a graph comprehension study using a very high resolution stereoscopic display. In our first experiment, we examined the effect of stereoscopic display, kinetic depth, and using 3D tubes versus lines to display the links. The results showed a much greater benefit for 3D viewing than previous studies. For example, with both motion and stereoscopic depth cues, unskilled observers could see paths between nodes in 333 node graphs with less than a 10p error rate. Skilled observers could see up to a 1000-node graph with less than a 10p error rate. This represented an order of magnitude increase over 2D display. In our second experiment, we varied both nodes and links to understand the constraints on the number of links and the size of graph that can be reliably traced. We found the difference between number of links and number of nodes to best account for error rates and suggest that this is evidence for a “perceptual phase transition.” These findings are discussed in terms of their implications for information display." ] }
1908.01379
2964814808
Depth acquisition, based on active illumination, is essential for autonomous and robotic navigation. LiDARs (Light Detection And Ranging) with mechanical, fixed, sampling templates are commonly used in today's autonomous vehicles. An emerging technology, based on solid-state depth sensors, with no mechanical parts, allows fast, adaptive, programmable scans. In this paper, we investigate the topic of adaptive, image-driven, sampling and reconstruction strategies. First, we formulate a piece-wise linear depth model with several tolerance parameters and estimate its validity for indoor and outdoor scenes. Our model and experiments predict that, in the optimal case, about 20-60 piece-wise linear structures can approximate well a depth map. This translates to a depth-to-image sampling ratio of about 1 1200. We propose a simple, generic, sampling and reconstruction algorithm, based on super-pixels. We reach a sampling rate which is still far from the optimal case. However, our sampling improves grid and random sampling, consistently, for a wide variety of reconstruction methods. Moreover, our proposed reconstruction achieves state-of-the-art results, compared to image-guided depth completion algorithms, reducing the required sampling rate by a factor of 3-4. A single-pixel depth camera built in our lab illustrates the concept.
Among the unguided methods, some use classical approaches @cite_27 @cite_33 , while others rely on more advanced tools such as deep learning @cite_13 @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_27", "@cite_13", "@cite_33" ], "mid": [ "35390552", "2951159816", "2045254372", "1527783480" ], "abstract": [ "This paper investigates, several methods for coping with inconsistency caused by multiple source information by introducing suitable consequence relations capable of inferring non trivial conclusions from an inconsistent stratified knowledge base. Some of these methods presuppose a revision step, namely a selection of one or several consistent subsets of formulas, and then classical inference is used for inferring from these subsets. Two alternative methods that do not require any revision step are studied: inference based on arguments and a new approach called safely supported inference, where inconsistency is kept local. These two last methods look suitable when the inconsistency is due to the presence of several sources of information. The paper offers a comparative study of the various inference modes under inconsistency.", "We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of a similarity matching to a set of previously encountered objects. Finally for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.", "We use logical inference techniques for recognising textual entailment. As the performance of theorem proving turns out to be highly dependent on not readily available background knowledge, we incorporate model building, a technique borrowed from automated reasoning, and show that it is a useful robust method to approximate entailment. Finally, we use machine learning to combine these deep semantic analysis techniques with simple shallow word overlap; the resulting hybrid model achieves high accuracy on the RTE testset, given the state of the art. Our results also show that the different techniques that we employ perform very differently on some of the subsets of the RTE corpus and as a result, it is useful to use the nature of the dataset as a feature.", "We present three approaches for unsupervised grammar induction that are sensitive to data complexity and apply them to Klein and Manning's Dependency Model with Valence. The first, Baby Steps, bootstraps itself via iterated learning of increasingly longer sentences and requires no initialization. This method substantially exceeds Klein and Manning's published scores and achieves 39.4 accuracy on Section 23 (all sentences) of the Wall Street Journal corpus. The second, Less is More, uses a low-complexity subset of the available data: sentences up to length 15. Focusing on fewer but simpler examples trades off quantity against ambiguity; it attains 44.1 accuracy, using the standard linguistically-informed prior and batch training, beating state-of-the-art. 
Leapfrog, our third heuristic, combines Less is More with Baby Steps by mixing their models of shorter sentences, then rapidly ramping up exposure to the full training set, driving up accuracy to 45.0 . These trends generalize to the Brown corpus; awareness of data complexity may improve other parsing models and unsupervised algorithms." ] }
1908.01379
2964814808
Depth acquisition, based on active illumination, is essential for autonomous and robotic navigation. LiDARs (Light Detection And Ranging) with mechanical, fixed, sampling templates are commonly used in today's autonomous vehicles. An emerging technology, based on solid-state depth sensors, with no mechanical parts, allows fast, adaptive, programmable scans. In this paper, we investigate the topic of adaptive, image-driven, sampling and reconstruction strategies. First, we formulate a piece-wise linear depth model with several tolerance parameters and estimate its validity for indoor and outdoor scenes. Our model and experiments predict that, in the optimal case, about 20-60 piece-wise linear structures can approximate well a depth map. This translates to a depth-to-image sampling ratio of about 1 1200. We propose a simple, generic, sampling and reconstruction algorithm, based on super-pixels. We reach a sampling rate which is still far from the optimal case. However, our sampling improves grid and random sampling, consistently, for a wide variety of reconstruction methods. Moreover, our proposed reconstruction achieves state-of-the-art results, compared to image-guided depth completion algorithms, reducing the required sampling rate by a factor of 3-4. A single-pixel depth camera built in our lab illustrates the concept.
In contrast, guided methods exploit the connection between a depth map and its corresponding color image. Earlier methods used traditional image processing tools @cite_11 @cite_15 . Recently, several deep learning-based methods @cite_26 @cite_25 @cite_4 @cite_17 @cite_3 @cite_30 @cite_14 @cite_1 have achieved state-of-the-art results.
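As an example of the classical image-guided idea, the sketch below fills missing depth values with a joint (cross) bilateral filter whose weights combine spatial proximity and color similarity in the guide image; the parameters and the brute-force loop are illustrative and do not correspond to any specific cited method.

```python
import numpy as np

def joint_bilateral_fill(depth, mask, guide, radius=5, sigma_s=3.0, sigma_r=0.1):
    """Fill missing depth values (mask == 0) by averaging known samples in a
    window, weighted by spatial distance and by color similarity in `guide`."""
    h, w = depth.shape
    out = depth.copy()
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                continue                     # keep measured samples as-is
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            sp = spatial[i0 - i + radius:i1 - i + radius,
                         j0 - j + radius:j1 - j + radius]
            rng = np.exp(-((guide[i0:i1, j0:j1] - guide[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = sp * rng * mask[i0:i1, j0:j1]
            if wgt.sum() > 0:
                out[i, j] = (wgt * depth[i0:i1, j0:j1]).sum() / wgt.sum()
    return out

# Toy example: a 100x100 grayscale guide and ~1% random depth samples.
gen = np.random.default_rng(0)
guide = gen.random((100, 100))
mask = gen.random((100, 100)) < 0.01         # sampled pixel locations
sparse_depth = guide * 10.0 * mask           # depth known only where mask is True
dense_depth = joint_bilateral_fill(sparse_depth, mask, guide)
```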
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_4", "@cite_14", "@cite_1", "@cite_17", "@cite_3", "@cite_15", "@cite_25", "@cite_11" ], "mid": [ "15713842", "2471239744", "2963395931", "2557414982" ], "abstract": [ "In this paper, we propose to conduct inpainting and upsampling for defective depth maps when aligned color images are given. These tasks are referred to as the guided depth enhancement problem. We formulate the problem based on the heat diffusion framework. The pixels with known depth values are treated as the heat sources and the depth enhancement is performed via diffusing the depth from these sources to unknown regions. The diffusion conductivity is designed in terms of the guidance color image so that a linear anisotropic diffusion problem is formed. We further cast the steady state problem of this diffusion into the famous random walk model, by which the enhancement is achieved efficiently by solving a sparse linear system. The proposed algorithm is quantitatively evaluated on the Middlebury stereo dataset and is applied to inpaint Kinect data and upsample Lidar's range data. Comparisons to the commonly used bilateral filter and Markov Random Field based methods are also presented, showing that our algorithm is competent.", "Inferring the underlying depth map from a single image is an ill-posed and inherently ambiguous problem. State-of-the-art deep learning methods can now estimate accurate depth maps, but when projected into 3D, still lack local detail and are often highly distorted. We propose a multi-scale convolution neural network to learn from single RGB images fine-scaled depth maps that result in realistic 3D reconstructions. To encourage spatial coherency, we introduce spatial coordinate feature maps and a local relative depth constraint. In our network, the three scales are closely integrated with skip fusion layers, making it highly efficient to train with large-scale data. Experiments on the NYU Depth v2 dataset shows that our depth predictions are not only competitive with state-of-the-art but also leading to 3D reconstructions that are accurate and rich with detail.", "Most of the traditional work on intrinsic image decomposition rely on deriving priors about scene characteristics. On the other hand, recent research use deep learning models as in-and-out black box and do not consider the well-established, traditional image formation process as the basis of their intrinsic learning process. As a consequence, although current deep learning approaches show superior performance when considering quantitative benchmark results, traditional approaches are still dominant in achieving high qualitative results. In this paper, the aim is to exploit the best of the two worlds. A method is proposed that (1) is empowered by deep learning capabilities, (2) considers a physics-based reflection model to steer the learning process, and (3) exploits the traditional approach to obtain intrinsic images by exploiting reflectance and shading gradient information. The proposed model is fast to compute and allows for the integration of all intrinsic components. To train the new model, an object centered large-scale datasets with intrinsic ground-truth images are created. The evaluation results demonstrate that the new model outperforms existing methods. Visual inspection shows that the image formation loss function augments color reproduction and the use of gradient information produces sharper edges. 
Datasets, models and higher resolution images are available at https: ivi.fnwi.uva.nl cv retinet.", "Recent advances in deep learning have shown exciting promise in filling large holes in natural images with semantically plausible and context aware details, impacting fundamental image manipulation tasks such as object removal. While these learning-based methods are significantly more effective in capturing high-level features than prior techniques, they can only handle very low-resolution inputs due to memory limitations and difficulty in training. Even for slightly larger images, the inpainted regions would appear blurry and unpleasant boundaries become visible. We propose a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network. We evaluate our method on the ImageNet and Paris Streetview datasets and achieved state-of-the-art inpainting accuracy. We show our approach produces sharper and more coherent results than prior methods, especially for high-resolution images." ] }
1908.01379
2964814808
Depth acquisition, based on active illumination, is essential for autonomous and robotic navigation. LiDARs (Light Detection And Ranging) with mechanical, fixed, sampling templates are commonly used in today's autonomous vehicles. An emerging technology, based on solid-state depth sensors, with no mechanical parts, allows fast, adaptive, programmable scans. In this paper, we investigate the topic of adaptive, image-driven, sampling and reconstruction strategies. First, we formulate a piece-wise linear depth model with several tolerance parameters and estimate its validity for indoor and outdoor scenes. Our model and experiments predict that, in the optimal case, about 20-60 piece-wise linear structures can approximate a depth map well. This translates to a depth-to-image sampling ratio of about 1/1200. We propose a simple, generic, sampling and reconstruction algorithm, based on super-pixels. We reach a sampling rate which is still far from the optimal case. However, our sampling consistently improves on grid and random sampling for a wide variety of reconstruction methods. Moreover, our proposed reconstruction achieves state-of-the-art results, compared to image-guided depth completion algorithms, reducing the required sampling rate by a factor of 3-4. A single-pixel depth camera built in our lab illustrates the concept.
Early guided depth sampling: Despite the intensive development of depth completion, the issue of adaptive sampling has received little attention. Only @cite_5 @cite_21 have offered a non-trivial sampling pattern (i.e., one that is not uniformly random or a regular grid) as a step prior to depth reconstruction. Both studies place samples at the locations most likely to exhibit strong depth gradients. Nonetheless, they fail to handle very low sampling budgets of less than 5%.

Nonuniform sampling: Over the years, the field of nonuniform sampling has been well established @cite_31 @cite_7 @cite_22 @cite_24 . However, these studies focus on reconstructing the signal from a given nonuniform sampling pattern, not on how to design data-driven patterns given side information.
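As a toy version of the shared intuition in @cite_5 @cite_21 (place depth samples where the guide image suggests strong depth gradients), the sketch below simply ranks pixels by image-gradient magnitude and spends the sampling budget on the strongest ones; the budget value in the comment is only an example.

```python
import numpy as np

def gradient_guided_samples(gray, budget):
    """Return 'budget' (row, col) locations with the strongest image gradients,
    a crude proxy for likely depth discontinuities."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    top = np.argsort(mag.ravel())[::-1][:budget]        # strongest gradients first
    ys, xs = np.unravel_index(top, gray.shape)
    return list(zip(ys.tolist(), xs.tolist()))

# e.g. a 0.5% budget on a 480x640 guide image:
# samples = gradient_guided_samples(gray, budget=int(0.005 * 480 * 640))
```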
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_21", "@cite_24", "@cite_5", "@cite_31" ], "mid": [ "2114129195", "2122735658", "2141341167", "2061158652" ], "abstract": [ "We address the problem of reconstructing a multiband signal from its sub-Nyquist pointwise samples, when the band locations are unknown. Our approach assumes an existing multi-coset sampling. To date, recovery methods for this sampling strategy ensure perfect reconstruction either when the band locations are known, or under strict restrictions on the possible spectral supports. In this paper, only the number of bands and their widths are assumed without any other limitations on the support. We describe how to choose the parameters of the multi-coset sampling so that a unique multiband signal matches the given samples. To recover the signal, the continuous reconstruction is replaced by a single finite-dimensional problem without the need for discretization. The resulting problem is studied within the framework of compressed sensing, and thus can be solved efficiently using known tractable algorithms from this emerging area. We also develop a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery. Our method ensures perfect reconstruction for a wide class of signals sampled at the minimal rate, and provides a first systematic study of compressed sensing in a truly analog setting. Numerical experiments are presented demonstrating blind sampling and reconstruction with minimal sampling rate.", "A constructive solution of the irregular sampling problem for band- limited functions is given. We show how a band-limited function can be com- pletely reconstructed from any random sampling set whose density is higher than the Nyquist rate, and give precise estimates for the speed of convergence of this iteration method. Variations of this algorithm allow for irregular sampling with derivatives, reconstruction of band-limited functions from local averages, and irregular sampling of multivariate band-limited functions. In the irregular sampling problem one is asked whether and how a band- limited function can be completely reconstructed from its irregularly sam- pled values f(xi). This has many applications in signal and image processing, seismology, meteorology, medical imaging, etc. Finding constructive solutions of this problem has received considerable attention among mathematicians and engineers. The mathematical literature provides several uniqueness results (1, 2, 17, 18, 19). It is now part of the folklore that for stable sampling the sampling rate must be at least the Nyquist rate (18). These results, as deep as they are, have had little impact for the applied sciences, because they were not constructive. If the sampling set is just a perturbation of the regular oversampling, then a reconstruction method has been obtained in a seminal paper by Duffin and Schaeffer (6) (see also (29)): if for some L > 0, a > 0, and o > 0 the sampling points xk , k e Z , satisfy (a) - ok a, k ^ I, then the norm equivalence A iR (x) ) with w < n o. This norm equivalence implies that it is possible to reconstruct through an iterative procedure, the so-called frame method. Most of the later work on constructive methods consists of variations of this method (3, 21, 22, 26). The above conditions on the sampling set exclude random irregular sampling sets, e.g., sets with regions of higher sampling density. 
A partial, but undesirable remedy, to handle highly irregular sampling sets, would be to force the above conditions by throwing away information on part of the points and accept a very slow convergence of the iteration.", "We examine the question of reconstruction of signals from periodic nonuniform samples. This involves discarding samples from a uniformly sampled signal in some periodic fashion. We give a characterization of the signals that can be reconstructed at exactly the minimum rate once a nonuniform sampling pattern has been fixed. We give an implicit characterization of the reconstruction system, and a design method by which the ideal reconstruction filters may be approximated. We demonstrate that for certain spectral supports the minimum rate can be approached or achieved using reconstruction schemes of much lower complexity than those arrived at by using spectral slicing, as in earlier work. Previous work on multiband signals have typically been those for which restrictive assumptions on the sizes and positions of the bands have been made, or where the minimum rate was approached asymptotically. We show that the class of multiband signals which can be reconstructed exactly is shown to be far larger than previously considered. When approaching the minimum rate, this freedom allows us, in certain cases to have a far less complex reconstruction system.", "We study the problem of optimal sub-Nyquist sampling for perfect reconstruction of multiband signals. The signals are assumed to have a known spectral support spl Fscr that does not tile under translation. Such signals admit perfect reconstruction from periodic nonuniform sampling at rates approaching Landau's (1967) lower bound equal to the measure of spl Fscr . For signals with sparse spl Fscr , this rate can be much smaller than the Nyquist rate. Unfortunately the reduced sampling rates afforded by this scheme can be accompanied by increased error sensitivity. In a previous study, we derived bounds on the error due to mismodeling and sample additive noise. Adopting these bounds as performance measures, we consider the problems of optimizing the reconstruction sections of the system, choosing the optimal base sampling rate, and designing the nonuniform sampling pattern. We find that optimizing these parameters can improve system performance significantly. Furthermore, uniform sampling is optimal for signals with spl Fscr that tiles under translation. For signals with nontiling spl Fscr , which are not amenable to efficient uniform sampling, the results reveal increased error sensitivities with sub-Nyquist sampling. However, these can be controlled by optimal design, demonstrating the potential for practical multifold reductions in sampling rate." ] }
1908.01423
2966059979
Games are often designed to shape player behavior in a desired way; however, it can be unclear how design decisions affect the space of behaviors in a game. Designers usually explore this space through human playtesting, which can be time-consuming and of limited effectiveness in exhausting the space of possible behaviors. In this paper, we propose the use of automated planning agents to simulate humans of varying skill levels to generate game playthroughs. Metrics can then be gathered from these playthroughs to evaluate the current game design and identify its potential flaws. We demonstrate this technique in two games: the popular word game Scrabble and a collectible card game of our own design named Cardonomicon. Using these case studies, we show how using simulated agents to model humans of varying skill levels allows us to extract metrics to describe game balance (in the case of Scrabble) and highlight potential design flaws (in the case of Cardonomicon).
Automated game generation researchers have simulated player behavior using search to reach goal states @cite_34 @cite_30 @cite_47 (possibly subject to constraints on the states visited in the reachability check @cite_22 ) and hand-coded heuristics for learnability @cite_5 or design aesthetics such as balance @cite_42 @cite_23 . By contrast, games user research and game analytics @cite_4 emphasize aggregate properties of sets of player behaviors in a game, such as qualitative patterns in player solution strategies @cite_46 , metrics on game length or diversity of options @cite_35 , or clusters of types of players @cite_9 @cite_7 . Ideally, automated design analysis tools should facilitate reasoning about aggregate properties of player behavior in a game without requiring exhaustive user testing. As many game state spaces cannot be exhaustively explored, there is a need for methods to sample potential player behaviors in a game @cite_27 .
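As a small, hypothetical example of the kind of aggregate properties meant here, the snippet below summarizes a set of simulated playtraces (each represented, for illustration, as a list of action labels) by mean game length and the entropy of the actions taken; the trace representation and metric names are assumptions, not taken from the cited works.

```python
from collections import Counter
import math

def aggregate_metrics(playtraces):
    """Aggregate properties of a set of playtraces: mean length and action diversity."""
    lengths = [len(trace) for trace in playtraces]
    counts = Counter(action for trace in playtraces for action in trace)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return {"mean_length": sum(lengths) / len(lengths), "action_entropy": entropy}

print(aggregate_metrics([["draw", "play", "pass"], ["draw", "draw", "play", "play"]]))
```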
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_4", "@cite_22", "@cite_7", "@cite_46", "@cite_9", "@cite_42", "@cite_27", "@cite_23", "@cite_5", "@cite_47", "@cite_34" ], "mid": [ "2011974988", "2152729775", "2038193812", "2337798556" ], "abstract": [ "Estimating affective and cognitive states in conditions of rich human-computer interaction, such as in games, is a field of growing academic and commercial interest. Entertainment and serious games can benefit from recent advances in the field as, having access to predictors of the current state of the player (or learner) can provide useful information for feeding adaptation mechanisms that aim to maximize engagement or learning effects. In this paper, we introduce a large data corpus derived from 58 participants that play the popular Super Mario Bros platform game and attempt to create accurate models of player experience for this game genre. Within the view of the current research, features extracted both from player gameplay behavior and game levels, and player visual characteristics have been used as potential indicators of reported affect expressed as pairwise preferences between different game sessions. Using neuroevolutionary preference learning and automatic feature selection, highly accurate models of reported engagement, frustration, and challenge are constructed (model accuracies reach 91 , 92 , and 88 for engagement, frustration, and challenge, respectively). As a step further, the derived player experience models can be used to personalize the game level to desired levels of engagement, frustration, and challenge as game content is mapped to player experience through the behavioral and expressivity patterns of each player.", "Developing computer-controlled groups to engage in combat, control the use of limited resources, and create units and buildings in real-time strategy (RTS) games is a novel application in game AI. However, tightly controlled online commercial game pose challenges to researchers interested in observing player activities, constructing player strategy models, and developing practical AI technology in them. Instead of setting up new programming environments or building a large amount of agentpsilas decision rules by playerpsilas experience for conducting real-time AI research, the authors use replays of the commercial RTS game StarCraft to evaluate human player behaviors and to construct an intelligent system to learn human-like decisions and behaviors. A case-based reasoning approach was applied for the purpose of training our system to learn and predict player strategies. Our analysis indicates that the proposed system is capable of learning and predicting individual player strategies, and that players provide evidence of their personal characteristics through their building construction order.", "We consider two-player games played over finite state spaces for an infinite number of rounds. At each state, the players simultaneously choose moves; the moves determine a successor state. It is often advantageous for players to choose probability distributions over moves, rather than single moves. Given a goal, for example, reach a target state, the question of winning is thus a probabilistic one: what is the maximal probability of winning from a given state? On these game structures, two fundamental notions are those of equivalences and metrics. Given a set of winning conditions, two states are equivalent if the players can win the same games with the same probability from both states. 
Metrics provide a bound on the difference in the probabilities of winning across states, capturing a quantitative notion of state similarity. We introduce equivalences and metrics for two-player game structures, and we show that they characterize the difference in probability of winning games whose goals are expressed in the quantitative mu-calculus. The quantitative mu-calculus can express a large set of goals, including reachability, safety, and omega-regular properties. Thus, we claim that our relations and metrics provide the canonical extensions to games, of the classical notion of bisimulation for transition systems. We develop our results both for equivalences and metrics, which generalize bisimulation, and for asymmetrical versions, which generalize simulation.", "We propose a hybrid model for automatically acquiring a policy for a complex game, which combines online learning with mining knowledge from a corpus of human game play. Our hypothesis is that a player that learns its policies by combining (online) exploration with biases towards human behaviour that's attested in a corpus of humans playing the game will outperform any agent that uses only one of the knowledge sources. During game play, the agent extracts similar moves made by players in the corpus in similar situations, and approximates their utility alongside other possible options by performing simulations from its current state. We implement and assess our model in an agent playing the complex win-lose board game Settlers of Catan, which lacks an implementation that would challenge a human expert. The results from the preliminary set of experiments illustrate the potential of such a joint model." ] }
1908.01423
2966059979
Games are often designed to shape player behavior in a desired way; however, it can be unclear how design decisions affect the space of behaviors in a game. Designers usually explore this space through human playtesting, which can be time-consuming and of limited effectiveness in exhausting the space of possible behaviors. In this paper, we propose the use of automated planning agents to simulate humans of varying skill levels to generate game playthroughs. Metrics can then be gathered from these playthroughs to evaluate the current game design and identify its potential flaws. We demonstrate this technique in two games: the popular word game Scrabble and a collectible card game of our own design named Cardonomicon. Using these case studies, we show how using simulated agents to model humans of varying skill levels allows us to extract metrics to describe game balance (in the case of Scrabble) and highlight potential design flaws (in the case of Cardonomicon).
Evaluating potential player behavior for an arbitrary game design is a central concern of general game playing research @cite_41 @cite_8 . Recently, Monte Carlo Tree Search (MCTS) has emerged as a popular technique in general game playing after the successful application of MCTS to the game of Go @cite_10 . Game applications of MCTS include: card selection and play in Magic: The Gathering @cite_36 @cite_3 ; platformer level completion @cite_37 @cite_38 ; simulations for fitness function heuristics in strategy @cite_17 , card @cite_28 , abstract real-time planning @cite_24 , and general arcade games @cite_21 ; and high-level play in board games including Reversi and Hex @cite_1 . MCTS offers the advantages of being game-agnostic, having tunable computational cost, and guaranteeing (eventual) complete exploration of the search space. Unlike previous uses for (near-) optimal agent play, we use MCTS to sample playtraces in a game while varying agent computational bounds as a proxy for player skill. Our approach trades exhaustively exploring a small or abstracted design space for sampling larger, non-deterministic game domains, complementing prior work in this area.
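To illustrate how an iteration budget can stand in for player skill, here is a minimal, single-perspective UCT sketch. The Game interface (legal_moves, next_state, is_terminal, reward) is hypothetical, and turn alternation, opponent modeling and domain heuristics are deliberately omitted; this is a sketch of the generic technique, not the agents used in the paper.

```python
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def uct_search(game, root_state, iterations):
    """Plain UCT; 'iterations' is the computational bound used as a skill proxy."""
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. selection: descend through expanded nodes by UCB1
        while node.children and not game.is_terminal(node.state):
            node = max(node.children.values(),
                       key=lambda c: c.value / (c.visits + 1e-9)
                       + math.sqrt(2 * math.log(node.visits + 1) / (c.visits + 1e-9)))
        # 2. expansion: add all children of a non-terminal leaf, then pick one
        if not node.children and not game.is_terminal(node.state):
            for m in game.legal_moves(node.state):
                node.children[m] = Node(game.next_state(node.state, m), parent=node)
            node = random.choice(list(node.children.values()))
        # 3. rollout: play randomly to the end of the game
        s = node.state
        while not game.is_terminal(s):
            s = game.next_state(s, random.choice(game.legal_moves(s)))
        reward = game.reward(s)
        # 4. backpropagation
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

# weak_move   = uct_search(game, state, iterations=50)     # low-skill proxy
# strong_move = uct_search(game, state, iterations=5000)   # high-skill proxy
```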
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_8", "@cite_41", "@cite_36", "@cite_28", "@cite_21", "@cite_1", "@cite_3", "@cite_24", "@cite_10", "@cite_17" ], "mid": [ "2077755260", "2065339974", "1991331404", "2038245069" ], "abstract": [ "In this paper, we examine the use of Monte Carlo tree search (MCTS) for a variant of one of the most popular and profitable games in the world: the card game Magic: The Gathering (M:TG). The game tree for M:TG has a range of distinctive features, which we discuss here; it has incomplete information through the opponent's hidden cards and randomness through card drawing from a shuffled deck. We investigate a wide range of approaches that use determinization, where all hidden and random information is assumed known to all players, alongside MCTS. We consider a number of variations to the rollout strategy using a range of levels of sophistication and expert knowledge, and decaying reward to encourage play urgency. We examine the effect of utilizing various pruning strategies in order to increase the information gained from each determinization, alongside methods that increase the relevance of random choices. Additionally, we deconstruct the move generation procedure into a binary yes no decision tree and apply MCTS to this finer grained decision process. We compare our modifications to a basic MCTS approach for M:TG using fixed decks, and show that significant improvements in playing strength can be obtained.", "In this paper, Monte Carlo tree search (MCTS) is introduced for controlling the Pac-Man character in the real-time game Ms Pac-Man. MCTS is used to find an optimal path for an agent at each turn, determining the move to make based on the results of numerous randomized simulations. Several enhancements are introduced in order to adapt MCTS to the real-time domain. Ms Pac-Man is an arcade game, in which the protagonist has several goals but no conclusive terminal state. Unlike games such as Chess or Go there is no state in which the player wins the game. Instead, the game has two subgoals, 1) surviving and 2) scoring as many points as possible. Decisions must be made in a strict time constraint of 40 ms. The Pac-Man agent has to compete with a range of different ghost teams, hence limited assumptions can be made about their behavior. In order to expand the capabilities of existing MCTS agents, four enhancements are discussed: 1) a variable-depth tree; 2) simulation strategies for the ghost team and Pac-Man; 3) including long-term goals in scoring; and 4) reusing the search tree for several moves with a decay factor γ. The agent described in this paper was entered in both the 2012 World Congress on Computational Intelligence (WCCI'12, Brisbane, Qld., Australia) and the 2012 IEEE Conference on Computational Intelligence and Games (CIG'12, Granada, Spain) Pac-Man Versus Ghost Team competitions, where it achieved second and first places, respectively. In the experiments, we show that using MCTS is a viable technique for the Pac-Man agent. Moreover, the enhancements improve overall performance against four different ghost teams.", "Monte Carlo Tree Search (MCTS) is applied to control the player character in a clone of the popular platform game Super Mario Bros. Standard MCTS is applied through search in state space with the goal of moving the furthest to the right as quickly as possible. Despite parameter tuning, only moderate success is reached. 
Several modifications to the algorithm are then introduced specifically to deal with the behavioural pathologies that were observed. Two of the modifications are to our best knowledge novel. A combination of these modifications is found to lead to almost perfect play on linear levels. Furthermore, when adding noise to the benchmark, MCTS outperforms the best known algorithm for these levels. The analysis and algorithmic innovations in this paper are likely to be useful when applying MCTS to other video games.", "Monte Carlo tree search (MCTS) has brought about great success regarding the evaluation of stochastic and deterministic games in recent years. We present and empirically analyze a data-driven parallelization approach for MCTS targeting large HPC clusters with Infiniband interconnect. Our implementation is based on OpenMPI and makes extensive use of its RDMA based asynchronous tiny message communication capabilities for effectively overlapping communication and computation. We integrate our parallel MCTS approach termed UCT-Treesplit in our state-of-the-art Go engine Gomorra and measure its strengths and limitations in a real-world setting. Our extensive experiments show that we can scale up to 128 compute nodes and 2048 cores in self-play experiments and, furthermore, give promising directions for additional improvement. The generality of our parallelization approach advocates its use to significantly improve the search quality of a huge number of current MCTS applications." ] }
1908.01207
2965683718
Modeling sequential interactions between users and items/products is crucial in domains such as e-commerce, social networking, and education. Representation learning presents an attractive opportunity to model the dynamic evolution of users and items, where each user/item can be embedded in a Euclidean space and its evolution can be modeled by an embedding trajectory in this space. However, existing dynamic embedding methods generate embeddings only when users take actions and do not explicitly model the future trajectory of the user/item in the embedding space. Here we propose JODIE, a coupled recurrent neural network model that learns the embedding trajectories of users and items. JODIE employs two recurrent neural networks to update the embedding of a user and an item at every interaction. Crucially, JODIE also models the future embedding trajectory of a user/item. To this end, it introduces a novel projection operator that learns to estimate the embedding of the user at any time in the future. These estimated embeddings are then used to predict future user-item interactions. To make the method scalable, we develop a t-Batch algorithm that creates time-consistent batches and leads to 9x faster training. We conduct six experiments to validate JODIE on two prediction tasks---future interaction prediction and state change prediction---using four real-world datasets. We show that JODIE outperforms six state-of-the-art algorithms in these tasks by at least 20% in predicting future interactions and 12% in state change prediction.
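The following numpy fragment sketches the flavor of the projection step described above: drift the last observed user embedding forward as an element-wise function of the elapsed time. The shapes, initialization and exact parameterization are assumptions made for illustration and are not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                             # embedding size (assumed)
W_proj = rng.normal(scale=0.1, size=(1, d))       # learned map from elapsed time to drift

def project(user_emb, delta_t):
    """Estimate the user embedding delta_t time units after its last update."""
    drift = 1.0 + (np.array([[float(delta_t)]]) @ W_proj).ravel()
    return user_emb * drift                       # element-wise, time-scaled modulation

u_now = rng.normal(size=d)                        # embedding right after an interaction
u_soon = project(u_now, delta_t=0.5)              # projected embedding used for prediction
```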
Several models have recently been developed that generate embeddings for the nodes (users and items) in temporal networks. CTDNE @cite_15 is a state-of-the-art algorithm that generates embeddings using temporally-increasing random walks, but it generates one final static embedding of the nodes. Similarly, IGE @cite_12 generates one final embedding of users and items from interaction graphs. Therefore, both these methods (CTDNE and IGE) need to be re-run for every new edge to create dynamic embeddings. Another recent algorithm, DynamicTriad @cite_38 learns dynamic embeddings but does not work on bipartite interaction networks as it requires the presence of triads. Other recent algorithms such as DDNE @cite_44 , DANE @cite_51 , DynGem @cite_40 , @cite_41 , and @cite_48 learn embeddings from a sequence of graph snapshots, which is not applicable to our setting of continuous interaction data. Recent models such as NP-GLM model @cite_46 , DGNN @cite_43 , and DyRep @cite_31 learn embeddings from persistent links between nodes, which do not exist in interaction networks as the edges represent instantaneous interactions.
{ "cite_N": [ "@cite_38", "@cite_31", "@cite_41", "@cite_48", "@cite_46", "@cite_44", "@cite_43", "@cite_40", "@cite_15", "@cite_51", "@cite_12" ], "mid": [ "2767597557", "2903096638", "2808908091", "2951077644" ], "abstract": [ "Node embedding techniques have gained prominence since they produce continuous and low-dimensional features, which are effective for various tasks. Most existing approaches learn node embeddings by exploring the structure of networks and are mainly focused on static non-attributed graphs. However, many real-world applications, such as stock markets and public review websites, involve bipartite graphs with dynamic and attributed edges, called attributed interaction graphs. Different from conventional graph data, attributed interaction graphs involve two kinds of entities (e.g. investors stocks and users businesses) and edges of temporal interactions with attributes (e.g. transactions and reviews). In this paper, we study the problem of node embedding in attributed interaction graphs. Learning embeddings in interaction graphs is highly challenging due to the dynamics and heterogeneous attributes of edges. Different from conventional static graphs, in attributed interaction graphs, each edge can have totally different meanings when the interaction is at different times or associated with different attributes. We propose a deep node embedding method called IGE (Interaction Graph Embedding). IGE is composed of three neural networks: an encoding network is proposed to transform attributes into a fixed-length vector to deal with the heterogeneity of attributes; then encoded attribute vectors interact with nodes multiplicatively in two coupled prediction networks that investigate the temporal dependency by treating incident edges of a node as the analogy of a sentence in word embedding methods. The encoding network can be specifically designed for different datasets as long as it is differentiable, in which case it can be trained together with prediction networks by back-propagation. We evaluate our proposed method and various comparing methods on four real-world datasets. The experimental results prove the effectiveness of the learned embeddings by IGE on both node clustering and classification tasks.", "Modeling a sequence of interactions between users and items (e.g., products, posts, or courses) is crucial in domains such as e-commerce, social networking, and education to predict future interactions. Representation learning presents an attractive solution to model the dynamic evolution of user and item properties, where each user item can be embedded in a euclidean space and its evolution can be modeled by dynamic changes in embedding. However, existing embedding methods either generate static embeddings, treat users and items independently, or are not scalable. Here we present JODIE, a coupled recurrent model to jointly learn the dynamic embeddings of users and items from a sequence of user-item interactions. JODIE has three components. First, the update component updates the user and item embedding from each interaction using their previous embeddings with the two mutually-recursive Recurrent Neural Networks. Second, a novel projection component is trained to forecast the embedding of users at any future time. Finally, the prediction component directly predicts the embedding of the item in a future interaction. For models that learn from a sequence of interactions, traditional training data batching cannot be done due to complex user-user dependencies. 
Therefore, we present a novel batching algorithm called t-Batch that generates time-consistent batches of training data that can run in parallel, giving massive speed-up. We conduct six experiments on two prediction tasks---future interaction prediction and state change prediction---using four real-world datasets. We show that JODIE outperforms six state-of-the-art algorithms in these tasks by up to 22.4 . Moreover, we show that JODIE is highly scalable and up to 9.2x faster than comparable models. As an additional experiment, we illustrate that JODIE can predict student drop-out from courses five interactions in advance.", "Given the rich real-life applications of network mining as well as the surge of representation learning in recent years, network embedding has become the focal point of increasing research interests in both academic and industrial domains. Nevertheless, the complete temporal formation process of networks characterized by sequential interactive events between nodes has yet seldom been modeled in the existing studies, which calls for further research on the so-called temporal network embedding problem. In light of this, in this paper, we introduce the concept of neighborhood formation sequence to describe the evolution of a node, where temporal excitation effects exist between neighbors in the sequence, and thus we propose a Hawkes process based Temporal Network Embedding (HTNE) method. HTNE well integrates the Hawkes process into network embedding so as to capture the influence of historical neighbors on the current neighbors. In particular, the interactions of low-dimensional vectors are fed into the Hawkes process as base rate and temporal influence, respectively. In addition, attention mechanism is also integrated into HTNE to better determine the influence of historical neighbors on current neighbors of a node. Experiments on three large-scale real-life networks demonstrate that the embeddings learned from the proposed HTNE model achieve better performance than state-of-the-art methods in various tasks including node classification, link prediction, and embedding visualization. In particular, temporal recommendation based on arrival rate inferred from node embeddings shows excellent predictive power of the proposed model.", "We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (, 2013) and TransE (, 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2 vs. 54.7 by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as \"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-of-the-art confidence-based rule mining approach in mining Horn rules that involve compositional reasoning." ] }
1908.01367
2965871844
For ego-motion estimation, the feature representation of the scenes is crucial. Previous methods indicate that both the low-level and semantic feature-based methods can achieve promising results. Therefore, the incorporation of hierarchical feature representation may benefit from both methods. From this perspective, we propose a novel direct feature odometry framework, named DFO, for depth estimation and hierarchical feature representation learning from monocular videos. By exploiting the metric distance, our framework is able to learn the hierarchical feature representation without supervision. The pose is obtained with a coarse-to-fine approach from high-level to low-level features in enlarged feature maps. The pixel-level attention mask can be self-learned to provide the prior information. In contrast to the previous methods, our proposed method calculates the camera motion with a direct method rather than regressing the ego-motion from the pose network. With this approach, the consistency of the scale factor of translation can be constrained. Additionally, the proposed method is thus compatible with the traditional SLAM pipeline. Experiments on the KITTI dataset demonstrate the effectiveness of our method.
The SLAM field was dominated by feature-based (indirect) methods for a long time. In recent years, a number of different approaches have gained in popularity, such as direct methods and semantic SLAM @cite_10 . Unlike feature-based (indirect) visual odometry methods (e.g., ORB-SLAM @cite_21 ), direct visual odometry methods use image intensity information directly to estimate the motion and geometry of the scene. Hence, feature points do not need to be extracted and matched, and data association and pose estimation are expressed through a photometric loss function. Direct visual odometry methods can achieve higher accuracy and robustness in scenes with few feature points. Semantic SLAM, on the other hand, attempts to localize the robot and map the environment using object-level or geometric information.
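For concreteness, a direct-alignment objective of the kind alluded to here typically takes the following form (the notation is assumed for illustration and is not quoted from the cited systems):

$$E(\xi, d) = \sum_{p \in \Omega} \rho\Big( I_2\big(\pi\big(T(\xi)\,\pi^{-1}(p, d_p)\big)\big) - I_1(p) \Big),$$

where $\pi$ is the camera projection, $d_p$ the depth at pixel $p$, $T(\xi)$ the relative camera pose, and $\rho$ a robust penalty; minimizing $E$ jointly performs data association and pose estimation, as described above.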
{ "cite_N": [ "@cite_21", "@cite_10" ], "mid": [ "2963218091", "2767841593", "612478963", "2411649036" ], "abstract": [ "In this paper, we develop a robust efficient visual SLAM system that utilizes heterogeneous point and line features. By leveraging ORB-SLAM [1], the proposed system consists of stereo matching, frame tracking, local mapping, loop detection, and bundle adjustment of both point and line features. In particular, as the main theoretical contributions of this paper, we, for the first time, employ the orthonormal representation as the minimal parameterization to model line features along with point features in visual SLAM and analytically derive the Jacobians of the re-projection errors with respect to the line parameters, which significantly improves the SLAM solution. The proposed SLAM has been extensively tested in both synthetic and real-world experiments whose results demonstrate that the proposed system outperforms the state-of-the-art methods in various scenarios.", "We propose a visual SLAM (Simultaneous Localization And Mapping) system able to perform robustly in populated environments. The image stream from a moving RGB-D camera is the only input to the system. The computed map in real-time is composed of two layers: 1) The unpopulated geometrical layer, which describes the geometry of the bare scene as an occupancy grid where pieces of information corresponding to people have been removed; 2) A semantic human activity layer, which describes the trajectory of each person with respect to the unpopulated map, labelling an area as \"traversable\" or \"occupied\". Our proposal is to embed a real-time human tracker into the system. The purpose is twofold. First, to mask out of the rigid SLAM pipeline the image regions occupied by people, which boosts the robustness, the relocation, the accuracy and the reusability of the geometrical map in populated scenes. Secondly, to estimate the full trajectory of each detected person with respect to the scene map, irrespective of the location of the moving camera when the person was imaged. The proposal is tested with two popular visual SLAM systems, C2TAM and ORBSLAM2, proving its generality. The experiments process a benchmark of RGB-D sequences from camera onboard a mobile robot. They prove the robustness, accuracy and reuse capabilities of the two layer map for populated scenes.", "We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. 
The resulting direct monocular SLAM system runs in real-time on a CPU.", "The so-called direct visual SLAM methods have shown a great potential in estimating a semidense or fully dense reconstruction of the scene, in contrast to the sparse reconstructions of the traditional feature-based algorithms. In this paper, we propose for the first time a direct, tightly-coupled formulation for the combination of visual and inertial data. Our algorithm runs in real-time on a standard CPU. The processing is split in three threads. The first thread runs at frame rate and estimates the camera motion by a joint non-linear optimization from visual and inertial data given a semidense map. The second one creates a semidense map of high-gradient areas only for camera tracking purposes. Finally, the third thread estimates a fully dense reconstruction of the scene at a lower frame rate. We have evaluated our algorithm in several real sequences with ground truth trajectory data, showing a state-of-the-art performance." ] }
1908.01367
2965871844
For ego-motion estimation, the feature representation of the scenes is crucial. Previous methods indicate that both the low-level and semantic feature-based methods can achieve promising results. Therefore, the incorporation of hierarchical feature representation may benefit from both methods. From this perspective, we propose a novel direct feature odometry framework, named DFO, for depth estimation and hierarchical feature representation learning from monocular videos. By exploiting the metric distance, our framework is able to learn the hierarchical feature representation without supervision. The pose is obtained with a coarse-to-fine approach from high-level to low-level features in enlarged feature maps. The pixel-level attention mask can be self-learned to provide the prior information. In contrast to the previous methods, our proposed method calculates the camera motion with a direct method rather than regressing the ego-motion from the pose network. With this approach, the consistency of the scale factor of translation can be constrained. Additionally, the proposed method is thus compatible with the traditional SLAM pipeline. Experiments on the KITTI dataset demonstrate the effectiveness of our method.
These methods may fail under certain conditions such as low texture, stereo ambiguities and occlusions. The quality of feature points is essential for feature-based SLAM methods. Agrawal et al. @cite_15 show that features learned by a convolutional neural network outperform traditional hand-designed features. Learning-based SLAM may thus become a promising direction.
{ "cite_N": [ "@cite_15" ], "mid": [ "2768755505", "2401154299", "2963540914", "2963218091" ], "abstract": [ "Visual SLAM (Simultaneous Localization and Mapping) methods typically rely on handcrafted visual features or raw RGB values for establishing correspondences between images. These features, while suitable for sparse mapping, often lead to ambiguous matches at texture-less regions when performing dense reconstruction due to the aperture problem. In this work, we explore the use of learned features for the matching task in dense monocular reconstruction. We propose a novel convolutional neural network (CNN) architecture along with a deeply supervised feature learning scheme for pixel-wise regression of visual descriptors from an image which are best suited for dense monocular SLAM. In particular, our learning scheme minimizes a multi-view matching cost-volume loss with respect to the regressed features at multiple stages within the network, for explicitly learning contextual features that are suitable for dense matching between images captured by a moving monocular camera along the epipolar line. We utilize the learned features from our model for depth estimation inside a real-time dense monocular SLAM framework, where photometric error is replaced by our learned descriptor error. Our evaluation on several challenging indoor scenes demonstrate greatly improved accuracy in dense reconstructions of the well celebrated dense SLAM systems like DTAM, without compromising their real-time performance.", "Although deep convolutional neural networks (CNNs) have shown remarkable results for feature learning and prediction tasks, many recent studies have demonstrated improved performance by incorporating additional handcrafted features or by fusing predictions from multiple CNNs. Usually, these combinations are implemented via feature concatenation or by averaging output prediction scores from several CNNs. In this paper, we present new approaches for combining different sources of knowledge in deep learning. First, we propose feature amplification, where we use an auxiliary, hand-crafted, feature (e.g. optical flow) to perform spatially varying soft-gating on intermediate CNN feature maps. Second, we present a spatially varying multiplicative fusion method for combining multiple CNNs trained on different sources that results in robust prediction by amplifying or suppressing the feature activations based on their agreement. We test these methods in the context of action recognition where information from spatial and temporal cues is useful, obtaining results that are comparable with state-of-the-art methods and outperform methods using only CNNs and optical flow features.", "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. 
Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https: github.com JiahuiYu generative_inpainting.", "In this paper, we develop a robust efficient visual SLAM system that utilizes heterogeneous point and line features. By leveraging ORB-SLAM [1], the proposed system consists of stereo matching, frame tracking, local mapping, loop detection, and bundle adjustment of both point and line features. In particular, as the main theoretical contributions of this paper, we, for the first time, employ the orthonormal representation as the minimal parameterization to model line features along with point features in visual SLAM and analytically derive the Jacobians of the re-projection errors with respect to the line parameters, which significantly improves the SLAM solution. The proposed SLAM has been extensively tested in both synthetic and real-world experiments whose results demonstrate that the proposed system outperforms the state-of-the-art methods in various scenarios." ] }
1908.01367
2965871844
For ego-motion estimation, the feature representation of the scenes is crucial. Previous methods indicate that both the low-level and semantic feature-based methods can achieve promising results. Therefore, the incorporation of hierarchical feature representation may benefit from both methods. From this perspective, we propose a novel direct feature odometry framework, named DFO, for depth estimation and hierarchical feature representation learning from monocular videos. By exploiting the metric distance, our framework is able to learn the hierarchical feature representation without supervision. The pose is obtained with a coarse-to-fine approach from high-level to low-level features in enlarged feature maps. The pixel-level attention mask can be self-learned to provide the prior information. In contrast to the previous methods, our proposed method calculates the camera motion with a direct method rather than regressing the ego-motion from the pose network. With this approach, the consistency of the scale factor of translation can be constrained. Additionally, the proposed method is thus compatible with the traditional SLAM pipeline. Experiments on the KITTI dataset demonstrate the effectiveness of our method.
Zhou et al. @cite_38 propose a framework that jointly trains the pose estimation and depth prediction networks. In addition to the depth maps and camera poses, the work of Vijayanarasimhan et al. @cite_6 can predict scene motion from an additional subnetwork. To improve the quality of the estimated depth values, several constraints have been introduced. In @cite_7 , optical flow estimation is included as an additional module to separate the rigid structure of the scene from non-rigid objects. Mahjourian et al. @cite_9 , on the other hand, incorporate the estimated point clouds of two consecutive images as additional constraints. Due to a lack of supervision signals (e.g., ground-truth depth values, poses and stereo images), the scale ambiguity of translation and depth may induce the vanishing depth problem during training. Wang et al. @cite_29 observe this phenomenon and alleviate it by normalizing the inverse depth maps. This depth normalization trick greatly improves relative depth estimation at the expense of the scale factor.
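A minimal sketch of the normalization trick attributed to Wang et al. above: divide each predicted inverse-depth map by its own mean, so the network cannot shrink depths globally to lower the photometric loss. Details (such as where the epsilon is placed) are assumptions, and the commented calls name hypothetical functions.

```python
import numpy as np

def normalize_inverse_depth(inv_depth, eps=1e-6):
    """Mean-normalize a predicted inverse-depth map to remove the global scale
    degree of freedom during training (illustrative sketch)."""
    return inv_depth / (inv_depth.mean() + eps)

# inv_depth_pred = depth_net(image)                               # hypothetical network output
# loss = photometric_loss(normalize_inverse_depth(inv_depth_pred), target)  # hypothetical loss
```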
{ "cite_N": [ "@cite_38", "@cite_7", "@cite_9", "@cite_29", "@cite_6" ], "mid": [ "2909119029", "2892197942", "2024336175", "2949634581" ], "abstract": [ "This paper presents a deep network based unsupervised visual odometry system for 6-DoF camera pose estimation and finding dense depth map for its monocular view. The proposed network is trained using unlabeled binocular stereo image pairs and is shown to provide superior performance in depth and ego-motion estimation compared to the existing state-of-the-art. This is achieved by introducing a novel objective function and training the network using temporally alligned sequences of monocular images. The objective function is based on the Charbonnier penalty applied to spatial and bi-directional temporal reconstruction losses. The overall novelty of the approach lies in the fact that the proposed deep framework combines a disparity-based depth estimation network with a pose estimation network to obtain absolute scale-aware 6-DoF camera pose and superior depth map. According to our knowledge, such a framework with complete unsupervised end-to-end learning has not been tried so far, making it a novel contribution in the field. The effectiveness of the approach is demonstrated through performance comparison with the state-of-the-art methods on KITTI driving dataset.", "This paper presents an unsupervised deep learning framework called UnDEMoN for estimating dense depth map and 6-DoF camera pose information directly from monocular images. The proposed network is trained using unlabeled monocular stereo image pairs and is shown to provide superior performance in depth and ego-motion estimation compared to the existing state-of-the-art. These improvements are achieved by introducing a new objective function that aims to minimize spatial as well as temporal reconstruction losses simultaneously. These losses are defined using bi-linear sampling kernel and penalized using the Charbonnier penalty function. The objective function, thus created, provides robustness to image gradient noises thereby improving the overall estimation accuracy without resorting to any coarse to fine strategies which are currently prevalent in the literature. Another novelty lies in the fact that we combine a disparity-based depth estimation network with a pose estimation network to obtain absolute scale-aware 6 DOF Camera pose and superior depth map. The effectiveness of the proposed approach is demonstrated through performance comparison with the existing supervised and unsupervised methods on the KITTI driving dataset.", "Building upon recent developments in optical flow and stereo matching estimation, we propose a variational framework for the estimation of stereoscopic scene flow, i.e., the motion of points in the three-dimensional world from stereo image sequences. The proposed algorithm takes into account image pairs from two consecutive times and computes both depth and a 3D motion vector associated with each point in the image. In contrast to previous works, we partially decouple the depth estimation from the motion estimation, which has many practical advantages. The variational formulation is quite flexible and can handle both sparse or dense disparity maps. The proposed method is very efficient; with the depth map being computed on an FPGA, and the scene flow computed on the GPU, the proposed algorithm runs at frame rates of 20 frames per second on QVGA images (320×240 pixels). 
Furthermore, we present solutions to two important problems in scene flow estimation: violations of intensity consistency between input images, and the uncertainty measures for the scene flow result.", "A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manu- ally labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth predic- tion, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photomet- ric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives com- parable performance to that of the state of art supervised methods for single view depth estimation." ] }
1908.01367
2965871844
For ego-motion estimation, the feature representation of the scenes is crucial. Previous methods indicate that both the low-level and semantic feature-based methods can achieve promising results. Therefore, the incorporation of hierarchical feature representation may benefit from both methods. From this perspective, we propose a novel direct feature odometry framework, named DFO, for depth estimation and hierarchical feature representation learning from monocular videos. By exploiting the metric distance, our framework is able to learn the hierarchical feature representation without supervision. The pose is obtained with a coarse-to-fine approach from high-level to low-level features in enlarged feature maps. The pixel-level attention mask can be self-learned to provide the prior information. In contrast to the previous methods, our proposed method calculates the camera motion with a direct method rather than regressing the ego-motion from the pose network. With this approach, the consistency of the scale factor of translation can be constrained. Additionally, the proposed method is thus compatible with the traditional SLAM pipeline. Experiments on the KITTI dataset demonstrate the effectiveness of our method.
Currently, how best to integrate traditional SLAM methods with deep neural networks is still an open problem. Depth prediction networks can be used in a variety of scenarios, such as keyframe initialization @cite_46 , and scale recovery and virtual stereo constraints @cite_20 . CodeSLAM @cite_39 has been proposed as a framework for a dense representation of scene geometry using a variational autoencoder. A pose network, however, cannot be tightly integrated with the modules widely applied in traditional visual odometry (e.g., bundle adjustment, keyframe selection and loop closure) because it cannot locate the corresponding features in the feature maps.
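As one small example of how a depth network can plug into such a pipeline, the sketch below recovers the global scale of an up-to-scale monocular SLAM map from the median ratio between CNN-predicted and SLAM depths; this is a common heuristic in this line of work, stated here with assumed inputs rather than as the exact scheme of @cite_46 or @cite_20.

```python
import numpy as np

def recover_scale(slam_depth, cnn_depth, valid):
    """Median ratio between CNN depths and (up-to-scale) SLAM depths at valid pixels."""
    ratio = cnn_depth[valid] / np.maximum(slam_depth[valid], 1e-6)
    return float(np.median(ratio))

# scale = recover_scale(slam_depth_map, cnn_depth_map, valid_mask)
# metric_slam_depth = scale * slam_depth_map
```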
{ "cite_N": [ "@cite_46", "@cite_20", "@cite_39" ], "mid": [ "2952280228", "2300779272", "2119493293", "2768755505" ], "abstract": [ "Given the recent advances in depth prediction from Convolutional Neural Networks (CNNs), this paper investigates how predicted depth maps from a deep neural network can be deployed for accurate and dense monocular reconstruction. We propose a method where CNN-predicted dense depth maps are naturally fused together with depth measurements obtained from direct monocular SLAM. Our fusion scheme privileges depth prediction in image locations where monocular SLAM approaches tend to fail, e.g. along low-textured regions, and vice-versa. We demonstrate the use of depth prediction for estimating the absolute scale of the reconstruction, hence overcoming one of the major limitations of monocular SLAM. Finally, we propose a framework to efficiently fuse semantic labels, obtained from a single frame, with dense SLAM, yielding semantically coherent scene reconstruction from a single view. Evaluation results on two benchmark datasets show the robustness and accuracy of our approach.", "A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground-truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation.", "We propose a formulation of monocular SLAM which combines live dense reconstruction with shape priors-based 3D tracking and reconstruction. Current live dense SLAM approaches are limited to the reconstruction of visible surfaces. Moreover, most of them are based on the minimisation of a photo-consistency error, which usually makes them sensitive to specularities. In the 3D pose recovery literature, problems caused by imperfect and ambiguous image information have been dealt with by using prior shape knowledge. At the same time, the success of depth sensors has shown that combining joint image and depth information drastically increases the robustness of the classical monocular 3D tracking and 3D reconstruction approaches. In this work we link dense SLAM to 3D object pose and shape recovery. More specifically, we automatically augment our SLAM system with object specific identity, together with 6D pose and additional shape degrees of freedom for the object(s) of known class in the scene, combining image data and depth information for the pose and shape recovery. 
This leads to a system that allows for full scaled 3D reconstruction with the known object(s) segmented from the scene. The segmentation enhances the clarity, accuracy and completeness of the maps built by the dense SLAM system, while the dense 3D data aids the segmentation process, yielding faster and more reliable convergence than when using 2D image data alone.", "Visual SLAM (Simultaneous Localization and Mapping) methods typically rely on handcrafted visual features or raw RGB values for establishing correspondences between images. These features, while suitable for sparse mapping, often lead to ambiguous matches at texture-less regions when performing dense reconstruction due to the aperture problem. In this work, we explore the use of learned features for the matching task in dense monocular reconstruction. We propose a novel convolutional neural network (CNN) architecture along with a deeply supervised feature learning scheme for pixel-wise regression of visual descriptors from an image which are best suited for dense monocular SLAM. In particular, our learning scheme minimizes a multi-view matching cost-volume loss with respect to the regressed features at multiple stages within the network, for explicitly learning contextual features that are suitable for dense matching between images captured by a moving monocular camera along the epipolar line. We utilize the learned features from our model for depth estimation inside a real-time dense monocular SLAM framework, where photometric error is replaced by our learned descriptor error. Our evaluation on several challenging indoor scenes demonstrate greatly improved accuracy in dense reconstructions of the well celebrated dense SLAM systems like DTAM, without compromising their real-time performance." ] }
1908.01294
2966008508
A sentence is typically treated as the minimal syntactic unit used for extracting valuable information from a longer piece of text. However, in written Thai, there are no explicit sentence markers. We propose a deep learning model for the task of sentence segmentation that includes three main contributions. First, we integrate n-gram embedding as a local representation to capture word groups near sentence boundaries. Second, to focus on the keywords of dependent clauses, we combine the model with a distant representation obtained from self-attention modules. Finally, due to the scarcity of labeled data, for which annotation is difficult and time-consuming, we also investigate and adapt Cross-View Training (CVT) as a semi-supervised learning technique, allowing us to utilize unlabeled data to improve the model representations. In the Thai sentence segmentation experiments, our model reduced the relative error by 7.4% and 10.5% compared with the baseline models on the Orchid and UGWC datasets, respectively. We also applied our model to the task of punctuation recovery on the IWSLT English dataset. Our model outperformed the prior sequence tagging models, achieving a relative error reduction of 2.5%. Ablation studies revealed that utilizing n-gram representations was the main contributing factor for Thai, while the semi-supervised training helped the most for English.
Focusing only on textual features, prior work follows two main approaches, namely, word sequence tagging and machine translation. In the machine translation approach, punctuation is treated as just another type of token that must be recovered and included in the output. The methods in @cite_23 @cite_11 @cite_24 restore punctuation by translating from unpunctuated text to punctuated text. However, our main task, sentence segmentation, is an upstream task in text processing, unlike punctuation restoration, which is a downstream task. Because it therefore needs to operate rapidly, we focus only on the sequence tagging model, which is less complex than the machine translation model.
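As a small illustration of the two formulations (our own toy example, not drawn from the cited methods), the same unsegmented token stream can be framed either as per-token tagging or as translation into a stream with boundary tokens; the labels and the `<s>` boundary token are hypothetical.

```python
tokens = ["he", "arrived", "late", "she", "had", "already", "left"]

# Sequence tagging: one label per token; "B" marks the first token of a new sentence.
tags = ["B", "I", "I", "B", "I", "I", "I"]

# Machine translation: the model "translates" the unsegmented stream into one that
# additionally contains boundary/punctuation tokens, which is slower to decode.
source = tokens
target = ["he", "arrived", "late", "<s>", "she", "had", "already", "left"]
```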
{ "cite_N": [ "@cite_24", "@cite_23", "@cite_11" ], "mid": [ "1412698887", "2165664509", "2100796029", "2555428947" ], "abstract": [ "A Chinese sentence is represented as a sequence of characters, and words are not separated from each other. In statistical machine translation, the conventional approach is to segment the Chinese character sequence into words during the pre-processing. The training and translation are performed afterwards. However, this method is not optimal for two reasons: 1. The segmentations may be erroneous. 2. For a given character sequence, the best segmentation depends on its context and translation. In order to minimize the translation errors, we take different segmentation alternatives instead of a single segmentation into account and integrate the segmentation process with the search for the best translation. The segmentation decision is only taken during the generation of the translation. With this method we are able to translate Chinese text at the character level. The experiments on the IWSLT 2005 task showed improvements in the translation performance using two translation systems: a phrase-based system and a finite state transducer based system. For the phrase-based system, the improvement of the BLEU score is 1.5 absolute.", "Standard approaches to Chinese word segmentation treat the problem as a tagging task, assigning labels to the characters in the sequence indicating whether the character marks a word boundary. Discriminatively trained models based on local character features are used to make the tagging decisions, with Viterbi decoding finding the highest scoring segmentation. In this paper we propose an alternative, word-based segmentor, which uses features based on complete words and word sequences. The generalized perceptron algorithm is used for discriminative training, and we use a beamsearch decoder. Closed tests on the first and second SIGHAN bakeoffs show that our system is competitive with the best in the literature, achieving the highest reported F-scores for a number of corpora.", "Abstract A system for part-of-speech tagging is described. It is based on a hidden Markov model which can be trained using a corpus of untagged text. Several techniques are introduced to achieve robustness while maintaining high performance. Word equivalence classes are used to reduce the overall number of parameters in the model, alleviating the problem of obtaining reliable estimates for individual words. The context for category prediction is extended selectively via predefined networks, rather than using a uniformly higher-order conditioning which requires exponentially more parameters with increasing context. The networks are embedded in a first-order model and network structure is developed by analysis of erros, and also via linguistic considerations. To compensate for incomplete dictionary coverage, the categories of unknown words are predicted using both local context and suffix information to aid in disambiguation. An evaluation was performed using the Brown corpus and different dictionary arrangements were investigated. The techniques result in a model that correctly tags approximately 96 of the text. The flexibility of the methods is illustrated by their use in a tagging program for French.", "This work presents a general unsupervised learning method to improve the accuracy of sequence to sequence (seq2seq) models. 
In our method, the weights of the encoder and decoder of a seq2seq model are initialized with the pretrained weights of two language models and then fine-tuned with labeled data. We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models. Our main result is that the pretraining accelerates training and improves generalization of seq2seq models, achieving state-of-the-art results on the WMT English->German task, surpassing a range of methods using both phrase-based machine translation and neural machine translation. Our method achieves an improvement of 1.3 BLEU from the previous best models on both WMT'14 and WMT'15 English->German. On summarization, our method beats the supervised learning baseline." ] }
1908.01293
2965124455
Visual localization is the problem of estimating the pose of a camera within a scene and a key component in computer vision applications such as self-driving cars and Mixed Reality. State-of-the-art approaches for accurate visual localization use scene-specific representations, resulting in the overhead of constructing these models when applying the techniques to new scenes. Recently, deep learning approaches based on relative pose estimation have been proposed, carrying the promise of easily adapting to new scenes. However, it has been shown that such approaches are currently significantly less accurate than state-of-the-art approaches. In this paper, we are interested in analyzing this behavior. To this end, we propose a novel framework for visual localization from relative poses. Using a classical feature-based approach within this framework, we show state-of-the-art performance. Replacing the classical approach with learned alternatives at various levels, we then identify the reasons why deep-learned approaches do not perform well. Based on our analysis, we make recommendations for future work.
Machine learning in visual localization. Retrieval methods @cite_62 @cite_39 @cite_3 have benefitted greatly from deep learning architectures. Several works propose to directly learn the 2D-3D matching function @cite_37 @cite_0 @cite_57 @cite_1 @cite_28 @cite_79 @cite_45 . While traditionally a dense 3D scene model is required for training, recent work has shown that similar accuracy can be achieved without the need for a 3D model during training @cite_0 . The main drawback of these approaches, besides having problems scaling to larger scenes @cite_0 @cite_84 , is that the learned models are scene-dependent as they regress 3D coordinates. Recent work has shown the ability to adapt a model trained on one scene to new scenes on-the-fly @cite_57 . Yet, @cite_57 considers the problem of re-localization against a trajectory, while we consider the problem of localization from a single image.
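For context, here is a minimal sketch (with synthetic data, not the cited methods' code) of the geometric back end that such 2D-3D matching approaches typically feed into: camera pose from correspondences via PnP inside RANSAC. The intrinsics, noise-free projections, and thresholds are assumptions.

```python
import numpy as np
import cv2

# Synthetic scene coordinates and their projections under a known ground-truth pose,
# standing in for 2D-3D matches produced by a (learned or classical) matcher.
K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
pts_3d = np.random.uniform(-1.0, 1.0, (100, 3)).astype(np.float32)
pts_3d[:, 2] += 4.0                                   # keep the points in front of the camera
rvec_gt = np.array([0.05, -0.02, 0.01])               # ground-truth rotation (Rodrigues vector)
tvec_gt = np.array([0.10, -0.05, 0.30])
pts_2d, _ = cv2.projectPoints(pts_3d, rvec_gt, tvec_gt, K, np.zeros(5))
pts_2d = pts_2d.reshape(-1, 2).astype(np.float32)

# Robust absolute pose estimation from the 2D-3D matches.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    pts_3d, pts_2d, K, None, reprojectionError=4.0, iterationsCount=200)
R, _ = cv2.Rodrigues(rvec)                            # recovered camera rotation; tvec is the translation
```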
{ "cite_N": [ "@cite_37", "@cite_62", "@cite_28", "@cite_1", "@cite_3", "@cite_39", "@cite_0", "@cite_57", "@cite_79", "@cite_45", "@cite_84" ], "mid": [ "2739492061", "2964175348", "1989379388", "2771385090" ], "abstract": [ "We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods.", "We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods.", "Large scale 3D image localization requires computationally expensive matching between 2D feature points in the query image and a 3D point cloud. In this paper, we present a method to accelerate the matching process and to reduce the memory footprint by analyzing the view-statistics of points in a training corpus. Given a training image set that is representative of common views of a scene, our approach identifies a compact subset of the 3D point cloud for efficient localization, while achieving comparable localization performance to using the full 3D point cloud. 
We demonstrate that the problem can be precisely formulated as a mixed-integer quadratic program and present a pointwise descriptor calibration process to improve matching. We show that our algorithm outperforms the state-of-theart greedy algorithm on standard datasets, on measures of both point-cloud compression and localization accuracy.", "Precise localization of robots is imperative for their safe and autonomous navigation in both indoor and outdoor environments. In outdoor scenarios, the environment typically undergoes significant perceptual changes and requires robust methods for accurate localization. Monocular camera-based approaches provide an inexpensive solution to such challenging problems compared to 3D LiDAR-based methods. Recently, approaches have leveraged deep convolutional neural networks (CNNs) to perform place recognition and they turn out to outperform traditional handcrafted features under challenging perceptual conditions. In this paper, we propose an approach for directly regressing a 6-DoF camera pose using CNNs and a single monocular RGB image. We leverage the idea of transfer learning for training our network as this technique has shown to perform better when the number of training samples are not very high. Furthermore, we propose novel data augmentation in 3D space for additional pose coverage which leads to more accurate localization. In contrast to the traditional visual metric localization approaches, our resulting map size is constant with respect to the database. During localization, our approach has a constant time complexity of O(1) and is independent of the database size and runs in real-time at ∼80 Hz using a single GPU. We show the localization accuracy of our approach on publicly available datasets and that it outperforms CNN-based state-of-the-art methods." ] }
1908.01293
2965124455
Visual localization is the problem of estimating the pose of a camera within a scene and a key component in computer vision applications such as self-driving cars and Mixed Reality. State-of-the-art approaches for accurate visual localization use scene-specific representations, resulting in the overhead of constructing these models when applying the techniques to new scenes. Recently, deep learning approaches based on relative pose estimation have been proposed, carrying the promise of easily adapting to new scenes. However, it has been shown that such approaches are currently significantly less accurate than state-of-the-art approaches. In this paper, we are interested in analyzing this behavior. To this end, we propose a novel framework for visual localization from relative poses. Using a classical feature-based approach within this framework, we show state-of-the-art performance. Replacing the classical approach with learned alternatives at various levels, we then identify the reasons why deep-learned approaches do not perform well. Based on our analysis, we make recommendations for future work.
Learning absolute pose estimation. Recent works have proposed to learn the complete localization pipeline by either modeling localization as a classification problem @cite_18 or learning to directly regress the absolute pose of a query image @cite_14 @cite_76 @cite_22 @cite_54 @cite_20 @cite_91 . These methods typically only require images and their corresponding camera poses as training data and minimize a loss on the predicted camera poses @cite_14 @cite_22 . At the same time, it has been shown that using 2D-3D matches as part of the loss function can lead to more accurate results @cite_22 . As with the approaches described above, the learned representations are scene-dependent and do not generalize.
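To illustrate the kind of training objective such absolute pose regression methods minimize, here is a hedged PyTorch-style sketch; the two-head architecture, quaternion parametrization, and fixed weighting beta are assumptions rather than the exact losses of the cited papers.

```python
import torch
import torch.nn as nn

class PoseHeads(nn.Module):
    """Regresses an absolute pose (translation + unit quaternion) from CNN image features."""
    def __init__(self, feat_dim: int = 2048):
        super().__init__()
        self.fc_t = nn.Linear(feat_dim, 3)      # translation head
        self.fc_q = nn.Linear(feat_dim, 4)      # rotation head (quaternion)

    def forward(self, feats):                   # feats: (B, feat_dim) features from a CNN backbone
        t = self.fc_t(feats)
        q = self.fc_q(feats)
        return t, q / q.norm(dim=-1, keepdim=True)

def pose_loss(t_pred, q_pred, t_gt, q_gt, beta: float = 250.0):
    # Weighted sum of translation and rotation errors on (image, pose) training pairs.
    t_err = (t_pred - t_gt).norm(dim=-1).mean()
    q_err = (q_pred - q_gt).norm(dim=-1).mean()
    return t_err + beta * q_err
```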
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_91", "@cite_54", "@cite_76", "@cite_20" ], "mid": [ "2216595815", "2792747672", "2522940611", "1547236731" ], "abstract": [ "Image-based localization approaches aim to determine the camera pose from which an image was taken. Finding correct 2D-3D correspondences between query image features and 3D points in the scene model becomes harder as the size of the model increases. Current state-of-the-art methods therefore combine elaborate matching schemes with camera pose estimation techniques that are able to handle large fractions of wrong matches. In this work we study the benefits and limitations of spatial verification compared to appearance-based filtering. We propose a voting-based pose estimation strategy that exhibits O(n) complexity in the number of matches and thus facilitates to consider much more matches than previous approaches - whose complexity grows at least quadratically. This new outlier rejection formulation enables us to evaluate pose estimation for 1-to-many matches and to surpass the state-of-the-art. At the same time, we show that using more matches does not automatically lead to a better performance.", "We propose an end-to-end architecture for joint 2D and 3D human pose estimation in natural images. Key to our approach is the generation and scoring of a number of pose proposals per image, which allows us to predict 2D and 3D poses of multiple people simultaneously. Hence, our approach does not require an approximate localization of the humans for initialization. Our Localization-Classification-Regression architecture, named LCR-Net, contains 3 main components: 1) the pose proposal generator that suggests candidate poses at different locations in the image; 2) a classifier that scores the different pose proposals; and 3) a regressor that refines pose proposals both in 2D and 3D. All three stages share the convolutional feature layers and are trained jointly. The final pose estimation is obtained by integrating over neighboring pose hypotheses, which is shown to improve over a standard non maximum suppression algorithm. Our method recovers full-body 2D and 3D poses, hallucinating plausible body parts when the persons are partially occluded or truncated by the image boundary. Our approach significantly outperforms the state of the art in 3D pose estimation on Human3.6M, a controlled environment. Moreover, it shows promising results on real images for both single and multi-person subsets of the MPII 2D pose benchmark.", "Accurately determining the position and orientation from which an image was taken, i.e., computing the camera pose, is a fundamental step in many Computer Vision applications. The pose can be recovered from 2D-3D matches between 2D image positions and points in a 3D model of the scene. Recent advances in Structure-from-Motion allow us to reconstruct large scenes and thus create the need for image-based localization methods that efficiently handle large-scale 3D models while still being effective, i.e., while localizing as many images as possible. This paper presents an approach for large scale image-based localization that is both efficient and effective. At the core of our approach is a novel prioritized matching step that enables us to first consider features more likely to yield 2D-to-3D matches and to terminate the correspondence search as soon as enough matches have been found. Matches initially lost due to quantization are efficiently recovered by integrating 3D-to-2D search. 
We show how visibility information from the reconstruction process can be used to improve the efficiency of our approach. We evaluate the performance of our method through extensive experiments and demonstrate that it offers the best combination of efficiency and effectiveness among current state-of-the-art approaches for localization.", "Real-time and reliable localization is a prerequisite for autonomously performing high-level tasks with micro aerial vehicles(MAVs). Nowadays, most existing methods use vision system for 6DoF pose estimation, which can not work in degraded visual environments. This paper presents an onboard 6DoF pose estimation method for an indoor MAV in challenging GPS-denied degraded visual environments by using a RGB-D camera. In our system, depth images are mainly used for odometry estimation and localization. First, a fast and robust relative pose estimation (6DoF Odometry) method is proposed, which uses the range rate constraint equation and photometric error metric to get the frame-to-frame transform. Then, an absolute pose estimation (6DoF Localization) method is proposed to locate the MAV in a given 3D global map by using a particle filter. The whole localization system can run in real-time on an embedded computer with low CPU usage. We demonstrate the effectiveness of our system in extensive real environments on a customized MAV platform. The experimental results show that our localization system can robustly and accurately locate the robot in various practical challenging environments." ] }
1908.01293
2965124455
Visual localization is the problem of estimating the pose of a camera within a scene and a key component in computer vision applications such as self-driving cars and Mixed Reality. State-of-the-art approaches for accurate visual localization use scene-specific representations, resulting in the overhead of constructing these models when applying the techniques to new scenes. Recently, deep learning approaches based on relative pose estimation have been proposed, carrying the promise of easily adapting to new scenes. However, it has been shown that such approaches are currently significantly less accurate than state-of-the-art approaches. In this paper, we are interested in analyzing this behavior. To this end, we propose a novel framework for visual localization from relative poses. Using a classical feature-based approach within this framework, we show state-of-the-art performance. Replacing the classical approach with learned alternatives at various levels, we then identify the reasons why deep-learned approaches do not perform well. Based on our analysis, we make recommendations for future work.
Learning relative pose estimation. The authors of DeMoN @cite_55 propose a CNN that jointly predicts a depth map for one image and the relative pose to a second image. They require depth maps for training, whereas our approach does not. An unsupervised variant of DeMoN that is trained purely on a stream of images, using image synthesis as a supervisory loss function, is presented in @cite_48 . The method is tested in an autonomous driving scenario that exhibits planar motion, and it is unclear how well this approach would work in the 6DOF setting with larger baselines considered in this paper.
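For comparison with these learned predictors, the classical feature-based alternative referred to in this paper estimates the relative pose from matched keypoints via the essential matrix; a minimal OpenCV sketch follows. The thresholds and helper name are assumptions, and the recovered translation is only a direction, since scale is unobservable from two views.

```python
import cv2

def relative_pose(pts1, pts2, K):
    """pts1, pts2: (N, 2) arrays of matched pixel coordinates; K: 3x3 camera intrinsics."""
    E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    # recoverPose disambiguates the four decompositions of E via cheirality checks;
    # t is returned as a unit-norm translation direction.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
    return R, t
```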
{ "cite_N": [ "@cite_48", "@cite_55" ], "mid": [ "2785512290", "2963906250", "2060977605", "2739492061" ], "abstract": [ "We present a novel approach for unsupervised learning of depth and ego-motion from monocular video. Unsupervised learning removes the need for separate supervisory signals (depth or ego-motion ground truth, or multi-view video). Prior work in unsupervised depth learning uses pixel-wise or gradient-based losses, which only consider pixels in small local neighborhoods. Our main contribution is to explicitly consider the inferred 3D geometry of the scene, enforcing consistency of the estimated 3D point clouds and ego-motion across consecutive frames. This is a challenging task and is solved by a novel (approximate) backpropagation algorithm for aligning 3D structures. We combine this novel 3D-based loss with 2D losses based on photometric quality of frame reconstructions using estimated depth and ego-motion from adjacent frames. We also incorporate validity masks to avoid penalizing areas in which no useful information exists. We test our algorithm on the KITTI dataset and on a video dataset captured on an uncalibrated mobile phone camera. Our proposed approach consistently improves depth estimates on both datasets, and outperforms the state-of-the-art for both depth and ego-motion. Because we only require a simple video, learning depth and ego-motion on large and varied datasets becomes possible. We demonstrate this by training on the low quality uncalibrated video dataset and evaluating on KITTI, ranking among top performing prior methods which are trained on KITTI itself.", "We present a novel approach for unsupervised learning of depth and ego-motion from monocular video. Unsupervised learning removes the need for separate supervisory signals (depth or ego-motion ground truth, or multi-view video). Prior work in unsupervised depth learning uses pixel-wise or gradient-based losses, which only consider pixels in small local neighborhoods. Our main contribution is to explicitly consider the inferred 3D geometry of the whole scene, and enforce consistency of the estimated 3D point clouds and ego-motion across consecutive frames. This is a challenging task and is solved by a novel (approximate) backpropagation algorithm for aligning 3D structures. We combine this novel 3D-based loss with 2D losses based on photometric quality of frame reconstructions using estimated depth and ego-motion from adjacent frames. We also incorporate validity masks to avoid penalizing areas in which no useful information exists. We test our algorithm on the KITTI dataset and on a video dataset captured on an uncalibrated mobile phone camera. Our proposed approach consistently improves depth estimates on both datasets, and outperforms the state-of-the-art for both depth and ego-motion. Because we only require a simple video, learning depth and ego-motion on large and varied datasets becomes possible. We demonstrate this by training on the low quality uncalibrated video dataset and evaluating on KITTI, ranking among top performing prior methods which are trained on KITTI itself.1", "This paper proposes a pose-based algorithm to solve the full Simultaneous Localization And Mapping (SLAM) problem for an Autonomous Underwater Vehicle (AUV), navigating in an unknown and possibly unstructured environment. A probabilistic scan matching technique using range scans gathered from a Mechanical Scanning Imaging Sonar (MSIS) is used together with the robot dead-reckoning displacements. 
The proposed method utilizes two Extended Kalman Filters (EKFs). The first, estimates the local path traveled by the robot while forming the scan as well as its uncertainty, providing position estimates for correcting the distortions that the vehicle motion produces in the acoustic images. The second is an augmented state EKF that estimates and keeps the registered scans poses. The raw data from the sensors are processed and fused in-line. No priory structural information or initial pose are considered. Also, a method of estimating the uncertainty of the scan matching estimation is provided. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.", "We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods." ] }
1908.01007
2966576392
Training deep reinforcement learning agents to perform complex behaviors in 3D virtual environments requires significant computational resources. This is especially true in environments with high degrees of aliasing, where many states share nearly identical visual features. Minecraft is an exemplar of such an environment. We hypothesize that interactive machine learning (IML), wherein human teachers play a direct role in training through demonstrations, critique, or action advice, may alleviate agent susceptibility to aliasing. However, interactive machine learning is only practical when the number of human interactions is limited, requiring a balance between human teacher effort and agent performance. We conduct experiments with two reinforcement learning algorithms that enable human teachers to give action advice, Feedback Arbitration and Newtonian Action Advice, under visual aliasing conditions. To assess the potential cognitive load per advice type, we vary the accuracy and frequency of various human action advice techniques. Training efficiency, robustness against infrequent and inaccurate advisor input, and sensitivity to aliasing are examined.
Reinforcement learning has proven effective at learning agent control policies in 2D and 3D game environments, including Atari @cite_19 , navigation in Minecraft @cite_22 @cite_13 @cite_2 , Doom @cite_16 , Quake II @cite_4 , DOTA 2, and StarCraft II. Most game environments are designed with low perceptual aliasing to make them easier for human players, even though the real world often exhibits high perceptual aliasing. 3D computer game environments thus provide a stepping stone toward applying reinforcement learning in real-world settings @cite_9 @cite_4 but sometimes overlook this aspect of the real world.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_9", "@cite_19", "@cite_2", "@cite_16", "@cite_13" ], "mid": [ "1594297126", "2951557330", "2749807327", "2151210636" ], "abstract": [ "Reinforcement learning is a type of unsupervised learning for sequential decision making. Q-learning is probably the best-understood reinforcement learning algorithm. In Q-learning, the agent learns a mapping from states and actions to their utilities. An important assumption of Q-learning is the Markovian environment assumption, meaning that any information needed to determine the optimal actions is reflected in the agent''s state representation. Consider an agent whose state representation is based solely on its immediate perceptual sensations. When its sensors are not able to make essential distinctions among world states, the Markov assumption is violated, causing a problem called perceptual aliasing. For example, when facing a closed box, an agent based on its current visual sensation cannot act optimally if the optimal action depends on the contents of the box. There are two basic approaches to addressing this problem -- using more sensors or using history to figure out the current world state. This paper studies three connectionist approaches which learn to use history to handle perceptual aliasing: the window-Q, recurrent-Q, and recurrent-model architectures. Empirical study of these architectures is presented. Their relative strengths and weaknesses are also discussed.", "Learning to control an environment without hand-crafted rewards or expert data remains challenging and is at the frontier of reinforcement learning research. We present an unsupervised learning algorithm to train agents to achieve perceptually-specified goals using only a stream of observations and actions. Our agent simultaneously learns a goal-conditioned policy and a goal achievement reward function that measures how similar a state is to the goal state. This dual optimization leads to a co-operative game, giving rise to a learned reward function that reflects similarity in controllable aspects of the environment instead of distance in the space of observations. We demonstrate the efficacy of our agent to learn, in an unsupervised manner, to reach a diverse set of goals on three domains -- Atari, the DeepMind Control Suite and DeepMind Lab.", "This paper introduces SC2LE (StarCraft II Learning Environment), a reinforcement learning environment based on the StarCraft II game. This domain poses a new grand challenge for reinforcement learning, representing a more difficult class of problems than considered in most prior work. It is a multi-agent problem with multiple players interacting; there is imperfect information due to a partially observed map; it has a large action space involving the selection and control of hundreds of units; it has a large state space that must be observed solely from raw input feature planes; and it has delayed credit assignment requiring long-term strategies over thousands of steps. We describe the observation, action, and reward specification for the StarCraft II domain and provide an open source Python-based interface for communicating with the game engine. In addition to the main game maps, we provide a suite of mini-games focusing on different elements of StarCraft II gameplay. For the main game maps, we also provide an accompanying dataset of game replay data from human expert players. 
We give initial baseline results for neural networks trained from this data to predict game outcomes and player actions. Finally, we present initial baseline results for canonical deep reinforcement learning agents applied to the StarCraft II domain. On the mini-games, these agents learn to achieve a level of play that is comparable to a novice player. However, when trained on the main game, these agents are unable to make significant progress. Thus, SC2LE offers a new and challenging environment for exploring deep reinforcement learning algorithms and architectures.", "The combination of modern Reinforcement Learning and Deep Learning approaches holds the promise of making significant progress on challenging applications requiring both rich perception and policy-selection. The Arcade Learning Environment (ALE) provides a set of Atari games that represent a useful benchmark set of such applications. A recent breakthrough in combining model-free reinforcement learning with deep learning, called DQN, achieves the best real-time agents thus far. Planning-based approaches achieve far higher scores than the best model-free approaches, but they exploit information that is not available to human players, and they are orders of magnitude slower than needed for real-time play. Our main goal in this work is to build a better real-time Atari game playing agent than DQN. The central idea is to use the slow planning-based agents to provide training data for a deep-learning architecture capable of real-time play. We proposed new agents based on this idea and show that they outperform DQN." ] }
1908.01007
2966576392
Training deep reinforcement learning agents to perform complex behaviors in 3D virtual environments requires significant computational resources. This is especially true in environments with high degrees of aliasing, where many states share nearly identical visual features. Minecraft is an exemplar of such an environment. We hypothesize that interactive machine learning (IML), wherein human teachers play a direct role in training through demonstrations, critique, or action advice, may alleviate agent susceptibility to aliasing. However, interactive machine learning is only practical when the number of human interactions is limited, requiring a balance between human teacher effort and agent performance. We conduct experiments with two reinforcement learning algorithms that enable human teachers to give action advice, Feedback Arbitration and Newtonian Action Advice, under visual aliasing conditions. To assess the potential cognitive load per advice type, we vary the accuracy and frequency of various human action advice techniques. Training efficiency, robustness against infrequent and inaccurate advisor input, and sensitivity to aliasing are examined.
Human feedback in IML can take different forms. Learning from Demonstration (LfD) allows humans to directly provide examples of proper behavior @cite_0 . From demonstrations, an agent can learn the policy directly, learn to explore more effectively @cite_18 , or learn a reward function from which to reconstruct a policy @cite_14 . However, it is not always feasible to provide demonstrations. Learning from Critique (LfC) allows human teachers to indicate whether the agent is doing well in order to bias the agent toward certain outcomes. Learning from Critique can also include humans indicating preferences over variations in agent behavior @cite_21 . Learning from Advice (LfA) is similar to Learning from Critique, except that the human teacher proactively advises the agent on the actions it should take instead of retroactively rewarding or punishing the agent for an action it already took. Since studies show that non-expert human teachers prefer giving action advice over critique @cite_11 , we focus on Learning from Advice.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_21", "@cite_0", "@cite_11" ], "mid": [ "2549514639", "2294422333", "2116157560", "1539975474" ], "abstract": [ "In order for robots to learn from people with no machine learning expertise, robots should learn from natural human instruction. Most machine learning techniques that incorporate explanations require people to use a limited vocabulary and provide state information, even if it is not intuitive. This paper discusses a software agent that learned to play the Mario Bros. game using explanations. Our goals to improve learning from explanations were twofold: 1) to filter explanations into advice and warnings and 2) to learn policies from sentences without state information. We used sentiment analysis to filter explanations into advice of what to do and warnings of what to avoid. We developed object-focused advice to represent what actions the agent should take when dealing with objects. A reinforcement learning agent used object-focused advice to learn policies that maximized its reward. After mitigating false negatives, using sentiment as a filter was approximately 85 accurate. object-focused advice performed better than when no advice was given, the agent learned where to apply the advice, and the agent could recover from adversarial advice. We also found the method of interaction should be designed to ease the cognitive load of the human teacher or the advice may be of poor quality.", "As computational agents are increasingly used beyond research labs, their success will depend on their ability to learn new skills and adapt to their dynamic, complex environments. If human users---without programming skills---can transfer their task knowledge to agents, learning can accelerate dramatically, reducing costly trials. The tamer framework guides the design of agents whose behavior can be shaped through signals of approval and disapproval, a natural form of human feedback. More recently, tamer+rl was introduced to enable human feedback to augment a traditional reinforcement learning (RL) agent that learns from a Markov decision process's (MDP) reward signal. We address limitations of prior work on tamer and tamer+rl, contributing in two critical directions. First, the four successful techniques for combining human reward with RL from prior tamer+rl work are tested on a second task, and these techniques' sensitivities to parameter changes are analyzed. Together, these examinations yield more general and prescriptive conclusions to guide others who wish to incorporate human knowledge into an RL algorithm. Second, tamer+rl has thus far been limited to a sequential setting, in which training occurs before learning from MDP reward. In this paper, we introduce a novel algorithm that shares the same spirit as tamer+rl but learns simultaneously from both reward sources, enabling the human feedback to come at any time during the reinforcement learning process. We call this algorithm simultaneous tamer+rl. To enable simultaneous learning, we introduce a new technique that appropriately determines the magnitude of the human model's influence on the RL algorithm throughout time and state-action space.", "As learning agents move from research labs to the real world, it is increasingly important that human users, including those without programming skills, be able to teach agents desired behaviors. 
Recently, the tamer framework was introduced for designing agents that can be interactively shaped by human trainers who give only positive and negative feedback signals. Past work on tamer showed that shaping can greatly reduce the sample complexity required to learn a good policy, can enable lay users to teach agents the behaviors they desire, and can allow agents to learn within a Markov Decision Process (MDP) in the absence of a coded reward function. However, tamer does not allow this human training to be combined with autonomous learning based on such a coded reward function. This paper leverages the fast learning exhibited within the tamer framework to hasten a reinforcement learning (RL) algorithm's climb up the learning curve, effectively demonstrating that human reinforcement and MDP reward can be used in conjunction with one another by an autonomous agent. We tested eight plausible tamer+rl methods for combining a previously learned human reinforcement function, H, with MDP reward in a reinforcement learning algorithm. This paper identifies which of these methods are most effective and analyzes their strengths and weaknesses. Results from these tamer+rl algorithms indicate better final performance and better cumulative performance than either a tamer agent or an RL agent alone.", "We consider the problem of incorporating end-user advice into reinforcement learning (RL). In our setting, the learner alternates between practicing, where learning is based on actual world experience, and end-user critique sessions where advice is gathered. During each critique session the end-user is allowed to analyze a trajectory of the current policy and then label an arbitrary subset of the available actions as good or bad. Our main contribution is an approach for integrating all of the information gathered during practice and critiques in order to effectively optimize a parametric policy. The approach optimizes a loss function that linearly combines losses measured against the world experience and the critique data. We evaluate our approach using a prototype system for teaching tactical battle behavior in a real-time strategy game engine. Results are given for a significant evaluation involving ten end-users showing the promise of this approach and also highlighting challenges involved in inserting end-users into the RL loop." ] }
1908.01007
2966576392
Training deep reinforcement learning agents to perform complex behaviors in 3D virtual environments requires significant computational resources. This is especially true in environments with high degrees of aliasing, where many states share nearly identical visual features. Minecraft is an exemplar of such an environment. We hypothesize that interactive machine learning (IML), wherein human teachers play a direct role in training through demonstrations, critique, or action advice, may alleviate agent susceptibility to aliasing. However, interactive machine learning is only practical when the number of human interactions is limited, requiring a balance between human teacher effort and agent performance. We conduct experiments with two reinforcement learning algorithms that enable human teachers to give action advice, Feedback Arbitration and Newtonian Action Advice, under visual aliasing conditions. To assess the potential cognitive load per advice type, we vary the accuracy and frequency of various human action advice techniques. Training efficiency, robustness against infrequent and inaccurate advisor input, and sensitivity to aliasing are examined.
We build upon a rich body of research exploring human feedback in reinforcement learning @cite_18 @cite_11 @cite_3 @cite_6 @cite_10 @cite_24 . The majority of this work evaluates the algorithms in 2D grid-world environments where the agent's @math location is given as a feature. Some works address 3D environments @cite_17 , more simplistic 3D environments @cite_7 , or augment the agent with non-visual state information @cite_12 . Overall, there are still open questions about how well these algorithms scale to 3D environments.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_17", "@cite_6", "@cite_3", "@cite_24", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2950462959", "2807340089", "2891076394", "1556824961" ], "abstract": [ "Reinforcement Learning algorithms can learn complex behavioral patterns for sequential decision making tasks wherein an agent interacts with an environment and acquires feedback in the form of rewards sampled from it. Traditionally, such algorithms make decisions, i.e., select actions to execute, at every single time step of the agent-environment interactions. In this paper, we propose a novel framework, Fine Grained Action Repetition (FiGAR), which enables the agent to decide the action as well as the time scale of repeating it. FiGAR can be used for improving any Deep Reinforcement Learning algorithm which maintains an explicit policy estimate by enabling temporal abstractions in the action space. We empirically demonstrate the efficacy of our framework by showing performance improvements on top of three policy search algorithms in different domains: Asynchronous Advantage Actor Critic in the Atari 2600 domain, Trust Region Policy Optimization in Mujoco domain and Deep Deterministic Policy Gradients in the TORCS car racing domain.", "We introduce an approach for deep reinforcement learning (RL) that improves upon the efficiency, generalization capacity, and interpretability of conventional approaches through structured perception and relational reasoning. It uses self-attention to iteratively reason about the relations between entities in a scene and to guide a model-free policy. Our results show that in a novel navigation and planning task called Box-World, our agent finds interpretable solutions that improve upon baselines in terms of sample complexity, ability to generalize to more complex scenes than experienced during training, and overall performance. In the StarCraft II Learning Environment, our agent achieves state-of-the-art performance on six mini-games -- surpassing human grandmaster performance on four. By considering architectural inductive biases, our work opens new directions for overcoming important, but stubborn, challenges in deep RL.", "The reinforcement learning (RL) community has made great strides in designing algorithms capable of exceeding human performance on specific tasks. These algorithms are mostly trained one task at the time, each new task requiring to train a brand new agent instance. This means the learning algorithm is general, but each solution is not; each agent can only solve the one task it was trained on. In this work, we study the problem of learning to master not one but multiple sequentialdecision tasks at once. A general issue in multi-task learning is that a balance must be found between the needs of multiple tasks competing for the limited resources of a single learning system. Many learning algorithms can get distracted by certain tasks in the set of tasks to solve. Such tasks appear more salient to the learning process, for instance because of the density or magnitude of the in-task rewards. This causes the algorithm to focus on those salient tasks at the expense of generality. We propose to automatically adapt the contribution of each task to the agent’s updates, so that all tasks have a similar impact on the learning dynamics. This resulted in state of the art performance on learning to play all games in a set of 57 diverse Atari games. 
Excitingly, our method learned a single trained policy - with a single set of weights - that exceeds median human performance. To our knowledge, this was the first time a single agent surpassed human-level performance on this multi-task domain. The same approach also demonstrated state of the art performance on a set of 30 tasks in the 3D reinforcement learning platform DeepMind Lab.", "Temporally extended actions (e.g., macro actions) have proven very useful for speeding up learning, ensuring robustness and building prior knowledge into AI systems. The options framework (Precup, 2000; Sutton, Precup & Singh, 1999) provides a natural way of incorporating such actions into reinforcement learning systems, but leaves open the issue of howgood options might be identified. In this paper, we empirically explore a simple approach to creating options. The underlying assumption is that the agent will be asked to perform different goalachievement tasks in an environment that is otherwise the same over time. Our approach is based on the intuition that states that are frequently visited on system trajectories, could prove to be useful subgoals (e.g., McGovern & Barto, 2001; Iba, 1989).We propose a greedy algorithm for identifying subgoals based on state visitation counts. We present empirical studies of this approach in two gridworld navigation tasks. One of the environments we explored contains bottleneck states, and the algorithm indeed finds these states, as expected. The second environment is an empty gridworld with no obstacles. Although the environment does not contain any obvious subgoals, our approach still finds useful options, which essentially allow the agent to explore the environment more quickly." ] }
1908.01007
2966576392
Training deep reinforcement learning agents to perform complex behaviors in 3D virtual environments requires significant computational resources. This is especially true in environments with high degrees of aliasing, where many states share nearly identical visual features. Minecraft is an exemplar of such an environment. We hypothesize that interactive machine learning (IML), wherein human teachers play a direct role in training through demonstrations, critique, or action advice, may alleviate agent susceptibility to aliasing. However, interactive machine learning is only practical when the number of human interactions is limited, requiring a balance between human teacher effort and agent performance. We conduct experiments with two reinforcement learning algorithms that enable human teachers to give action advice, Feedback Arbitration and Newtonian Action Advice, under visual aliasing conditions. To assess the potential cognitive load per advice type, we vary the accuracy and frequency of various human action advice techniques. Training efficiency, robustness against infrequent and inaccurate advisor input, and sensitivity to aliasing are examined.
We build on two LfA algorithms in our exploration of perceptual aliasing in 3D virtual environments. The first, Feedback Arbitration @cite_7 , is based on a standard deep reinforcement learning technique called Deep @math Networks (DQN). Deep @math Networks @cite_8 use a convolutional neural network to learn visual features of the state that correspond to the utility of different actions---called a @math -value. A Feedback Arbitration agent alternates between exploring the environment, exploiting its @math network, and listening to a human teacher, depending on (a) its confidence in its own @math network and (b) its learned confidence in the human trainer. Feedback Arbitration was tested in a very simple 3D grid-world environment with landmark features that helped lower perceptual aliasing.
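A rough sketch of the arbitration idea follows; the concrete confidence measures and thresholds below are our own assumptions for illustration, not the exact rules of the cited Feedback Arbitration algorithm.

```python
from typing import Optional
import numpy as np

def select_action(q_values: np.ndarray, advice: Optional[int],
                  advisor_reliability: float,
                  q_gap_threshold: float = 0.5,
                  advisor_threshold: float = 0.6) -> int:
    """Defer to human advice only while the Q-network is unsure and the advisor seems reliable."""
    top_two = np.sort(q_values)[-2:]
    agent_confidence = float(top_two[1] - top_two[0])    # crude proxy: gap between best two Q-values
    if (advice is not None
            and agent_confidence < q_gap_threshold
            and advisor_reliability > advisor_threshold):
        return advice                                    # listen to the human teacher
    return int(np.argmax(q_values))                      # otherwise exploit the Q-network
```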
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "2785325870", "2962742544", "2949191055", "2180092181" ], "abstract": [ "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL .", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. 
We get similar striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification.", "We present a method for performing hierarchical object detection in images guided by a deep reinforcement learning agent. The key idea is to focus on those parts of the image that contain richer information and zoom on them. We train an intelligent agent that, given an image window, is capable of deciding where to focus the attention among five different predefined region candidates (smaller windows). This procedure is iterated providing a hierarchical image analysis.We compare two different candidate proposal strategies to guide the object search: with and without overlap. Moreover, our work compares two different strategies to extract features from a convolutional neural network for each region proposal: a first one that computes new feature maps for each region proposal, and a second one that computes the feature maps for the whole image to later generate crops for each region proposal. Experiments indicate better results for the overlapping candidate proposal strategy and a loss of performance for the cropped image features due to the loss of spatial resolution. We argue that, while this loss seems unavoidable when working with large amounts of object candidates, the much more reduced amount of region proposals generated by our reinforcement learning agent allows considering to extract features for each location without sharing convolutional computation among regions.", "We propose an approach to learn spatio-temporal features in videos from intermediate visual representations we call \"percepts\" using Gated-Recurrent-Unit Recurrent Networks (GRUs).Our method relies on percepts that are extracted from all level of a deep convolutional network trained on the large ImageNet dataset. While high-level percepts contain highly discriminative information, they tend to have a low-spatial resolution. Low-level percepts, on the other hand, preserve a higher spatial resolution from which we can model finer motion patterns. Using low-level percepts can leads to high-dimensionality video representations. To mitigate this effect and control the model number of parameters, we introduce a variant of the GRU model that leverages the convolution operations to enforce sparse connectivity of the model units and share parameters across the input spatial locations. We empirically validate our approach on both Human Action Recognition and Video Captioning tasks. In particular, we achieve results equivalent to state-of-art on the YouTube2Text dataset using a simpler text-decoder model and without extra 3D CNN features." ] }
1908.01007
2966576392
Training deep reinforcement learning agents to perform complex behaviors in 3D virtual environments requires significant computational resources. This is especially true in environments with high degrees of aliasing, where many states share nearly identical visual features. Minecraft is an exemplar of such an environment. We hypothesize that interactive machine learning (IML), wherein human teachers play a direct role in training through demonstrations, critique, or action advice, may alleviate agent susceptibility to aliasing. However, interactive machine learning is only practical when the number of human interactions is limited, requiring a balance between human teacher effort and agent performance. We conduct experiments with two reinforcement learning algorithms that enable human teachers to give action advice, Feedback Arbitration and Newtonian Action Advice, under visual aliasing conditions. To assess the potential cognitive load of each advice type, we vary the accuracy and frequency of the human action advice. Training efficiency, robustness against infrequent and inaccurate advisor input, and sensitivity to aliasing are examined.
The second, the Newtonian Action Advice algorithm @cite_11 , is built on tabular @math -learning, where a table of @math -values is stored for every state and action. Its authors found that human trainers perceived a Newtonian Action Advice agent as more intelligent, more transparent, more performant and less frustrating than a standard @math -learning agent. The standard @math -learning algorithm is enhanced so that, when a human trainer provides action advice, the agent takes the advice depending on its confidence in its @math -table. If no new advice is given, the agent continues to follow the last received advice for @math additional timesteps. NAA has no neural component and requires an enumerable state space, so it is not directly scalable to 3D environments. We extend this approach to work with a DQN instead of a @math -table, allowing it to operate in 3D graphical environments.
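To make the advice mechanism concrete, the following is a minimal, hypothetical Python sketch of tabular Q-learning with NAA-style advice persistence; all names (NAAAgent, persistence, epsilon) are illustrative, this is not the cited implementation, and the confidence-based gating over the Q-table described above is omitted for brevity.

```python
# Hypothetical sketch of NAA-style action advice on top of tabular Q-learning.
import random
from collections import defaultdict

class NAAAgent:
    def __init__(self, actions, persistence=5, epsilon=0.1):
        self.q_table = defaultdict(lambda: {a: 0.0 for a in actions})
        self.actions = actions
        self.persistence = persistence      # keep following advice for this many steps
        self.epsilon = epsilon
        self.last_advice = None
        self.advice_steps_left = 0

    def act(self, state, advice=None):
        if advice is not None:              # fresh human advice: follow and remember it
            self.last_advice = advice
            self.advice_steps_left = self.persistence
            return advice
        if self.last_advice is not None and self.advice_steps_left > 0:
            self.advice_steps_left -= 1     # no new advice: keep following the last one
            return self.last_advice
        if random.random() < self.epsilon:  # otherwise fall back to epsilon-greedy
            return random.choice(self.actions)
        q = self.q_table[state]
        return max(q, key=q.get)

    def update(self, s, a, r, s_next, alpha=0.1, gamma=0.99):
        # Standard Q-learning update; replacing q_table with a DQN gives the
        # extension to 3D environments discussed above.
        td_target = r + gamma * max(self.q_table[s_next].values())
        self.q_table[s][a] += alpha * (td_target - self.q_table[s][a])
```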
{ "cite_N": [ "@cite_11" ], "mid": [ "2963571918", "2549514639", "1554120378", "2963771109" ], "abstract": [ "A goal of Interactive Machine Learning is to enable people without specialized training to teach agents how to perform tasks. Many of the existing algorithms that learn from human instructions are evaluated using simulated feedback and focus on how quickly the agent learns. While this is valuable information, it ignores important aspects of the human-agent interaction such as frustration. To correct this, we propose a method for the design and verification of interactive algorithms that includes a human-subject study that measures the human's experience working with the agent. In this paper, we present Newtonian Action Advice, a method of incorporating human verbal action advice with Reinforcement Learning in a way that improves the human-agent interaction. In addition to simulations, we validated the Newtonian Action Advice algorithm by conducting a human-subject experiment. The results show that Newtonian Action Advice can perform better than Policy Shaping, a state-of-the-art IML algorithm, both in terms of RL metrics like cumulative reward and human factors metrics like frustration.", "In order for robots to learn from people with no machine learning expertise, robots should learn from natural human instruction. Most machine learning techniques that incorporate explanations require people to use a limited vocabulary and provide state information, even if it is not intuitive. This paper discusses a software agent that learned to play the Mario Bros. game using explanations. Our goals to improve learning from explanations were twofold: 1) to filter explanations into advice and warnings and 2) to learn policies from sentences without state information. We used sentiment analysis to filter explanations into advice of what to do and warnings of what to avoid. We developed object-focused advice to represent what actions the agent should take when dealing with objects. A reinforcement learning agent used object-focused advice to learn policies that maximized its reward. After mitigating false negatives, using sentiment as a filter was approximately 85 accurate. object-focused advice performed better than when no advice was given, the agent learned where to apply the advice, and the agent could recover from adversarial advice. We also found the method of interaction should be designed to ease the cognitive load of the human teacher or the advice may be of poor quality.", "Learning to act optimally in a complex, dynamic and noisy environment is a hard problem. Various threads of research from reinforcement learning, animal conditioning, operations research, machine learning, statistics and optimal control are beginning to come together to offer solutions to this problem. I present a thesis in which novel algorithms are presented for learning the dynamics, learning the value function, and selecting good actions for Markov decision processes. The problems considered have high-dimensional factored state and action spaces, and are either fully or partially observable. The approach I take is to recognize similarities between the problems being solved in the reinforcement learning and graphical models literature, and to use and combine techniques from the two fields in novel ways. In particular I present two new algorithms. First, the DBN algorithm learns a compact representation of the core process of a partially observable MDP. 
Because inference in the DBN is intractable, I use approximate inference to maintain the belief state. A belief state action-value function is learned using reinforcement learning. I show that this DBN algorithm can solve POMDPs with very large state spaces and useful hidden state. Second, the PoE algorithm learns an approximation to value functions over large factored state-action spaces. The algorithm approximates values as (negative) free energies in a product of experts model. The model parameters can be learned efficiently because inference is tractable in a product of experts. I show that good actions can be found even in large factored action spaces by the use of brief Gibbs sampling. These two new algorithms take techniques from the machine learning community and apply them in new ways to reinforcement learning problems. Simulation results show that these new methods can be used to solve very large problems. The DBN method is used to solve a POMDP with a hidden state space and an observation space of size greater than 2180. The DBN model of the core process has 232 states represented as 32 binary variables. The PoE method is used to find actions in action spaces of size 240 .", "Learning how to act when there are many available actions in each state is a challenging task for Reinforcement Learning (RL) agents, especially when many of the actions are redundant or irrelevant. In such cases, it is easier to learn which actions not to take. In this work, we propose the Action-Elimination Deep Q-Network (AE-DQN) architecture that combines a Deep RL algorithm with an Action Elimination Network (AEN) that eliminates sub-optimal actions. The AEN is trained to predict invalid actions, supervised by an external elimination signal provided by the environment. Simulations demonstrate a considerable speedup and added robustness over vanilla DQN in text-based games with over a thousand discrete actions." ] }
1908.01010
2965781583
Aleatoric uncertainty is an intrinsic property of ill-posed inverse and imaging problems. Its quantification is vital for assessing the reliability of relevant point estimates. In this paper, we propose an efficient framework for quantifying aleatoric uncertainty for deep residual learning and showcase its significant potential on image restoration. In the framework, we divide the conditional probability modeling for the residual variable into a deterministic homo-dimensional level, a stochastic low-dimensional level and a merging level. The low-dimensionality is especially suitable for sparse correlation between image pixels, enables efficient sampling for high dimensional problems and acts as a regularizer for the distribution. Preliminary numerical experiments show that the proposed method can give not only state-of-the-art point estimates of image restoration but also useful associated uncertainty information.
Random generators are an inherent part of deep generative models, e.g., VAEs and GANs. When the generative distribution is unconditional, samples are generated directly in a latent space and transformed into the target space. In conditional cases, the latent distribution is conditioned on the input variable, which for image restoration lives in a high-dimensional space. The work @cite_27 also pushes the stochasticity into a low-dimensional space but connects the input directly with the low-dimensional samples. This is equivalent to reducing the model capacity at the homo-dimensional level and thus increases the computational cost of drawing the massive number of samples needed at the merging level. The idea of the multi-level random generator in our framework originates from @cite_9 , where the authors combined it with a U-Net @cite_59 to generate segmentation logits and attributed the efficiency to involving only a small part of the network in repetitive sampling. In this work, we analyze the idea further, use it for residual learning, and pinpoint the structural regularization it provides for distribution modeling.
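As an illustration of the three-level split described above, here is a small PyTorch sketch in which a deterministic trunk runs once per image, a low-dimensional Gaussian is sampled repeatedly, and only a cheap merging head is re-executed per sample; the module names and sizes are assumptions for illustration, not the actual architecture.

```python
# Illustrative three-level conditional generator: deterministic homo-dimensional
# trunk, stochastic low-dimensional branch, and a cheap merging head.
import torch
import torch.nn as nn

class ThreeLevelGenerator(nn.Module):
    def __init__(self, in_ch=3, feat_ch=32, z_dim=6):
        super().__init__()
        # Deterministic, homo-dimensional level: run once per input image.
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        # Stochastic, low-dimensional level: predicts a diagonal Gaussian.
        self.latent_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_ch, 2 * z_dim)
        )
        # Merging level: combines features with a broadcast latent sample.
        self.merge = nn.Conv2d(feat_ch + z_dim, in_ch, 1)

    def forward(self, x, n_samples=4):
        feats = self.trunk(x)                           # run once
        mu, logvar = self.latent_head(feats).chunk(2, dim=1)
        outs = []
        for _ in range(n_samples):                      # cheap repetitive sampling
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            z_map = z[:, :, None, None].expand(-1, -1, *feats.shape[2:])
            outs.append(self.merge(torch.cat([feats, z_map], dim=1)))
        return torch.stack(outs, dim=1), mu, logvar     # residual samples
```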
{ "cite_N": [ "@cite_9", "@cite_27", "@cite_59" ], "mid": [ "2787223504", "2767375635", "2523469089", "2964268978" ], "abstract": [ "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "Generative models such as Variational Auto Encoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian. After a trained model is obtained, one can sample the Generator in various forms for exploration and understanding, such as interpolating between two samples, sampling in the vicinity of a sample or exploring differences between a pair of samples applied to a third sample. In this paper, we show that the latent space operations used in the literature so far induce a distribution mismatch between the resulting outputs and the prior distribution the model was trained on. To address this, we propose to use distribution matching transport maps to ensure that such latent space operations preserve the prior distribution, while minimally modifying the original operation. Our experimental results validate that the proposed operations give higher quality samples compared to the original operations.", "As a new way of training generative models, Generative Adversarial Nets (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. 
A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is non-trivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.", "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines." ] }
1908.01010
2965781583
Aleatoric uncertainty is an intrinsic property of ill-posed inverse and imaging problems. Its quantification is vital for assessing the reliability of relevant point estimates. In this paper, we propose an efficient framework for quantifying aleatoric uncertainty for deep residual learning and showcase its significant potential on image restoration. In the framework, we divide the conditional probability modeling for the residual variable into a deterministic homo-dimensional level, a stochastic low-dimensional level and a merging level. The low-dimensionality is especially suitable for sparse correlation between image pixels, enables efficient sampling for high dimensional problems and acts as a regularizer for the distribution. Preliminary numerical experiments show that the proposed method can give not only state-of-the-art point estimates of image restoration but also useful associated uncertainty information.
In many VAE and CVAE applications and their variants, a multiplier @math on the KL term is often introduced. The work @cite_54 identified the KL-vanishing problem and proposed a sigmoid annealing scheme, while @cite_37 suggested a linear annealing scheme. By annealing the multiplier @math , the objective function converges to the original one. The multiplier @math can also be introduced as a Lagrange multiplier in a relaxed version of the objective function @cite_49 @cite_55 . Our framework naturally allows a hyperparameter @math alongside the original loss and interprets it as a presumed variance, without compromising the original interpretation of variational inference.
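For concreteness, a minimal sketch of an annealed KL multiplier is shown below; the exact schedule shapes and the warmup_steps constant are illustrative assumptions rather than the settings used in the cited works.

```python
# Minimal sketch of a beta-annealed VAE objective.
import math

def beta_schedule(step, warmup_steps=10000, mode="sigmoid"):
    """Anneal the KL multiplier from ~0 towards 1."""
    if mode == "linear":
        return min(1.0, step / warmup_steps)
    # Sigmoid annealing: slow start, then a smooth ramp towards 1.
    return 1.0 / (1.0 + math.exp(-10.0 * (step / warmup_steps - 0.5)))

def vae_loss(recon_nll, kl, step):
    beta = beta_schedule(step)
    return recon_nll + beta * kl   # converges to the original ELBO as beta -> 1
```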
{ "cite_N": [ "@cite_55", "@cite_54", "@cite_37", "@cite_49" ], "mid": [ "1961231572", "2962900302", "1976392590", "2952038943" ], "abstract": [ "We study the approximability of multiway partitioning problems, examples of which include Multiway Cut, Node-weighted Multiway Cut, and Hypergraph Multiway Cut. We investigate these problems from the point of view of two possible generalizations: as Min-CSPs, and as Submodular Multiway Partition problems. These two generalizations lead to two natural relaxations that we call respectively the Local Distribution LP, and the Lovasz relaxation. The Local Distribution LP is generally stronger than the Lovasz relaxation, but applicable only to Min-CSP with predicates of constant size. The relaxations coincide in some cases such as Multiway Cut where they are both equivalent to the CKR relaxation. We show that the Lovasz relaxation gives a (2--2 k)-approximation for Submodular Multiway Partition with k terminals, improving a recent 2-approximation [2]. We prove that this factor is optimal in two senses: (1) A (2--2 k -- e)-approximation for Submodular Multiway Partition with k terminals would require exponentially many value queries (in the oracle model), or imply NP = RP (for certain explicit submodular functions). (2) For Hypergraph Multiway Cut and Node-weighted Multiway Cut with k terminals, both special cases of Submodular Multiway Partition, we prove that a (2--2 k -- e)-approximation is NP-hard, assuming the Unique Games Conjecture. Both our hardness results are more general: (1) We show that the notion of symmetry gap, previously used for submodular maximization problems [19, 6], also implies hardness results for submodular minimization problems. (2) Assuming the Unique Games Conjecture, we show that the Local Distribution LP gives an optimal approximation for every Min-CSP that includes the Not-Equal predicate. Finally, we connect the two hardness techniques by proving that the integrality gap of the Local Distribution LP coincides with the symmetry gap of the multilinear relaxation (for a related instance). This shows that the appearance of the same hardness threshold for a Min-CSP and the related submodular minimization problem is not a coincidence.", "We propose in this paper a novel approach to tackle the problem of mode collapse encountered in generative adversarial network (GAN). Our idea is intuitive but proven to be very effective, especially in addressing some key limitations of GAN. In essence, it combines the Kullback-Leibler (KL) and reverse KL divergences into a unified objective function, thus it exploits the complementary statistical properties from these divergences to effectively diversify the estimated density in capturing multi-modes. We term our method dual discriminator generative adversarial nets (D2GAN) which, unlike GAN, has two discriminators; and together with a generator, it also has the analogy of a minimax game, wherein a discriminator rewards high scores for samples from data distribution whilst another discriminator, conversely, favoring data from the generator, and the generator produces data to fool both two discriminators. We develop theoretical analysis to show that, given the maximal discriminators, optimizing the generator of D2GAN reduces to minimizing both KL and reverse KL divergences between data distribution and the distribution induced from the data generated by the generator, hence effectively avoiding the mode collapsing problem. 
We conduct extensive experiments on synthetic and real-world large-scale datasets (MNIST, CIFAR-10, STL-10, ImageNet), where we have made our best effort to compare our D2GAN with the latest state-of-the-art GAN's variants in comprehensive qualitative and quantitative evaluations. The experimental results demonstrate the competitive and superior performance of our approach in generating good quality and diverse samples over baselines, and the capability of our method to scale up to ImageNet database.", "Multipolynomial resultants provide the most efficient methods known (in terms as asymptoticcomplexity) for solving certain systems of polynomial equations or eliminating variables (, 1988). The resultant of f\"1, ..., f\"n in K[x\"1,...,x\"m] will be a polynomial in m-n+1 variables which is zero when the system f\"1=0 has a solution in ^m ( the algebraic closure of K). Thus the resultant defines a projection operator from ^m to ^(^m^-^n^+^1^). However, resultants are only exact conditions for homogeneous systems, and in the affine case just mentioned, the resultant may be zero even if the system has no affine solution. This is most serious when the solution set of the system of polynomials has ''excess components'' (components of dimension >m-n), which may not even be affine, since these cause the resultant to vanish identically. In this paper we describe a projection operator which is not identically zero, but which is guaranteed to vanish on all the proper (dimension=m-n) components of the system f\"i=0. Thus it fills the role of a general affine projection operator or variable elimination ''black box'' which can be used for arbitrary polynomial systems. The construction is based on a generalisation of the characteristic polynomial of a linear system to polynomial systems. As a corollary, we give a single-exponential time method for finding all the isolated solution points of a system of polynomials, even in the presence of infinitely many solutions, at infinity or elsewhere.", "Let x be a random vector coming from any k-wise independent distribution over -1,1 ^n. For an n-variate degree-2 polynomial p, we prove that E[sgn(p(x))] is determined up to an additive epsilon for k = poly(1 epsilon). This answers an open question of (FOCS 2009). Using standard constructions of k-wise independent distributions, we obtain a broad class of explicit generators that epsilon-fool the class of degree-2 threshold functions with seed length log(n)*poly(1 epsilon). Our approach is quite robust: it easily extends to yield that the intersection of any constant number of degree-2 threshold functions is epsilon-fooled by poly(1 epsilon)-wise independence. Our results also hold if the entries of x are k-wise independent standard normals, implying for example that bounded independence derandomizes the Goemans-Williamson hyperplane rounding scheme. To achieve our results, we introduce a technique we dub multivariate FT-mollification, a generalization of the univariate form introduced by (SODA 2010) in the context of streaming algorithms. Along the way we prove a generalized hypercontractive inequality for quadratic forms which takes the operator norm of the associated matrix into account. These techniques may be of independent interest." ] }
1908.00953
2966834907
Most current active queue management (AQM) designs have major issues, including the severe difficulty of tuning them for the highly fluctuating bandwidth of cellular access links. Consequently, most cellular network providers either give up using AQMs or use conservative offline configurations for them. However, these choices will significantly impact the performance of emerging interactive and highly delay-sensitive applications such as virtual reality and vehicle-to-vehicle communications. Therefore, in this paper, we investigate the problems of existing AQM schemes and show that they are not suitable options for supporting ultra-low latency applications in a highly dynamic network such as current and future cellular networks. Moreover, we believe that achieving good performance does not necessarily require complex drop-rate calculation algorithms or complicated AQM techniques. Consequently, we propose BoDe, an extremely simple and deployment-friendly AQM scheme that bounds the queuing delay of served packets and supports ultra-low latency applications. We have evaluated BoDe in extensive trace-based evaluations using cellular traces from 3 different service providers in the US and compared its performance with state-of-the-art AQM designs including CoDel and PIE under a variety of streaming applications, video conferencing applications, and various recently proposed TCP protocols. Results show that despite BoDe's simple design, it outperforms other schemes and achieves significantly lower queuing delay in all tested scenarios.
Explicit congestion notification (ECN) is another technique for managing buffers indirectly. ECN-based approaches use modified switches to tag certain packets and explicitly notify senders about congestion in the network; senders then adjust their sending rates to reduce queue occupancy and congestion. ABC @cite_19 is a recent example of an ECN-based approach proposed for cellular networks. However, there are two key issues with all ECN-based approaches. First, to employ these approaches in practice, the application (or network stack) at the client and server must be modified to process the ECN bits set by the switches and change the sending rate accordingly. For TCP-based applications this means a kernel patch is required at all sources and destinations. The problem becomes worse for UDP-based applications (such as Skype, QUIC @cite_28 , etc.) that manage congestion in the application layer: every such application would need to be modified to use the ECN bits, which is not a deployment-friendly solution. The second issue is that network service providers are usually not interested in exposing the network state (queue occupancy, queuing delay, etc.) required by ECN-based schemes to end users.
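To make the required sender-side modification concrete, the snippet below sketches a DCTCP-style reaction to ECN marks; it is not the ABC algorithm of @cite_19 , and the gain and window constants are illustrative assumptions.

```python
# Hedged sketch of a DCTCP-style sender reaction to ECN marks, shown only to
# illustrate why the sender's stack must be modified for ECN-based AQM.
class EcnSender:
    def __init__(self, cwnd=10.0, g=1.0 / 16):
        self.cwnd = cwnd      # congestion window in packets
        self.alpha = 0.0      # running estimate of the fraction of marked packets
        self.g = g            # EWMA gain

    def on_window_acked(self, acked, marked):
        frac = marked / max(acked, 1)
        self.alpha = (1 - self.g) * self.alpha + self.g * frac
        if marked > 0:
            # Back off in proportion to how much of the traffic was marked.
            self.cwnd = max(1.0, self.cwnd * (1 - self.alpha / 2))
        else:
            self.cwnd += 1.0  # standard additive increase
```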
{ "cite_N": [ "@cite_28", "@cite_19" ], "mid": [ "273025837", "2546660520", "2770706713", "2164740236" ], "abstract": [ "Abstract Safety and efficiency applications in vehicular networks rely on the exchange of periodic messages between vehicles. These messages contain position, speed, heading, and other vital information that makes the vehicles aware of their surroundings. The drawback of exchanging periodic cooperative messages is that they generate significant channel load. Decentralized Congestion Control (DCC) algorithms have been proposed to minimize the channel load. However, while the rationale for periodic message exchange is to improve awareness , existing DCC algorithms do not use awareness as a metric for deciding when, at what power, and at what rate the periodic messages need to be sent in order to make sure all vehicles are informed. We propose an environment- and context-aware DCC algorithm combines power and rate control in order to improve cooperative awareness by adapting to both specific propagation environments ( e.g. , urban intersections, open highways, suburban roads) as well as application requirements ( e.g. , different target cooperative awareness range). Studying various operational conditions ( e.g. , speed, direction, and application requirement), ECPR adjusts the transmit power of the messages in order to reach the desired awareness ratio at the target distance while at the same time controlling the channel load using an adaptive rate control algorithm. By performing extensive simulations, including realistic propagation as well as environment modeling and realistic vehicle operational environments (varying demand on both awareness range and rate), we show that ECPR can increase awareness by 20 while keeping the channel load and interference at almost the same level. When permitted by the awareness requirements, ECPR can improve the average message rate by 18 compared to algorithms that perform rate adaptation only.", "Multi-tenant datacenters predominantly use equal-cost multipath (ECMP) routing to distribute traffic over multiple network paths. However, ECMP static hashing causes unequal load-balancing and collisions, leading to low throughput and high latencies. Recently proposed alternatives for load-balancing perform better, but are impractical as they require either changing the tenant VM network stacks (e.g., MPTCP) or replacing all the network switches (e.g., CONGA). In this paper, we argue that the end-host hypervisor provides a sweet spot for implementing a spectrum of load-balancing algorithms that are fine-grained, congestion-aware, and reactive to network dynamics at round-trip timescales. We propose CLOVE, a scalable hypervisor-based load-balancer that requires no changes to guest VMs or to physical network switches. CLOVE uses standard ECMP in the physical network, learns about equal-cost network paths using a traceroute mechanism, and learns about congestion state along these paths using standard switch features such as ECN. It then manipulates packet header fields in the hypervisor virtual switch to route traffic over less congested paths. We introduce different variants of CLOVE that differ in the way they learn about congestion in the physical network. 
Using extensive simulations, we show that CLOVE captures some 80 of the performance gain of best-of-breed hardware-based load-balancing algorithms without the need for expensive hardware replacement.", "Most datacenters still use Equal Cost Multi-Path (ECMP), which performs congestion-oblivious hashing of flows over multiple paths, leading to an uneven distribution of traffic. Alternatives to ECMP come with deployment challenges, as they require either changing the tenant VM network stacks (e.g., MPTCP) or replacing all of the switches (e.g., CONGA). We argue that the hypervisor provides a unique point for implementing load-balancing algorithms that are easy to deploy, while still reacting quickly to congestion. We propose Clove, a scalable load-balancer that (i) runs entirely in the hypervisor, requiring no modifications to tenant VM networking stacks or physical switches, and (ii) works on any topology and adapts quickly to topology changes and traffic shifts. Clove relies on standard ECMP in physical switches, discovers paths using a novel traceroute mechanism, uses software-based flowlet-switching, and continuously learns congestion (or path utilization) state using standard switch features. It then manipulates packet-header fields in the hypervisor switch to direct traffic over less congested paths. Clove achieves 1.5 to 7 times smaller flow-completion times at 70 network load than other load-balancing algorithms that work with existing hardware. Clove also captures some 80 of the performance gain of best-of-breed hardware-based load-balancing algorithms like CONGA that require new equipment.", "Cloud data centers host diverse applications, mixing workloads that require small predictable latency with others requiring large sustained throughput. In this environment, today's state-of-the-art TCP protocol falls short. We present measurements of a 6000 server production cluster and reveal impairments that lead to high application latencies, rooted in TCP's demands on the limited buffer space available in data center switches. For example, bandwidth hungry \"background\" flows build up queues at the switches, and thus impact the performance of latency sensitive \"foreground\" traffic. To address these problems, we propose DCTCP, a TCP-like protocol for data center networks. DCTCP leverages Explicit Congestion Notification (ECN) in the network to provide multi-bit feedback to the end hosts. We evaluate DCTCP at 1 and 10Gbps speeds using commodity, shallow buffered switches. We find DCTCP delivers the same or better throughput than TCP, while using 90 less buffer space. Unlike TCP, DCTCP also provides high burst tolerance and low latency for short flows. In handling workloads derived from operational measurements, we found DCTCP enables the applications to handle 10X the current background traffic, without impacting foreground traffic. Further, a 10X increase in foreground traffic does not cause any timeouts, thus largely eliminating incast problems." ] }
1908.00672
2965514655
We show that existing upsampling operators can be unified with the notion of the index function. This notion is inspired by an observation in the decoding process of deep image matting, where indices-guided unpooling can recover boundary details much better than other upsampling operators such as bilinear interpolation. By viewing the indices as a function of the feature map, we introduce the concept of learning to index, and present a novel index-guided encoder-decoder framework where indices are self-learned adaptively from data and are used to guide the pooling and upsampling operators, without the need for supervision. At the core of this framework is a flexible network module, termed IndexNet, which dynamically predicts indices given an input. Due to its flexibility, IndexNet can be used as a plug-in for any off-the-shelf convolutional network that has coupled downsampling and upsampling stages. We demonstrate the effectiveness of IndexNet on the task of natural image matting, where the quality of the learned indices can be observed visually from the predicted alpha mattes. Results on the Composition-1k matting dataset show that our model built on MobileNetv2 exhibits at least @math improvement over the seminal VGG-16 based deep matting baseline, with less training data and lower model capacity. Code and models have been made available at: this https URL
Upsampling is an essential stage for almost all dense prediction tasks, and it has been studied intensively in the context of decoding. The deconvolution operator, also known as transposed convolution, was initially used in @cite_15 to visualize convolutional activations and later introduced to semantic segmentation @cite_40 . To avoid checkerboard artifacts, a follow-up suggestion is the "resize+convolution" paradigm, which has now become the standard configuration in state-of-the-art semantic segmentation models @cite_9 @cite_28 . Aside from these, the operators of @cite_29 and @cite_24 also generate sparse indices to guide upsampling. These indices are able to capture and keep boundary information, but both operators induce sparsity after upsampling, so convolutional layers with large filter sizes must follow for densification. In addition, @cite_5 introduced a fast and memory-efficient upsampling operator ( @math ) for image super-resolution; @math recovers resolution by rearranging a feature map of size @math into one of size @math .
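The shape behaviour of the operators surveyed above can be illustrated with a few lines of PyTorch; the layer sizes below are arbitrary and only serve to show how each operator recovers resolution.

```python
# Transposed convolution, resize+convolution, and periodic shuffling (pixel shuffle).
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 8, 16, 16)                     # N x C x H x W

# Transposed convolution (deconvolution): learned 2x upsampling.
deconv = nn.ConvTranspose2d(8, 8, kernel_size=4, stride=2, padding=1)
y1 = deconv(x)                                    # 1 x 8 x 32 x 32

# "Resize + convolution": bilinear interpolation followed by a convolution,
# the common remedy for checkerboard artifacts.
conv = nn.Conv2d(8, 8, kernel_size=3, padding=1)
y2 = conv(F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False))

# Periodic shuffling (sub-pixel convolution): rearranges an r^2-times wider
# channel dimension into a 2x larger spatial resolution, with no interpolation.
r = 2
xr = torch.randn(1, 8 * r * r, 16, 16)
y3 = F.pixel_shuffle(xr, upscale_factor=r)        # 1 x 8 x 32 x 32

print(y1.shape, y2.shape, y3.shape)
```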
{ "cite_N": [ "@cite_28", "@cite_9", "@cite_29", "@cite_24", "@cite_40", "@cite_5", "@cite_15" ], "mid": [ "2412782625", "2952865063", "2950510876", "2592939477" ], "abstract": [ "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed \"DeepLab\" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. 
All of our code is made publicly available online.", "Recent advances in deep learning, especially deep convolutional neural networks (CNNs), have led to significant improvement over previous semantic segmentation systems. Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are of both theoretical and practical value. First, we design dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the \"gridding issue\" caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a state-of-art result of 80.1 mIOU in the test set at the time of submission. We also have achieved state-of-the-art overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Our source code can be found at this https URL .", "Recent advances in deep learning, especially deep convolutional neural networks (CNNs), have led to significant improvement over previous semantic segmentation systems. Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are of both theoretical and practical value. First, we design dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the \"gridding issue\"caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a state-of-art result of 80.1 mIOU in the test set at the time of submission. We also have achieved state-of-theart overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Our source code can be found at https: github.com TuSimple TuSimple-DUC." ] }
1908.00672
2965514655
We show that existing upsampling operators can be unified with the notion of the index function. This notion is inspired by an observation in the decoding process of deep image matting, where indices-guided unpooling can recover boundary details much better than other upsampling operators such as bilinear interpolation. By viewing the indices as a function of the feature map, we introduce the concept of learning to index, and present a novel index-guided encoder-decoder framework where indices are self-learned adaptively from data and are used to guide the pooling and upsampling operators, without the need for supervision. At the core of this framework is a flexible network module, termed IndexNet, which dynamically predicts indices given an input. Due to its flexibility, IndexNet can be used as a plug-in for any off-the-shelf convolutional network that has coupled downsampling and upsampling stages. We demonstrate the effectiveness of IndexNet on the task of natural image matting, where the quality of the learned indices can be observed visually from the predicted alpha mattes. Results on the Composition-1k matting dataset show that our model built on MobileNetv2 exhibits at least @math improvement over the seminal VGG-16 based deep matting baseline, with less training data and lower model capacity. Code and models have been made available at: this https URL
Our work is primarily inspired by the unpooling operator @cite_24 . We remark that it is important to keep the spatial information before it is lost through feature map downsampling and, more importantly, to use the stored information during upsampling. Unpooling is a simple and effective way of doing this, but we argue that there is much room for improvement. In this paper, we illustrate that the unpooling operator is a special form of index function, and that we can learn an index function that goes beyond unpooling.
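As a minimal reference point, the PyTorch snippet below shows the max-pooling/unpooling pair in which argmax indices are stored during downsampling and reused during upsampling; this fixed, hard-coded index function is the special case that the learned index functions discussed above generalize.

```python
# Indices-guided unpooling: pooling stores argmax locations, unpooling reuses them.
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.randn(1, 4, 8, 8)
pooled, indices = pool(x)            # 1 x 4 x 4 x 4, plus stored indices
recovered = unpool(pooled, indices)  # 1 x 4 x 8 x 8, zero except at the argmaxes
print(pooled.shape, recovered.shape)
```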
{ "cite_N": [ "@cite_24" ], "mid": [ "2559156603", "2591997370", "2962832028", "2964012402" ], "abstract": [ "Feature pooling layers (e.g., max pooling) in convolutional neural networks (CNNs) serve the dual purpose of providing increasingly abstract representations as well as yielding computational savings in subsequent convolutional layers. We view the pooling operation in CNNs as a two step procedure: first, a pooling window (e.g., 2&#xd7; 2) slides over the feature map with stride one which leaves the spatial resolution intact, and second, downsampling is performed by selecting one pixel from each non-overlapping pooling window in an often uniform and deterministic (e.g., top-left) manner. Our starting point in this work is the observation that this regularly spaced downsampling arising from non-overlapping windows, although intuitive from a signal processing perspective (which has the goal of signal reconstruction), is not necessarily optimal for learning (where the goal is to generalize). We study this aspect and propose a novel pooling strategy with stochastic spatial sampling (S3Pool), where the regular downsampling is replaced by a more general stochastic version. We observe that this general stochasticity acts as a strong regularizer, and can also be seen as doing implicit data augmentation by introducing distortions in the feature maps. We further introduce a mechanism to control the amount of distortion to suit different datasets and architectures. To demonstrate the effectiveness of the proposed approach, we perform extensive experiments on several popular image classification benchmarks, observing excellent improvements over baseline models.", "To reduce the cost of storing, processing, and visualizing a large-scale point cloud, we propose a randomized resampling strategy that selects a representative subset of points while preserving application-dependent features. The strategy is based on graphs, which can represent underlying surfaces and lend themselves well to efficient computation. We use a general feature-extraction operator to represent application-dependent features and propose a general reconstruction error to evaluate the quality of resampling; by minimizing the error, we obtain a general form of optimal resampling distribution. The proposed resampling distribution is guaranteed to be shift-, rotation- and scale-invariant in the three-dimensional space. We then specify the feature-extraction operator to be a graph filter and study specific resampling strategies based on all-pass, low-pass, high-pass graph filtering and graph filter banks. We validate the proposed methods on three applications: Large-scale visualization, accurate registration, and robust shape modeling demonstrating the effectiveness and efficiency of the proposed resampling methods.", "For crowded scenes, the accuracy of object-based computer vision methods declines when the images are low-resolution and objects have severe occlusions. Taking counting methods for example, almost all the recent state-of-the-art counting methods bypass explicit detection and adopt regression-based methods to directly count the objects of interest. Among regression-based methods, density map estimation, where the number of objects inside a subregion is the integral of the density map over that subregion, is especially promising because it preserves spatial information, which makes it useful for both counting and localization (detection and tracking). 
With the power of deep convolutional neural networks (CNNs) the counting performance has improved steadily. The goal of this paper is to evaluate density maps generated by density estimation methods on a variety of crowd analysis tasks, including counting, detection, and tracking. Most existing CNN methods produce density maps with resolution that is smaller than the original images, due to the downsample strides in the convolution pooling operations. To produce an original-resolution density map, we also evaluate a classical CNN that uses a sliding window regressor to predict the density for every pixel in the image. We also consider a fully convolutional adaptation, with skip connections from lower convolutional layers to compensate for loss in spatial information during upsampling. In our experiments, we found that the lower-resolution density maps sometimes have better counting performance. In contrast, the original-resolution density maps improved localization tasks, such as detection and tracking, compared with bilinear upsampling the lower-resolution density maps. Finally, we also propose several metrics for measuring the quality of a density map, and relate them to experiment results on counting and localization.", "Most convolutional neural networks use some method for gradually downscaling the size of the hidden layers. This is commonly referred to as pooling, and is applied to reduce the number of parameters, improve invariance to certain distortions, and increase the receptive field size. Since pooling by nature is a lossy process, it is crucial that each such layer maintains the portion of the activations that is most important for the network's discriminability. Yet, simple maximization or averaging over blocks, max or average pooling, or plain downsampling in the form of strided convolutions are the standard. In this paper, we aim to leverage recent results on image downscaling for the purposes of deep learning. Inspired by the human visual system, which focuses on local spatial changes, we propose detail-preserving pooling (DPP), an adaptive pooling method that magnifies spatial changes and preserves important structural detail. Importantly, its parameters can be learned jointly with the rest of the network. We analyze some of its theoretical properties and show its empirical benefits on several datasets and networks, where DPP consistently outperforms previous pooling approaches." ] }
1908.00672
2965514655
We show that existing upsampling operators can be unified with the notion of the index function. This notion is inspired by an observation in the decoding process of deep image matting, where indices-guided unpooling can recover boundary details much better than other upsampling operators such as bilinear interpolation. By viewing the indices as a function of the feature map, we introduce the concept of learning to index, and present a novel index-guided encoder-decoder framework where indices are self-learned adaptively from data and are used to guide the pooling and upsampling operators, without the need for supervision. At the core of this framework is a flexible network module, termed IndexNet, which dynamically predicts indices given an input. Due to its flexibility, IndexNet can be used as a plug-in for any off-the-shelf convolutional network that has coupled downsampling and upsampling stages. We demonstrate the effectiveness of IndexNet on the task of natural image matting, where the quality of the learned indices can be observed visually from the predicted alpha mattes. Results on the Composition-1k matting dataset show that our model built on MobileNetv2 exhibits at least @math improvement over the seminal VGG-16 based deep matting baseline, with less training data and lower model capacity. Code and models have been made available at: this https URL
Over the past decades, image matting methods have been studied extensively from a low-level view @cite_13 @cite_26 @cite_27 @cite_25 @cite_46 @cite_49 @cite_38 @cite_19 @cite_39 ; in particular, they have been designed to solve the matting equation. Despite being theoretically elegant, these methods rely heavily on color cues, and therefore fail in general natural scenes where color cannot be used as a reliable cue.
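For reference, the matting equation these methods solve is the standard compositing equation, which at every pixel expresses the observed color as a convex combination of unknown foreground and background colors, leaving the problem severely ill-posed (three color equations but seven unknowns per pixel):

```latex
I_p = \alpha_p F_p + (1 - \alpha_p) B_p, \qquad \alpha_p \in [0, 1]
```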
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_39", "@cite_19", "@cite_27", "@cite_49", "@cite_46", "@cite_13", "@cite_25" ], "mid": [ "2213889652", "2520247582", "2035773017", "2604469346" ], "abstract": [ "We introduce a novel method of video matting via sparse and low-rank representation. Previous matting methods [10, 9] introduced a nonlocal prior to estimate the alpha matte and have achieved impressive results on some data. However, on one hand, searching inadequate or excessive samples may miss good samples or introduce noise, on the other hand, it is difficult to construct consistent nonlocal structures for pixels with similar features, yielding spatially and temporally inconsistent video mattes. In this paper, we proposed a novel video matting method to achieve spatially and temporally consistent matting result. Toward this end, a sparse and low-rank representation model is introduced to pursue consistent nonlocal structures for pixels with similar features. The sparse representation is used to adaptively select best samples and accurately construct the nonlocal structures for all pixels, while the low-rank representation is used to globally ensure consistent nonlocal structures for pixels with similar features. The two representations are combined to generate consistent video mattes. Experimental results show that our method has achieved high quality results in a variety of challenging examples featuring illumination changes, feature ambiguity, topology changes, transparency variation, dis-occlusion, fast motion and motion blur.", "We propose a deep Convolutional Neural Networks (CNN) method for natural image matting. Our method takes results of the closed form matting, results of the KNN matting and normalized RGB color images as inputs, and directly learns an end-to-end mapping between the inputs, and reconstructed alpha mattes. We analyze pros and cons of the closed form matting, and the KNN matting in terms of local and nonlocal principle, and show that they are complementary to each other. A major benefit of our method is that it can “recognize” different local image structures, and then combine results of local (closed form matting), and nonlocal (KNN matting) matting effectively to achieve higher quality alpha mattes than both of its inputs. Extensive experiments demonstrate that our proposed deep CNN matting produces visually and quantitatively high-quality alpha mattes. In addition, our method has achieved the highest ranking in the public alpha matting evaluation dataset in terms of the sum of absolute differences, mean squared errors, and gradient errors.", "Interactive digital matting, the process of extracting a foreground object from an image based on limited user input, is an important task in image and video editing. From a computer vision perspective, this task is extremely challenging because it is massively ill-posed - at each pixel we must estimate the foreground and the background colors, as well as the foreground opacity (\"alpha matte\") from a single color measurement. Current approaches either restrict the estimation to a small part of the image, estimating foreground and background colors based on nearby pixels where they are known, or perform iterative nonlinear estimation by alternating foreground and background color estimation with alpha estimation. In this paper, we present a closed-form solution to natural image matting. 
We derive a cost function from local smoothness assumptions on foreground and background colors and show that in the resulting expression, it is possible to analytically eliminate the foreground and background colors to obtain a quadratic cost function in alpha. This allows us to find the globally optimal alpha matte by solving a sparse linear system of equations. Furthermore, the closed-form formula allows us to predict the properties of the solution by analyzing the eigenvectors of a sparse matrix, closely related to matrices used in spectral image segmentation algorithms. We show that high-quality mattes for natural images may be obtained from a small amount of user input.", "Image matting is a fundamental computer vision problem and has many applications. Previous algorithms have poor performance when an image has similar foreground and background colors or complicated textures. The main reasons are prior methods 1) only use low-level features and 2) lack high-level context. In this paper, we propose a novel deep learning based algorithm that can tackle both these problems. Our deep model has two parts. The first part is a deep convolutional encoder-decoder network that takes an image and the corresponding trimap as inputs and predict the alpha matte of the image. The second part is a small convolutional network that refines the alpha matte predictions of the first network to have more accurate alpha values and sharper edges. In addition, we also create a large-scale image matting dataset including 49300 training images and 1000 testing images. We evaluate our algorithm on the image matting benchmark, our testing set, and a wide variety of real images. Experimental results clearly demonstrate the superiority of our algorithm over previous methods." ] }
1908.00672
2965514655
We show that existing upsampling operators can be unified with the notion of the index function. This notion is inspired by an observation in the decoding process of deep image matting, where indices-guided unpooling can recover boundary details much better than other upsampling operators such as bilinear interpolation. By viewing the indices as a function of the feature map, we introduce the concept of learning to index, and present a novel index-guided encoder-decoder framework where indices are self-learned adaptively from data and are used to guide the pooling and upsampling operators, without the need for supervision. At the core of this framework is a flexible network module, termed IndexNet, which dynamically predicts indices given an input. Due to its flexibility, IndexNet can be used as a plug-in for any off-the-shelf convolutional network that has coupled downsampling and upsampling stages. We demonstrate the effectiveness of IndexNet on the task of natural image matting, where the quality of the learned indices can be observed visually from the predicted alpha mattes. Results on the Composition-1k matting dataset show that our model built on MobileNetv2 exhibits at least @math improvement over the seminal VGG-16 based deep matting baseline, with less training data and lower model capacity. Code and models have been made available at: this https URL
With the tremendous success of deep CNNs in high-level vision tasks @cite_31 @cite_0 @cite_40 , deep matting methods are emerging. Initial attempts appeared in @cite_11 and @cite_22 , where classic matting approaches, such as closed-form matting @cite_19 and KNN matting @cite_26 , are still used as backends in deep networks. Although these networks are trained end-to-end and can extract powerful features, their final performance is limited by the conventional backends; such attempts may be thought of as semi-deep matting. Recently, fully-deep image matting was proposed in @cite_2 , where the authors presented the first deep image matting approach based on SegNet @cite_24 and significantly outperformed other competitors. Interestingly, this SegNet-based architecture has become the standard configuration in many recent deep matting methods @cite_48 @cite_1 @cite_4 .
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_22", "@cite_48", "@cite_1", "@cite_0", "@cite_19", "@cite_40", "@cite_24", "@cite_2", "@cite_31", "@cite_11" ], "mid": [ "2520247582", "300523764", "2563705555", "2951402970" ], "abstract": [ "We propose a deep Convolutional Neural Networks (CNN) method for natural image matting. Our method takes results of the closed form matting, results of the KNN matting and normalized RGB color images as inputs, and directly learns an end-to-end mapping between the inputs, and reconstructed alpha mattes. We analyze pros and cons of the closed form matting, and the KNN matting in terms of local and nonlocal principle, and show that they are complementary to each other. A major benefit of our method is that it can “recognize” different local image structures, and then combine results of local (closed form matting), and nonlocal (KNN matting) matting effectively to achieve higher quality alpha mattes than both of its inputs. Extensive experiments demonstrate that our proposed deep CNN matting produces visually and quantitatively high-quality alpha mattes. In addition, our method has achieved the highest ranking in the public alpha matting evaluation dataset in terms of the sum of absolute differences, mean squared errors, and gradient errors.", "We explore multi-scale convolutional neural nets (CNNs) for image classification. Contemporary approaches extract features from a single output layer. By extracting features from multiple layers, one can simultaneously reason about high, mid, and low-level features during classification. The resulting multi-scale architecture can itself be seen as a feed-forward model that is structured as a directed acyclic graph (DAG-CNNs). We use DAG-CNNs to learn a set of multi-scale features that can be effectively shared between coarse and fine-grained classification tasks. While fine-tuning such models helps performance, we show that even \"off-the-self\" multi-scale features perform quite well. We present extensive analysis and demonstrate state-of-the-art classification performance on three standard scene benchmarks (SUN397, MIT67, and Scene15). In terms of the heavily benchmarked MIT67 and Scene15 datasets, our results reduce the lowest previously-reported error by 23.9 and 9.5 , respectively.", "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. 
In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.", "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date." ] }
1908.00672
2965514655
We show that existing upsampling operators can be unified with the notion of the index function. This notion is inspired by an observation in the decoding process of deep image matting where indices-guided unpooling can recover boundary details much better than other upsampling operators such as bilinear interpolation. By looking at the indices as a function of the feature map, we introduce the concept of learning to index, and present a novel index-guided encoder-decoder framework where indices are self-learned adaptively from data and are used to guide the pooling and upsampling operators, without the need for supervision. At the core of this framework is a flexible network module, termed IndexNet, which dynamically predicts indices given an input. Due to its flexibility, IndexNet can be used as a plug-in applied to any off-the-shelf convolutional network that has coupled downsampling and upsampling stages. We demonstrate the effectiveness of IndexNet on the task of natural image matting, where the quality of learned indices can be visually observed from predicted alpha mattes. Results on the Composition-1k matting dataset show that our model built on MobileNetv2 exhibits at least @math improvement over the seminal VGG-16 based deep matting baseline, with less training data and lower model capacity. Code and models have been made available at: this https URL
SegNet is effective for matting but is also computationally expensive and memory-inefficient. For instance, inference on high-resolution images can only be executed on the CPU, which is practically unattractive. We show that, with our proposed IndexNet, even a model built on a lightweight backbone such as MobileNetv2 can surpass the VGG-16 based method of @cite_2 .
{ "cite_N": [ "@cite_2" ], "mid": [ "1910657905", "2963881378", "360623563", "2213889652" ], "abstract": [ "We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN and also with the well known DeepLab-LargeFOV, DeconvNet architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. We show that SegNet provides good performance with competitive inference time and more efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at this http URL", "We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. 
It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http: mi.eng.cam.ac.uk projects segnet .", "We propose a novel deep architecture, SegNet, for semantic pixel wise image labelling. SegNet has several attractive properties; (i) it only requires forward evaluation of a fully learnt function to obtain smooth label predictions, (ii) with increasing depth, a larger context is considered for pixel labelling which improves accuracy, and (iii) it is easy to visualise the effect of feature activation(s) in the pixel label space at any depth. SegNet is composed of a stack of encoders followed by a corresponding decoder stack which feeds into a soft-max classification layer. The decoders help map low resolution feature maps at the output of the encoder stack to full input image size feature maps. This addresses an important drawback of recent deep learning approaches which have adopted networks designed for object categorization for pixel wise labelling. These methods lack a mechanism to map deep layer feature maps to input dimensions. They resort to ad hoc methods to upsample features, e.g. by replication. This results in noisy predictions and also restricts the number of pooling layers in order to avoid too much upsampling and thus reduces spatial context. SegNet overcomes these problems by learning to map encoder outputs to image pixel labels. We test the performance of SegNet on outdoor RGB scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results show that SegNet achieves state-of-the-art performance even without use of additional cues such as depth, video frames or post-processing with CRF models.", "We introduce a novel method of video matting via sparse and low-rank representation. Previous matting methods [10, 9] introduced a nonlocal prior to estimate the alpha matte and have achieved impressive results on some data. However, on one hand, searching inadequate or excessive samples may miss good samples or introduce noise, on the other hand, it is difficult to construct consistent nonlocal structures for pixels with similar features, yielding spatially and temporally inconsistent video mattes. In this paper, we proposed a novel video matting method to achieve spatially and temporally consistent matting result. Toward this end, a sparse and low-rank representation model is introduced to pursue consistent nonlocal structures for pixels with similar features. The sparse representation is used to adaptively select best samples and accurately construct the nonlocal structures for all pixels, while the low-rank representation is used to globally ensure consistent nonlocal structures for pixels with similar features. The two representations are combined to generate consistent video mattes. Experimental results show that our method has achieved high quality results in a variety of challenging examples featuring illumination changes, feature ambiguity, topology changes, transparency variation, dis-occlusion, fast motion and motion blur." ] }
1908.00677
2964947356
We study the Riemannian quantitative isoperimetric inequality. We show that the direct analogue of the Euclidean quantitative isoperimetric inequality is, in general, false on a closed Riemannian manifold. In spite of this, we show that the inequality holds generically. Moreover, we show that a modified (but sharp) version of the quantitative isoperimetric inequality holds for a real analytic metric, using the Łojasiewicz-Simon inequality. A main novelty of our work is that none of our results require any a priori knowledge of the structure or shape of the minimizers.
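For context, the Euclidean quantitative isoperimetric inequality alluded to above can be stated, in one common scale-invariant normalization, as follows (B is the unit ball, \omega_n its volume, and c(n) a dimensional constant):

\[
\alpha(E) = \min_{x \in \mathbb{R}^n} \frac{\bigl| E \,\triangle\, (x + rB) \bigr|}{|E|} \ \ \text{with } |rB| = |E|,
\qquad
\delta(E) = \frac{P(E)}{n\, \omega_n^{1/n} |E|^{(n-1)/n}} - 1,
\]
\[
\delta(E) \;\ge\; c(n)\, \alpha(E)^2 .
\]

It is this sharp quadratic relation between the isoperimetric deficit \delta(E) and the Fraenkel asymmetry \alpha(E) whose direct Riemannian analogue fails in general.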
As mentioned above, there has been a lot of recent work on quantitative stability not just for the isoperimetric inequality but also for many other geometric (e.g. Brunn-Minkowski @cite_3 ), spectral (e.g. Faber-Krahn @cite_30 ), and functional (e.g. Sobolev @cite_33 ) inequalities. We defer to the recent survey of Fusco @cite_13 for a more comprehensive list. When the underlying space and the extremizers are highly symmetric, these results are often proven by symmetrization or rearrangement (see, e.g. @cite_23 ). In this vein, we also point out the works @cite_35 @cite_0 , which do not use symmetrization techniques but do exploit the richness of the symmetry group of the underlying space.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_33", "@cite_3", "@cite_0", "@cite_23", "@cite_13" ], "mid": [ "2083945855", "2609098951", "1935847907", "1988342708" ], "abstract": [ "Abstract We investigate the recovery of almost s -sparse vectors x ∈ C N from undersampled and inaccurate data y = A x + e ∈ C m by means of minimizing ‖ z ‖ 1 subject to the equality constraints A z = y . If m ≍ s ln ( N s ) and if Gaussian random matrices A ∈ R m × N are used, this equality-constrained l 1 -minimization is known to be stable with respect to sparsity defects and robust with respect to measurement errors. If m ≍ s ln ( N s ) and if Weibull random matrices are used, we prove here that the equality-constrained l 1 -minimization remains stable and robust. The arguments are based on two key ingredients, namely the robust null space property and the quotient property. The robust null space property relies on a variant of the classical restricted isometry property where the inner norm is replaced by the l 1 -norm and the outer norm is replaced by a norm comparable to the l 2 -norm. For the l 1 -minimization subject to inequality constraints, this yields stability and robustness results that are also valid when considering sparsity relative to a redundant dictionary. As for the quotient property, it relies on lower estimates for the tail probability of sums of independent Weibull random variables.", "The full' edge isoperimetric inequality for the discrete cube (due to Harper, Bernstein, Lindsay and Hart) specifies the minimum size of the edge boundary @math of a set @math , as a function of @math . A weaker (but more widely-used) lower bound is @math , where equality holds iff @math is a subcube. In 2011, the first author obtained a sharp stability' version of the latter result, proving that if @math , then there exists a subcube @math such that @math . The weak' version of the edge isoperimetric inequality has the following well-known generalization for the @math -biased' measure @math on the discrete cube: if @math , or if @math and @math is monotone increasing, then @math . In this paper, we prove a sharp stability version of the latter result, which generalizes the aforementioned result of the first author. Namely, we prove that if @math , then there exists a subcube @math such that @math , where @math . This result is a central component in recent work of the authors proving sharp stability versions of a number of Erd o s-Ko-Rado type theorems in extremal combinatorics, including the seminal complete intersection theorem' of Ahlswede and Khachatrian. In addition, we prove a biased-measure analogue of the full' edge isoperimetric inequality, for monotone increasing sets, and we observe that such an analogue does not hold for arbitrary sets, hence answering a question of Kalai. We use this result to give a new proof of the full' edge isoperimetric inequality, one relying on the Kruskal-Katona theorem.", "We present some recent stability results concerning the isoperimetric inequality and other related geometric and functional inequalities. The main techniques and approaches to this field are discussed.", "By introducing a quantitative degree of commutativity'' in terms of the angle between spin observables we present two tight quantitative trade-off relations in the case of two qubits. 
First, for entangled states, between the degree of commutativity of local observables and the maximal amount of violation of the Bell inequality: if both local angles increase from zero to @math (i.e., the degree of local commutativity decreases), the maximum violation of the Bell inequality increases. Secondly, a converse trade-off relation holds for separable states: if both local angles approach @math the maximal value obtainable for the correlations in the Bell inequality decreases and thus the non-violation increases. As expected, the extremes of these relations are found in the case of anticommuting local observables where, respectively, the bounds of @math and @math hold for the expectation value of the Bell operator. The trade-off relations show that noncommmutativity gives a more than classical result'' for entangled states, whereas a less than classical result'' is obtained for separable states. The experimental relevance of the trade-off relation for separable states is that it provides an experimental test for two qubit entanglement. Its advantages are twofold: in comparison to violations of Bell inequalities it is a stronger criterion and in comparison to entanglement witnesses it needs to make less strong assumptions about the observables implemented in the experiment." ] }
1908.00677
2964947356
We study the Riemannian quantitative isoperimetric inequality. We show that the direct analogue of the Euclidean quantitative isoperimetric inequality is, in general, false on a closed Riemannian manifold. In spite of this, we show that the inequality holds generically. Moreover, we show that a modified (but sharp) version of the quantitative isoperimetric inequality holds for a real analytic metric, using the Łojasiewicz-Simon inequality. A main novelty of our work is that none of our results require any a priori knowledge of the structure or shape of the minimizers.
In the anisotropic setting, optimal transport techniques have been used with great success (see, e.g. @cite_41 ). However, convexity of the extremizers is usually required (e.g. to guarantee the necessary regularity of the transport map). Other techniques, such as the selection principle, often require understanding the spectrum of the relevant energy linearized around the extremizers (to obtain estimates like Fuglede's, @cite_4 ). In the generality considered here, very little can be said about the structure of the extremizers (i.e. isoperimetric regions) or the symmetry of the underlying space. This lack of knowledge is our primary technical obstacle.
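As a reference point for the "estimates like Fuglede's" mentioned above: for a nearly spherical set, i.e. \partial E = \{ (1 + u(x))\, x : x \in \mathbb{S}^{n-1} \} with \| u \|_{W^{1,\infty}} small, and under the usual volume and barycenter normalizations (suppressed here), the isoperimetric deficit controls the perturbation,

\[
P(E) - P(B) \;\ge\; c(n)\, \| u \|_{H^1(\mathbb{S}^{n-1})}^2 .
\]

Estimates of this type presuppose that the competitor is a small graphical perturbation of the extremizer, which is precisely the kind of structural information that is unavailable in the generality considered here.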
{ "cite_N": [ "@cite_41", "@cite_4" ], "mid": [ "1602724287", "2765207424", "1695597346", "2009172320" ], "abstract": [ "In this paper we address the problem of dense depth map estimation from sparse noisy range data to reconstruct large heterogeneous outdoor scenes. We propose a surface inpainting solution through energy minimisation with an adaptive selection of surface regularisers among a set of well known convex and non-convex regularisers. In fact, the selection of norm is pivotal with respect to the intrinsic surface characteristics. Our goal is to show how dense interpolation of sparse range data can be leveraged of more exotic and non-convex regularisers such as the log and logTGV [1] which can better capture the scene geometry. In contrast to state of the art solutions, we do not restrict ourselves to this set of norms, instead we search for the most apt norm for each semantically segmented part of the scene. Our energy model selection use Bayesian optimisation to learn the best choice of free parameters. This results in an adaptive model selection and the generalisation of well studied regularisation norms. We conclude with a detailed experimental analysis of our approach using a basis of four norms over a set of challenging outdoor scenes.", "Entropic regularization is quickly emerging as a new standard in optimal transport (OT). It enables to cast the OT computation as a differentiable and unconstrained convex optimization problem, which can be efficiently solved using the Sinkhorn algorithm. However, entropy keeps the transportation plan strictly positive and therefore completely dense, unlike unregularized OT. This lack of sparsity can be problematic in applications where the transportation plan itself is of interest. In this paper, we explore regularizing the primal and dual OT formulations with a strongly convex term, which corresponds to relaxing the dual and primal constraints with smooth approximations. We show how to incorporate squared @math -norm and group lasso regularizations within that framework, leading to sparse and group-sparse transportation plans. On the theoretical side, we bound the approximation error introduced by regularizing the primal and dual formulations. Our results suggest that, for the regularized primal, the approximation error can often be smaller with squared @math -norm than with entropic regularization. We showcase our proposed framework on the task of color transfer.", "The formation of secure transportation corridors, where cargoes and shipments from points of entry can be dispatched safely to highly sensitive and secure locations, is a high national priority. One of the key tasks of the program is the detection of anomalous cargo based on sensor readings in truck weigh stations. Due to the high variability, dimensionality, and or noise content of sensor data in transportation corridors, appropriate feature representation is crucial to the success of anomaly detection methods in this domain. In this paper, we empirically investigate the usefulness of manifold embedding methods for feature representation in anomaly detection problems in the domain of transportation corridors. We focus on both linear methods, such as multi-dimensional scaling (MDS), as well as nonlinear methods, such as locally linear embedding (LLE) and isometric feature mapping (ISOMAP). Our study indicates that such embedding methods provide a natural mechanism for keeping anomalous points away from the dense normal regions in the embedding of the data. 
We illustrate the efficacy of manifold embedding methods for anomaly detection through experiments on simulated data as well as real truck data from weigh stations.", "This paper introduces a new class of algorithms for optimization problems involving optimal transportation over geometric domains. Our main contribution is to show that optimal transportation can be made tractable over large domains used in graphics, such as images and triangle meshes, improving performance by orders of magnitude compared to previous work. To this end, we approximate optimal transportation distances using entropic regularization. The resulting objective contains a geodesic distance-based kernel that can be approximated with the heat kernel. This approach leads to simple iterative numerical schemes with linear convergence, in which each iteration only requires Gaussian convolution or the solution of a sparse, pre-factored linear system. We demonstrate the versatility and efficiency of our method on tasks including reflectance interpolation, color transfer, and geometry processing." ] }
1908.00310
2966086679
This paper considers the problem of estimating a power-law degree distribution of an undirected network. Although power-law degree distributions are ubiquitous in nature, the widely used parametric methods for estimating them (e.g. linear regression on double-logarithmic axes, maximum likelihood estimation with uniformly sampled nodes) suffer from the large variance introduced by the lack of data-points from the tail portion of the power-law degree distribution. As a solution, we present a novel maximum likelihood estimation approach that exploits the friendship paradox to sample more efficiently from the tail of the degree distribution. We analytically show that the proposed method results in a smaller bias, variance and Cramér-Rao lower bound than the maximum-likelihood estimate obtained with uniformly sampled nodes (the commonly used method in the literature). Detailed simulation results are presented to illustrate the performance of the proposed method under different conditions and how it compares with alternative methods.
It has been shown in the literature that maximum likelihood methods are more suitable for estimating power-law degree distributions of the form in Eq. than alternative methods @cite_34 @cite_36 @cite_53 . Further, exploiting the friendship paradox (Theorem ) has been shown to be effective for the empirical estimation of heavy-tailed degree distributions, as it includes more high-degree nodes in the sample @cite_50 . Motivated by these findings, this paper combines maximum likelihood estimation with friendship-paradox-based sampling in a principled manner to obtain an asymptotically unbiased, strongly consistent and statistically efficient estimate that outperforms the state of the art.
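To make this combination concrete, the following is a minimal sketch of the general idea rather than the paper's exact estimator; the helper names and the continuous-power-law approximation for discrete degrees are assumptions made here for illustration. A node reached by following a uniformly random edge is sampled with probability proportional to its degree, so if the degree distribution has exponent alpha, the sampled degrees have exponent roughly alpha - 1; applying a Hill-type MLE to such "friend" degrees and adding one back recovers alpha, with far more samples landing in the tail than under uniform node sampling.

import random
import numpy as np
import networkx as nx

def mle_exponent(samples, k_min):
    """Hill-type MLE for a continuous power law p(k) ~ k^(-alpha), k >= k_min."""
    s = np.asarray([k for k in samples if k >= k_min], dtype=float)
    return 1.0 + len(s) / np.sum(np.log(s / k_min))

def friendship_paradox_estimate(G, n_samples, k_min=2):
    """Sample a uniformly random end of a uniformly random edge (a 'random friend').
    Such a node is reached with probability proportional to its degree, so its
    degree distribution is ~ k * p(k), i.e. exponent alpha - 1; add 1 to recover alpha."""
    edges = list(G.edges())
    degrees = []
    for _ in range(n_samples):
        u, v = random.choice(edges)
        w = u if random.random() < 0.5 else v
        degrees.append(G.degree(w))
    return mle_exponent(degrees, k_min) + 1.0

# Sanity check on a synthetic scale-free graph (Barabasi-Albert, exponent close to 3).
G = nx.barabasi_albert_graph(50_000, 2, seed=0)
print(friendship_paradox_estimate(G, n_samples=2_000))

The paper derives the exact likelihood, bias, variance and Cramér-Rao bound for this kind of degree-biased sample; the sketch only conveys why the tail is reached more efficiently than with uniformly sampled nodes.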
{ "cite_N": [ "@cite_36", "@cite_53", "@cite_34", "@cite_50" ], "mid": [ "1493228775", "2097769384", "2027963417", "2952332158" ], "abstract": [ "Many complex networks in natural and social phenomena have often been characterized by heavy-tailed degree distributions. However, due to rapidly growing size of network data and concerns on privacy issues about using these data, it becomes more difficult to analyze complete data sets. Thus, it is crucial to devise effective and efficient estimation methods for heavy tails of degree distributions in large-scale networks only using local information of a small fraction of sampled nodes. Here we propose a tail-scope method based on local observational bias of the friendship paradox. We show that the tail-scope method outperforms the uniform node sampling for estimating heavy tails of degree distributions, while the opposite tendency is observed in the range of small degrees. In order to take advantages of both sampling methods, we devise the hybrid method that successfully recovers the whole range of degree distributions. Our tail-scope method shows how structural heterogeneities of large-scale complex networks can be used to effectively reveal the network structure only with limited local information.", "Motivated by a broad range of potential applications in topological and geometric inference, we introduce a weighted version of the k-nearest neighbor density estimate. Various pointwise consistency results of this estimate are established. We present a general central limit theorem under the lightest possible conditions. In addition, a strong approximation result is obtained and the choice of the optimal set of weights is discussed. In particular, the classical k-nearest neighbor estimate is not optimal in a sense described in the manuscript. The proposed method has been implemented to recover level sets in both simulated and real-life data.", "This short communication uses a simple experiment to show that fitting to a power law distribution by using graphical methods based on linear fit on the log-log scale is biased and inaccurate. It shows that using maximum likelihood estimation (MLE) is far more robust. Finally, it presents a new table for performing the Kolmogorov-Smirnov test for goodness-of-fit tailored to power-law distributions in which the power-law exponent is estimated using MLE. The techniques presented here will advance the application of complex network theory by allowing reliable estimation of power-law models from data and further allowing quantitative assessment of goodness-of-fit of proposed power-law models to empirical data.", "In this paper, we address the problem of quick detection of high-degree entities in large online social networks. Practical importance of this problem is attested by a large number of companies that continuously collect and update statistics about popular entities, usually using the degree of an entity as an approximation of its popularity. We suggest a simple, efficient, and easy to implement two-stage randomized algorithm that provides highly accurate solutions for this problem. For instance, our algorithm needs only one thousand API requests in order to find the top-100 most followed users in Twitter, a network with approximately a billion of registered users, with more than 90 precision. Our algorithm significantly outperforms existing methods and serves many different purposes, such as finding the most popular users or the most popular interest groups in social networks. 
An important contribution of this work is the analysis of the proposed algorithm using Extreme Value Theory -- a branch of probability that studies extreme events and properties of largest order statistics in random samples. Using this theory, we derive an accurate prediction for the algorithm's performance and show that the number of API requests for finding the top-k most popular entities is sublinear in the number of entities. Moreover, we formally show that the high variability among the entities, expressed through heavy-tailed distributions, is the reason for the algorithm's efficiency. We quantify this phenomenon in a rigorous mathematical way." ] }
1908.00524
2965027008
Cameras and 2D laser scanners, in combination, are able to provide low-cost, light-weight and accurate solutions, which make their fusion well-suited for many robot navigation tasks. However, correct data fusion depends on precise calibration of the rigid body transform between the sensors. In this paper we present the first framework that makes use of Convolutional Neural Networks (CNNs) for odometry estimation fusing 2D laser scanners and mono-cameras. The use of CNNs provides the tools to not only extract the features from the two sensors, but also to fuse and match them without needing a calibration between the sensors. We transform the odometry estimation into an ordinal classification problem in order to find accurate rotation and translation values between consecutive frames. Results on a real road dataset show that the fusion network runs in real-time and is able to improve the odometry estimation of a single sensor alone by learning how to fuse two different types of data.
The application of Deep Learning techniques has recently produced impressive results in tasks such as object detection and classification from camera images. These results were made possible by the large number of datasets that have become available in recent years. In the context of intelligent vehicles, interesting work has emerged in different fields: mapping @cite_15 , trajectory prediction @cite_8 , control @cite_18 and even end-to-end approaches @cite_9 .
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_18", "@cite_8" ], "mid": [ "2555820268", "1585377561", "2528963632", "2743627947" ], "abstract": [ "Deep Learning based techniques have been adopted with precision to solve a lot of standard computer vision problems, some of which are image classification, object detection and segmentation. Despite the widespread success of these approaches, they have not yet been exploited largely for solving the standard perception related problems encountered in autonomous navigation such as Visual Odometry (VO), Structure from Motion (SfM) and Simultaneous Localization and Mapping (SLAM). This paper analyzes the problem of Monocular Visual Odometry using a Deep Learning-based framework, instead of the regular 'feature detection and tracking' pipeline approaches. Several experiments were performed to understand the influence of a known unknown environment, a conventional trackable feature and pre-trained activations tuned for object classification on the network's ability to accurately estimate the motion trajectory of the camera (or the vehicle). Based on these observations, we propose a Convolutional Neural Network architecture, best suited for estimating the object's pose under known environment conditions, and displays promising results when it comes to inferring the actual scale using just a single camera in real-time.", "Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.", "Deep learning has rapidly transformed the state of the art algorithms used to address a variety of problems in computer vision and robotics. These breakthroughs have relied upon massive amounts of human annotated training data. This time consuming process has begun impeding the progress of these deep learning efforts. This paper describes a method to incorporate photo-realistic computer images from a simulation engine to rapidly generate annotated data that can be used for the training of machine learning algorithms. We demonstrate that a state of the art architecture, which is trained only using these synthetic annotations, performs better than the identical architecture trained on human annotated real-world data, when tested on the KITTI data set for vehicle detection. By training machine learning algorithms on a rich virtual world, real objects in real scenes can be learned and classified using synthetic data. This approach offers the possibility of accelerating deep learning's application to sensor-based classification problems like those that appear in self-driving cars. 
The source code and data to train and validate the networks described in this paper are made available for researchers.", "The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360 degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance as well as a large number of complex object arrangements. Through an extensive set of experiments, we conclude the right set of parameters to produce augmented data which can maximally enhance the performance of instance segmentation models. Further, we demonstrate the utility of the proposed approach on training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that the models trained on augmented imagery generalize better than those trained on fully synthetic data or models trained on limited amounts of annotated real data." ] }
1908.00524
2965027008
Cameras and 2D laser scanners, in combination, are able to provide low-cost, light-weight and accurate solutions, which make their fusion well-suited for many robot navigation tasks. However, correct data fusion depends on precise calibration of the rigid body transform between the sensors. In this paper we present the first framework that makes use of Convolutional Neural Networks (CNNs) for odometry estimation fusing 2D laser scanners and mono-cameras. The use of CNNs provides the tools to not only extract the features from the two sensors, but also to fuse and match them without needing a calibration between the sensors. We transform the odometry estimation into an ordinal classification problem in order to find accurate rotation and translation values between consecutive frames. Results on a real road dataset show that the fusion network runs in real-time and is able to improve the odometry estimation of a single sensor alone by learning how to fuse two different types of data.
At the same time, localization, which is still a very challenging problem for robotic systems, has not yet been well explored by deep learning methods. The most common approaches use camera images for odometry estimation. They are inspired by classic Visual Odometry (VO) methods @cite_2 @cite_17 , which estimate the camera's motion from geometric constraints found across a sequence of images. The use of machine learning for this purpose makes it possible to deal with challenging environments and with difficulties related to camera parameters. The first method proposed was PoseNet @cite_13 , which uses CNNs to estimate the 6-DoF pose from RGB images only. More recently, @cite_14 introduced DeepVO, which uses RCNNs for the same goal. The same authors also presented UndeepVO @cite_11 , an unsupervised deep learning method to estimate the pose of a monocular camera. However, classic VO methods still outperform the deep learning based methods published to date in terms of pose estimation accuracy.
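As an illustration of the common ingredient shared by these learned-VO methods (a CNN that maps consecutive frames to a relative pose), here is a deliberately tiny, hypothetical frame-to-frame regressor in PyTorch. It is not PoseNet, DeepVO or UndeepVO: it omits the recurrent part of DeepVO and the unsupervised losses of UndeepVO, and serves only to show the input/output structure.

import torch
import torch.nn as nn

class TinyVONet(nn.Module):
    """Schematic frame-to-frame pose regressor: two stacked RGB frames -> 6-DoF pose."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 6)  # (tx, ty, tz, roll, pitch, yaw)

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)  # stack the two frames channel-wise
        x = self.features(x).flatten(1)
        return self.head(x)

model = TinyVONet()
pose = model(torch.randn(1, 3, 128, 416), torch.randn(1, 3, 128, 416))
print(pose.shape)  # torch.Size([1, 6])

Supervised variants regress this 6-DoF output against ground-truth relative poses, while unsupervised ones such as UndeepVO replace the pose labels with photometric and geometric consistency losses.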
{ "cite_N": [ "@cite_14", "@cite_11", "@cite_2", "@cite_13", "@cite_17" ], "mid": [ "2909119029", "2771385090", "2593145818", "2749379418" ], "abstract": [ "This paper presents a deep network based unsupervised visual odometry system for 6-DoF camera pose estimation and finding dense depth map for its monocular view. The proposed network is trained using unlabeled binocular stereo image pairs and is shown to provide superior performance in depth and ego-motion estimation compared to the existing state-of-the-art. This is achieved by introducing a novel objective function and training the network using temporally alligned sequences of monocular images. The objective function is based on the Charbonnier penalty applied to spatial and bi-directional temporal reconstruction losses. The overall novelty of the approach lies in the fact that the proposed deep framework combines a disparity-based depth estimation network with a pose estimation network to obtain absolute scale-aware 6-DoF camera pose and superior depth map. According to our knowledge, such a framework with complete unsupervised end-to-end learning has not been tried so far, making it a novel contribution in the field. The effectiveness of the approach is demonstrated through performance comparison with the state-of-the-art methods on KITTI driving dataset.", "Precise localization of robots is imperative for their safe and autonomous navigation in both indoor and outdoor environments. In outdoor scenarios, the environment typically undergoes significant perceptual changes and requires robust methods for accurate localization. Monocular camera-based approaches provide an inexpensive solution to such challenging problems compared to 3D LiDAR-based methods. Recently, approaches have leveraged deep convolutional neural networks (CNNs) to perform place recognition and they turn out to outperform traditional handcrafted features under challenging perceptual conditions. In this paper, we propose an approach for directly regressing a 6-DoF camera pose using CNNs and a single monocular RGB image. We leverage the idea of transfer learning for training our network as this technique has shown to perform better when the number of training samples are not very high. Furthermore, we propose novel data augmentation in 3D space for additional pose coverage which leads to more accurate localization. In contrast to the traditional visual metric localization approaches, our resulting map size is constant with respect to the database. During localization, our approach has a constant time complexity of O(1) and is independent of the database size and runs in real-time at ∼80 Hz using a single GPU. We show the localization accuracy of our approach on publicly available datasets and that it outperforms CNN-based state-of-the-art methods.", "Machine learning techniques, namely convolutional neural networks (CNN) and regression forests, have recently shown great promise in performing 6-DoF localization of monocular images. However, in most cases imagesequences, rather only single images, are readily available. To this extent, none of the proposed learning-based approaches exploit the valuable constraint of temporal smoothness, often leading to situations where the per-frame error is larger than the camera motion. In this paper we propose a recurrent model for performing 6-DoF localization of video-clips. We find that, even by considering only short sequences (20 frames), the pose estimates are smoothed and the localization error can be drastically reduced. 
Finally, we consider means of obtaining probabilistic pose estimates from our model. We evaluate our method on openly-available real-world autonomous driving and indoor localization datasets.", "Machine learning techniques, namely convolutional neural networks (CNN) and regression forests, have recently shown great promise in performing 6-DoF localization of monocular images. However, in most cases image-sequences, rather only single images, are readily available. To this extent, none of the proposed learning-based approaches exploit the valuable constraint of temporal smoothness, often leading to situations where the per-frame error is larger than the camera motion. In this paper we propose a recurrent model for performing 6-DoF localization of video-clips. We find that, even by considering only short sequences (20 frames), the pose estimates are smoothed and the localization error can be drastically reduced. Finally, we consider means of obtaining probabilistic pose estimates from our model. We evaluate our method on openly-available real-world autonomous driving and indoor localization datasets." ] }
1908.00524
2965027008
Cameras and 2D laser scanners, in combination, are able to provide low-cost, light-weight and accurate solutions, which make their fusion well-suited for many robot navigation tasks. However, correct data fusion depends on precise calibration of the rigid body transform between the sensors. In this paper we present the first framework that makes use of Convolutional Neural Networks (CNNs) for odometry estimation fusing 2D laser scanners and mono-cameras. The use of CNNs provides the tools to not only extract the features from the two sensors, but also to fuse and match them without needing a calibration between the sensors. We transform the odometry estimation into an ordinal classification problem in order to find accurate rotation and translation values between consecutive frames. Results on a real road dataset show that the fusion network runs in real-time and is able to improve the odometry estimation of a single sensor alone by learning how to fuse two different types of data.
Laser scanners are also popular for classic pose estimation because of their accuracy. Classic approaches to this problem match two point clouds and estimate the transformation between them; this solution is known as Iterative Closest Point (ICP) @cite_16 . Since then, more robust and complex algorithms have been presented to achieve the same objective. LOAM @cite_12 is currently one of the most popular methods because of its high accuracy and its ability to achieve real-time processing by running two different algorithms in parallel.
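For reference, one ICP iteration as described above reduces to two steps: nearest-neighbor association and a closed-form rigid alignment (Kabsch/SVD). The sketch below is a minimal 2D version under the usual assumptions (a reasonable initial alignment, substantial overlap, no outlier rejection); systems such as LOAM add feature selection, outlier handling and motion compensation on top of this basic idea.

import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Closed-form R, t minimizing sum ||R p_i + t - q_i||^2 (Kabsch / SVD)."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = q_bar - R @ p_bar
    return R, t

def icp(source, target, n_iters=20):
    """Align a 2D source scan (N, 2) to a target scan (M, 2); returns R, t."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(n_iters):
        _, idx = tree.query(src)                 # nearest-neighbor association
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t                      # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

The same structure extends to 3D by replacing the 2x2 rotation with a 3x3 one; the learned approaches discussed next effectively replace both steps with a network that regresses the relative motion directly.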
{ "cite_N": [ "@cite_16", "@cite_12" ], "mid": [ "2132512702", "2023513943", "153084048", "2141827760" ], "abstract": [ "We present a robot-pose-registration algorithm, which is entirely based on large planar-surface patches extracted from point clouds sampled from a three-dimensional (3-D) sensor. This approach offers an alternative to the traditional point-to-point iterative-closest-point (ICP) algorithm, its point-to-plane variant, as well as newer grid-based algorithms, such as the 3-D normal distribution transform (NDT). The simpler case of known plane correspondences is tackled first by deriving expressions for least-squares pose estimation considering plane-parameter uncertainty computed during plane extraction. Closed-form expressions for covariances are also derived. To round-off the solution, we present a new algorithm, which is called minimally uncertain maximal consensus (MUMC), to determine the unknown plane correspondences by maximizing geometric consistency by minimizing the uncertainty volume in configuration space. Experimental results from three 3-D sensors, viz., Swiss-Ranger, University of South Florida Odetics Laser Detection and Ranging, and an actuated SICK S300, are given. The first two have low fields of view (FOV) and moderate ranges, while the third has a much bigger FOV and range. Experimental results show that this approach is not only more robust than point- or grid-based approaches in plane-rich environments, but it is also faster, requires significantly less memory, and offers a less-cluttered planar-patches-based visualization.", "We present a robust plane finding algorithm that when combined with plane-based frame-to-frame registration gives accurate real-time pose estimation. Our plane extraction is capable of handling large and sparse datasets such as those generated from spinning multi-laser sensors such as the Velodyne HDL-32E LiDAR. We test our algorithm on frame-to-frame registration in a closed-loop indoor path comprising 827 successive 3D laser scans (over 57 million points), using no additional information (e.g., odometry, IMU). Our algorithm outperforms, in both accuracy and time, three state-of-the-art methods, based on iterative closest point (ICP), plane-based randomized Hough transform, and planar region growing.", "We propose a powerful pipeline for determining the pose of a query image relative to a point cloud reconstruction of a large scene consisting of more than one million 3D points. The key component of our approach is an efficient and effective search method to establish matches between image features and scene points needed for pose estimation. Our main contribution is a framework for actively searching for additional matches, based on both 2D-to-3D and 3D-to-2D search. A unified formulation of search in both directions allows us to exploit the distinct advantages of both strategies, while avoiding their weaknesses. Due to active search, the resulting pipeline is able to close the gap in registration performance observed between efficient search methods and approaches that are allowed to run for multiple seconds, without sacrificing run-time efficiency. Our method achieves the best registration performance published so far on three standard benchmark datasets, with run-times comparable or superior to the fastest state-of-the-art methods.", "We propose a novel method for tracking an articulated model in a 3D-point cloud. 
The tracking problem is formulated as the registration of two point sets, one of them parameterised by the model’s state vector and the other acquired from a 3D-sensor system. Finding the correct parameter vector is posed as a linear estimation problem, which is solved by means of a scaled unscented Kalman filter. Our method draws on concepts from the widely used iterative closest point registration algorithm (ICP), basing the measurement model on point correspondences established between the synthesised model point cloud and the measured 3D-data. We apply the algorithm to kinematically track a model of the human upper body on a point cloud obtained through stereo image processing from one or more stereo cameras. We determine torso position and orientation as well as joint angles of shoulders and elbows. The algorithm has been successfully tested on thousands of frames of real image data. Challenging sequences of several minutes length where tracked correctly. Complete processing time remains below one second per frame." ] }
1908.00524
2965027008
Cameras and 2D laser scanners, in combination, are able to provide low-cost, light-weight and accurate solutions, which make their fusion well-suited for many robot navigation tasks. However, correct data fusion depends on precise calibration of the rigid body transform between the sensors. In this paper we present the first framework that makes use of Convolutional Neural Networks (CNNs) for odometry estimation fusing 2D laser scanners and mono-cameras. The use of CNNs provides the tools to not only extract the features from the two sensors, but also to fuse and match them without needing a calibration between the sensors. We transform the odometry estimation into an ordinal classification problem in order to find accurate rotation and translation values between consecutive frames. Results on a real road dataset show that the fusion network runs in real-time and is able to improve the odometry estimation of a single sensor alone by learning how to fuse two different types of data.
The application of deep learning techniques to laser scanner data for this purpose is still considered a new challenge, and only a few papers have addressed it. @cite_21 were the first to propose applying CNNs to 3D laser scanner data to estimate odometry. Their approach provides a reasonable odometry estimate, but it is still not competitive with state-of-the-art scan matching methods. Later, @cite_10 presented another approach that uses CNNs with 3D laser scanners for IMU-assisted odometry. Their results achieve high precision for translation, close to state-of-the-art methods such as LOAM @cite_12 , but the method cannot estimate rotation with sufficient precision. Considering these results, the authors propose using their method as a translation estimator together with an Inertial Measurement Unit (IMU) to obtain the rotation. Another drawback is that, according to the KITTI benchmark, the method is slower than LOAM even though it uses CNNs.
{ "cite_N": [ "@cite_21", "@cite_10", "@cite_12" ], "mid": [ "2932490017", "2031998160", "2277848489", "2140924050" ], "abstract": [ "The use of 2D laser scanners is attractive for the autonomous driving industry because of its accuracy, light-weight and low-cost. However, since only a 2D slice of the surrounding environment is detected at each scan, it is a challenge to execute important tasks such as the localization of the vehicle. In this paper we present a novel framework that explores the use of deep Recurrent Convolutional Neural Networks (RCNN) for odometry estimation using only 2D laser scanners. The application of RCNNs provides the tools to not only extract the features of the laser scanner data using Convolutional Neural Networks (CNNs), but in addition it models the possible connections among consecutive scans using the Long Short-Term Memory (LSTM) Recurrent Neural Network. Results on a real road dataset show that the method can run in real-time without using GPU acceleration and have competitive performance compared to other methods, being an interesting approach that could complement traditional localization systems.", "In this paper we present a novel algorithm for fast and robust stereo visual odometry based on feature selection and tracking (SOFT). The reduction of drift is based on careful selection of a subset of stable features and their tracking through the frames. Rotation and translation between two consecutive poses are estimated separately. The five point method is used for rotation estimation, whereas the three point method is used for estimating translation. Experimental results show that the proposed algorithm has an average pose error of 1.03 with processing speed above 10 Hz. According to publicly available KITTI leaderboard, SOFT outperforms all other validated methods. We also present a modified IMU-aided version of the algorithm, fast and suitable for embedded systems. This algorithm employs an IMU for outlier rejection and Kalman filter for rotation refinement. Experiments show that the IMU based system runs at 20 Hz on an ODROID U3 ARM-based embedded computer without any hardware acceleration. Integration of all components is described and experimental results are presented.", "Here we propose a real-time method for low-drift odometry and mapping using range measurements from a 3D laser scanner moving in 6-DOF. The problem is hard because the range measurements are received at different times, and errors in motion estimation (especially without an external reference such as GPS) cause mis-registration of the resulting point cloud. To date, coherent 3D maps have been built by off-line batch methods, often using loop closure to correct for drift over time. Our method achieves both low-drift in motion estimation and low-computational complexity. The key idea that makes this level of performance possible is the division of the complex problem of Simultaneous Localization and Mapping, which seeks to optimize a large number of variables simultaneously, into two algorithms. One algorithm performs odometry at a high-frequency but at low fidelity to estimate velocity of the laser scanner. Although not necessary, if an IMU is available, it can provide a motion prior and mitigate for gross, high-frequency motion. A second algorithm runs at an order of magnitude lower frequency for fine matching and registration of the point cloud. Combination of the two algorithms allows map creation in real-time. 
Our method has been evaluated by indoor and outdoor experiments as well as the KITTI odometry benchmark. The results indicate that the proposed method can achieve accuracy comparable to the state of the art offline, batch methods.", "Vision-aided inertial navigation systems (V-INSs) can provide precise state estimates for the 3-D motion of a vehicle when no external references (e.g., GPS) are available. This is achieved by combining inertial measurements from an inertial measurement unit (IMU) with visual observations from a camera under the assumption that the rigid transformation between the two sensors is known. Errors in the IMU-camera extrinsic calibration process cause biases that reduce the estimation accuracy and can even lead to divergence of any estimator processing the measurements from both sensors. In this paper, we present an extended Kalman filter for precisely determining the unknown transformation between a camera and an IMU. Contrary to previous approaches, we explicitly account for the time correlation of the IMU measurements and provide a figure of merit (covariance) for the estimated transformation. The proposed method does not require any special hardware (such as spin table or 3-D laser scanner) except a calibration target. Furthermore, we employ the observability rank criterion based on Lie derivatives and prove that the nonlinear system describing the IMU-camera calibration process is observable. Simulation and experimental results are presented that validate the proposed method and quantify its accuracy." ] }
1908.00524
2965027008
Cameras and 2D laser scanners, in combination, are able to provide low-cost, light-weight and accurate solutions, which make their fusion well-suited for many robot navigation tasks. However, correct data fusion depends on precise calibration of the rigid body transform between the sensors. In this paper we present the first framework that makes use of Convolutional Neural Networks (CNNs) for odometry estimation fusing 2D laser scanners and mono-cameras. The use of CNNs provides the tools to not only extract the features from the two sensors, but also to fuse and match them without needing a calibration between the sensors. We transform the odometry estimation into an ordinal classification problem in order to find accurate rotation and translation values between consecutive frames. Results on a real road dataset show that the fusion network runs in real-time and is able to improve the odometry estimation of a single sensor alone by learning how to fuse two different types of data information.
The use of 2D laser scanners instead of 3D laser scanners could considerably reduce the price of future intelligent vehicles and the need for high computational resources. In @cite_5 , the authors of this work presented a solution that relies only on a 2D laser scanner for odometry estimation using Recurrent Convolutional Neural Networks (RCNNs). The method showed promising results along with a very fast computational time. However, one of the main difficulties of this approach is that, in challenging environments, the 2D laser scanner sometimes cannot detect many obstacles and therefore generates inaccurate results. Considering this, in this work we propose a new solution that improves the odometry estimation by fusing mono-cameras and 2D laser scanners using only Convolutional Neural Networks.
{ "cite_N": [ "@cite_5" ], "mid": [ "2932490017", "2598706937", "1910014366", "2292228516" ], "abstract": [ "The use of 2D laser scanners is attractive for the autonomous driving industry because of its accuracy, light-weight and low-cost. However, since only a 2D slice of the surrounding environment is detected at each scan, it is a challenge to execute important tasks such as the localization of the vehicle. In this paper we present a novel framework that explores the use of deep Recurrent Convolutional Neural Networks (RCNN) for odometry estimation using only 2D laser scanners. The application of RCNNs provides the tools to not only extract the features of the laser scanner data using Convolutional Neural Networks (CNNs), but in addition it models the possible connections among consecutive scans using the Long Short-Term Memory (LSTM) Recurrent Neural Network. Results on a real road dataset show that the method can run in real-time without using GPU acceleration and have competitive performance compared to other methods, being an interesting approach that could complement traditional localization systems.", "This paper studies monocular visual odometry (VO) problem. Most of existing VO algorithms are developed under a standard pipeline including feature extraction, feature matching, motion estimation, local optimisation, etc. Although some of them have demonstrated superior performance, they usually need to be carefully designed and specifically fine-tuned to work well in different environments. Some prior knowledge is also required to recover an absolute scale for monocular VO. This paper presents a novel end-to-end framework for monocular VO by using deep Recurrent Convolutional Neural Networks (RCNNs). Since it is trained and deployed in an end-to-end manner, it infers poses directly from a sequence of raw RGB images (videos) without adopting any module in the conventional VO pipeline. Based on the RCNNs, it not only automatically learns effective feature representation for the VO problem through Convolutional Neural Networks, but also implicitly models sequential dynamics and relations using deep Recurrent Neural Networks. Extensive experiments on the KITTI VO dataset show competitive performance to state-of-the-art methods, verifying that the end-to-end Deep Learning technique can be a viable complement to the traditional VO systems.", "We present a new approach to detection and tracking of moving objects with a 2D laser scanner for autonomous driving applications. Objects are modelled with a set of rigidly attached sample points along their boundaries whose positions are initialized with and updated by raw laser measurements, thus allowing a non-parametric representation that is capable of representing objects independent of their classes and shapes. Detection and tracking of such object models are handled in a theoretically principled manner as a Bayes filter where the motion states and shape information of all objects are represented as a part of a joint state which includes in addition the pose of the sensor and geometry of the static part of the world. We derive the prediction and observation models for the evolution of the joint state, and describe how the knowledge of the static local background helps in identifying dynamic objects from static ones in a principled and straightforward way. Dealing with raw laser points poses a significant challenge to data association. 
We propose a hierarchical approach, and present a new variant of the well-known Joint Compatibility Branch and Bound algorithm to respect and take advantage of the constraints of the problem introduced through correlations between observations. Finally, we calibrate the system systematically on real world data containing 7,500 labelled object examples and validate on 6,000 test cases. We demonstrate its performance over an existing industry standard targeted at the same problem domain as well as a classical approach to model-free object tracking.", "2D laser scanners have been widely used for accomplishing a number of challenging AI and robotics tasks such as mapping of large environments and localization in highly dynamic environments. However, using only one 2D laser scanner could be insufficient and less reliable for accomplishing tasks in 3D environments. The problem could be solved using multiple 2D laser scanners or a 3D laser scanner for performing 3D perception. Unfortunately, the cost of such 3D sensing systems is still too high for enabling AI and robotics applications. In this paper, we propose to use a 2D laser scanner and a stereo camera for accomplishing simultaneous localization and mapping (SLAM) in 3D indoor environments in which the 2D laser scanner is used for SLAM and the stereo camera is used for 3D mapping. The experimental results demonstrate that the proposed system is lower cost yet effective, and the obstacle detection rate is significant improved compares to using one 2D laser scanner for mapping." ] }
1908.00524
2965027008
Cameras and 2D laser scanners, in combination, are able to provide low-cost, light-weight and accurate solutions, which make their fusion well-suited for many robot navigation tasks. However, correct data fusion depends on precise calibration of the rigid body transform between the sensors. In this paper we present the first framework that makes use of Convolutional Neural Networks (CNNs) for odometry estimation fusing 2D laser scanners and mono-cameras. The use of CNNs provides the tools to not only extract the features from the two sensors, but also to fuse and match them without needing a calibration between the sensors. We transform the odometry estimation into an ordinal classification problem in order to find accurate rotation and translation values between consecutive frames. Results on a real road dataset show that the fusion network runs in real-time and is able to improve the odometry estimation of a single sensor alone by learning how to fuse two different types of data information.
We present the first deep learning method for odometry estimation based on the fusion of a 2D laser scanner and a camera. The proposed network provides a real-time solution that overcomes the difficulties encountered by each sensor alone. We also present how to transform the odometry regression problem into a series of simpler binary classification subproblems, known as ordinal classification. Finally, we explore this solution in outdoor environments, training and testing it with the KITTI @cite_22 dataset, which contains sequences from different types of scenarios.
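As an illustration of the ordinal classification step mentioned above, the sketch below shows one common way to recast a continuous odometry target (e.g., a frame-to-frame rotation) as a set of ordered binary labels and to decode the outputs back to a value. The bin range, number of bins and function names are hypothetical and chosen only for illustration; this is not the exact encoding used in the paper.

```python
import numpy as np

# Hypothetical discretization of a frame-to-frame rotation in [-0.1, 0.1] rad.
num_bins = 21
edges = np.linspace(-0.1, 0.1, num_bins + 1)      # 22 edges delimiting 21 bins
thresholds = edges[1:-1]                          # K-1 = 20 ordered thresholds
centers = 0.5 * (edges[:-1] + edges[1:])          # one representative value per bin

def encode_ordinal(value, thresholds):
    """Ordinal encoding: label[k] = 1 if the target exceeds threshold k."""
    return (value > thresholds).astype(np.float32)

def decode_ordinal(probabilities, centers):
    """Decode K-1 sigmoid outputs: the rank is the number of thresholds
    judged exceeded, and the estimate is that bin's representative value."""
    rank = int(np.sum(probabilities > 0.5))
    return centers[rank]

y = 0.037                                # toy ground-truth rotation
labels = encode_ordinal(y, thresholds)   # targets for K-1 binary heads
y_hat = decode_ordinal(labels, centers)  # perfect outputs recover ~y
print(labels.sum(), y, y_hat)            # error bounded by half a bin width
```

A real network would replace the hand-encoded labels with sigmoid outputs from K-1 classification heads trained with binary cross-entropy, one set of heads per pose component.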
{ "cite_N": [ "@cite_22" ], "mid": [ "2909119029", "2296228853", "2754329383", "2794337790" ], "abstract": [ "This paper presents a deep network based unsupervised visual odometry system for 6-DoF camera pose estimation and finding dense depth map for its monocular view. The proposed network is trained using unlabeled binocular stereo image pairs and is shown to provide superior performance in depth and ego-motion estimation compared to the existing state-of-the-art. This is achieved by introducing a novel objective function and training the network using temporally alligned sequences of monocular images. The objective function is based on the Charbonnier penalty applied to spatial and bi-directional temporal reconstruction losses. The overall novelty of the approach lies in the fact that the proposed deep framework combines a disparity-based depth estimation network with a pose estimation network to obtain absolute scale-aware 6-DoF camera pose and superior depth map. According to our knowledge, such a framework with complete unsupervised end-to-end learning has not been tried so far, making it a novel contribution in the field. The effectiveness of the approach is demonstrated through performance comparison with the state-of-the-art methods on KITTI driving dataset.", "We propose a real-time method for odometry and mapping using range measurements from a 2-axis lidar moving in 6-DOF. The problem is hard because the range measurements are received at different times, and errors in motion estimation can cause mis-registration of the resulting point cloud. To date, coherent 3D maps can be built by off-line batch methods, often using loop closure to correct for drift over time. Our method achieves both low-drift and low-computational complexity without the need for high accuracy ranging or inertial measurements. The key idea in obtaining this level of performance is the division of the complex problem of simultaneous localization and mapping, which seeks to optimize a large number of variables simultaneously, by two algorithms. One algorithm performs odometry at a high frequency but low fidelity to estimate velocity of the lidar. Another algorithm runs at a frequency of an order of magnitude lower for fine matching and registration of the point cloud. Combination of the two algorithms allows the method to map in real-time. The method has been evaluated by a large set of experiments as well as on the KITTI odometry benchmark. The results indicate that the method can achieve accuracy at the level of state of the art offline batch methods.", "We present a novel method to fuse the power of deep networks with the computational efficiency of geometric and probabilistic localization algorithms. In contrast to other methods that completely replace a classical visual estimator with a deep network, we propose an approach that uses a convolutional neural network to learn difficult-to-model corrections to the estimator from ground-truth training data. To this end, we derive a novel loss function for learning SE(3) corrections based on a matrix Lie groups approach, with a natural formulation for balancing translation and rotation errors. We use this loss to train a deep pose correction network (DPC-Net) that predicts corrections for a particular estimator, sensor and environment. 
Using the KITTI odometry dataset, we demonstrate significant improvements to the accuracy of a computationally-efficient sparse stereo visual odometry pipeline, that render it as accurate as a modern computationally-intensive dense estimator. Further, we show how DPC-Net can be used to mitigate the effect of poorly calibrated lens distortion parameters.", "Despite learning based methods showing promising results in single view depth estimation and visual odometry, most existing approaches treat the tasks in a supervised manner. Recent approaches to single view depth estimation explore the possibility of learning without full supervision via minimizing photometric error. In this paper, we explore the use of stereo sequences for learning depth and visual odometry. The use of stereo sequences enables the use of both spatial (between left-right pairs) and temporal (forward backward) photometric warp error, and constrains the scene depth and camera motion to be in a common, real-world scale. At test time our framework is able to estimate single view depth and two-view odometry from a monocular sequence. We also show how we can improve on a standard photometric warp loss by considering a warp of deep features. We show through extensive experiments that: (i) jointly training for single view depth and visual odometry improves depth prediction because of the additional constraint imposed on depths and achieves competitive results for visual odometry; (ii) deep feature-based warping loss improves upon simple photometric warp loss for both single view depth estimation and visual odometry. Our method outperforms existing learning based methods on the KITTI driving dataset in both tasks. The source code is available at this https URL" ] }
1908.00432
2966501858
Payment channels allow transactions between participants of the blockchain to be executed securely off-chain, and thus provide a promising solution for the scalability problem of popular blockchains. We study the online network design problem for payment channels, assuming a central coordinator. We focus on a single channel, where the coordinator desires to maximize the number of accepted transactions under given capital constraints. Despite the simplicity of the problem, we present a flurry of impossibility results, both for deterministic and randomized algorithms against adaptive as well as oblivious adversaries.
Additionally, the design of payment networks with fees from the viewpoint of a payment service provider who wants to maximize profit is discussed in @cite_15 . In contrast to this work, where we assume a constant fee for every transaction, in @cite_15 each channel requires a different fee, much like tolls on a road network. Despite the different assumptions, both works share the same objective: to maximize the profit of the network operator (and designer).
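To make the constant-fee setting concrete, here is a toy simulation (not an algorithm from either paper) of a single payment channel in which every accepted transaction earns the operator the same fee and is accepted only if the sender's current balance covers it. Function and variable names are illustrative.

```python
def channel_profit(transactions, capital_a, capital_b, fee=1.0):
    """Toy online simulation of a single payment channel.

    transactions: list of signed amounts; positive means A pays B,
    negative means B pays A. A transaction is accepted only if the
    sender currently holds enough balance; each accepted transaction
    earns the operator a constant fee (the assumption made in this work)."""
    balance_a, balance_b = capital_a, capital_b
    profit = 0.0
    for amount in transactions:
        if amount >= 0 and balance_a >= amount:    # A -> B
            balance_a -= amount
            balance_b += amount
            profit += fee
        elif amount < 0 and balance_b >= -amount:  # B -> A
            balance_b += amount
            balance_a -= amount
            profit += fee
        # otherwise the transaction is rejected (it would go on-chain instead)
    return profit

# With 3 units of capital all on A's side, alternating payments are all served:
print(channel_profit([3, -3, 3, -3], capital_a=3, capital_b=0))   # 4.0
# The same total capital, but an unfavorable transaction order, earns fewer fees:
print(channel_profit([-3, 3, -3, 3], capital_a=3, capital_b=0))   # 3.0
```

The example is only meant to show how both the amount of capital and its split across the two endpoints constrain which transaction sequences can be accepted, which is the tension studied in the online setting.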
{ "cite_N": [ "@cite_15" ], "mid": [ "2890103577", "2152629569", "1974835443", "2048523876" ], "abstract": [ "Payment channels are the most prominent solution to the blockchain scalability problem. We introduce the problem of network design with fees for payment channels from the perspective of a Payment Service Provider (PSP). Given a set of transactions, we examine the optimal graph structure and fee assignment to maximize the PSP’s profit. A customer prefers to route transactions through the PSP’s network if the cheapest path from sender to receiver is financially interesting, i.e., if the path costs less than the blockchain fee. When the graph structure is a tree, and the PSP facilitates all transactions, the problem can be formulated as a linear program. For a path graph, we present a polynomial time algorithm to assign optimal fees. We also show that the star network, where the center is an additional node acting as an intermediary, is a near-optimal solution to the network design problem.", "We study revenue-maximizing pricing by a service provider in a communication network and compare revenues from simple pricing rules to the maximum revenues that are feasible. In particular, we focus on flat entry fees as the simplest pricing rule. We provide a lower bound for the ratio between the revenue from this pricing rule and maximum revenue, which we refer to as the price of simplicity. We characterize what types of environments lead to a low price of simplicity and show that in a range of environments, the loss of revenue from using simple entry fees is small. We then study the price of simplicity for a simple non-linear pricing (price discrimination) scheme based on the Paris Metro Pricing. The service provider creates different service classes and charges differential entry fees for these classes. We show that the gain from this type of price discrimination is small, particularly in environments in which the simple entry fee pricing leads to a low price of simplicity.", "This paper derives a simple fast algorithm for computing minimal-revenue tolls in a single-origin network. It assumes that trips have the same value of time, that they make user-optimizing path choices, and they have multiple destinations--but come from the same origin. The algorithm finds tolls that induce a traffic pattern minimizing average time per trip at a minimal average toll per trip. Formally, let X be the set of all feasible traffic assignments and assume all trips have the same value of time [alpha]>0; then the algorithm inputs a network with the system-optimizing flow xo vector and associated arc times t(xo). It outputs an optimal arc toll vector c*[greater-or-equal, slanted]0 such that for x[set membership, variant]X:([alpha]t(xo)+c*)T(x-xo)[greater-or-equal, slanted]0  for x[set membership, variant]X (user  optimal) That is, xo[set membership, variant]X becomes user-optimal, too. Compared to marginal-cost tolls, which would also produce this same effect, the min-revenue tolls c* possess important practical advantages. Perforce, they provide the same system-optimal network usage, but at a much cheaper price to the traveler--remarkably, they assure at least one toll-free path to every destination. In addition, min-revenue tolls have admirable stability: even large increases in travel demand can leave the solution c* virtually unchanged--an important feature since traffic volume can change continuously but tolls cannot. Finally, they provide guidance regarding potential network improvements. 
The algorithm presented is very efficient, easily able to solve networks with tens of thousands of nodes. A greedy potentials calculator, its expected running time for an n-node road network is a speedy O(n), while its worst case time for any network is O(m), where m is the number of arcs.", "The second of a two-part series, this paper derives an efficient solution to the minimal-revenue tolls problem. As introduced in Part I, this problem can be defined as follows: Assuming each trip uses only a path whose generalized cost is smallest, find a set of arc tolls that simultaneously minimizes both average travel time and out-of-pocket cost. As a point of departure, this paper first re-solves the single-origin problem of Part I, modeling it as a linear program. Then with a change of variable, it transforms the LP's dual into a simple longest-path problem on an acyclic network. The multiple-origin problem - where one toll for each arc applies to all origins - solves analogously. In this case, however, the dual becomes an elementary linear multi-commodity max-cost flow problem with an easy bundling constraint and infinite arc capacities. After a minor reformulation that simplifies the model's input to better accommodate output from common traffic assignment software, a solution algorithm is exemplified with a numerical example." ] }
1908.00439
2965874802
In this paper, we tackle the problem of 3D human shape estimation from single RGB images. While the recent progress in convolutional neural networks has allowed impressive results for 3D human pose estimation, estimating the full 3D shape of a person is still an open issue. Model-based approaches can output precise meshes of naked under-cloth human bodies but fail to estimate details and un-modelled elements such as hair or clothing. On the other hand, non-parametric volumetric approaches can potentially estimate complete shapes but, in practice, they are limited by the resolution of the output grid and cannot produce detailed estimates. In this work, we propose a non-parametric approach that employs a double depth map to represent the 3D shape of a person: a visible depth map and a "hidden" depth map are estimated and combined, to reconstruct the human 3D shape as done with a "mould". This representation through 2D depth maps allows a higher resolution output with a much lower dimension than voxel-based volumetric representations. Additionally, our fully derivable depth-based model allows us to efficiently incorporate a discriminator in an adversarial fashion to improve the accuracy and "humanness" of the 3D output. We train and quantitatively validate our approach on SURREAL and on 3D-HUMANS, a new photorealistic dataset made of semi-synthetic in-house videos annotated with 3D ground truth surfaces.
3D human body shape from images. Most existing methods for body shape estimation from single images rely on a parametric model of the human body whose pose and shape parameters are optimized to match image evidence @cite_5 @cite_21 @cite_18 @cite_35 . This optimization process is usually initialized with an estimate of the human pose supplied by the user @cite_21 or obtained automatically through a detector @cite_5 @cite_18 @cite_35 or inertial sensors @cite_9 . Instead of optimizing mesh and skeleton parameters, recent approaches train neural networks that directly predict 3D shape and skeleton configurations given a monocular RGB video @cite_37 , multiple silhouettes @cite_28 or a single image @cite_7 @cite_10 @cite_38 . Recently, BodyNet @cite_2 was proposed to infer the volumetric body shape by generating likelihoods on the 3D occupancy grid of a person from a single image.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_37", "@cite_38", "@cite_7", "@cite_28", "@cite_9", "@cite_21", "@cite_2", "@cite_5", "@cite_10" ], "mid": [ "2545173102", "2793768642", "2792747672", "2891377836" ], "abstract": [ "We describe a solution to the challenging problem of estimating human body shape from a single photograph or painting. Our approach computes shape and pose parameters of a 3D human body model directly from monocular image cues and advances the state of the art in several directions. First, given a user-supplied estimate of the subject's height and a few clicked points on the body we estimate an initial 3D articulated body pose and shape. Second, using this initial guess we generate a tri-map of regions inside, outside and on the boundary of the human, which is used to segment the image using graph cuts. Third, we learn a low-dimensional linear model of human shape in which variations due to height are concentrated along a single dimension, enabling height-constrained estimation of body shape. Fourth, we formulate the problem of parametric human shape from shading. We estimate the body pose, shape and reflectance as well as the scene lighting that produces a synthesized body that robustly matches the image evidence. Quantitative experiments demonstrate how smooth shading provides powerful constraints on human shape. We further demonstrate a novel application in which we extract 3D human models from archival photographs and paintings.", "This paper describes a method to obtain accurate 3D body models and texture of arbitrary people from a single, monocular video in which a person is moving. Based on a parametric body model, we present a robust processing pipeline to infer 3D model shapes including clothed people with 4.5mm reconstruction accuracy. At the core of our approach is the transformation of dynamic body pose into a canonical frame of reference. Our main contribution is a method to transform the silhouette cones corresponding to dynamic human silhouettes to obtain a visual hull in a common reference frame. This enables efficient estimation of a consensus 3D shape, texture and implanted animation skeleton based on a large number of frames. Results on 4 different datasets demonstrate the effectiveness of our approach to produce accurate 3D models. Requiring only an RGB camera, our method enables everyone to create their own fully animatable digital double, e.g., for social VR applications or virtual try-on for online fashion shopping.", "We propose an end-to-end architecture for joint 2D and 3D human pose estimation in natural images. Key to our approach is the generation and scoring of a number of pose proposals per image, which allows us to predict 2D and 3D poses of multiple people simultaneously. Hence, our approach does not require an approximate localization of the humans for initialization. Our Localization-Classification-Regression architecture, named LCR-Net, contains 3 main components: 1) the pose proposal generator that suggests candidate poses at different locations in the image; 2) a classifier that scores the different pose proposals; and 3) a regressor that refines pose proposals both in 2D and 3D. All three stages share the convolutional feature layers and are trained jointly. The final pose estimation is obtained by integrating over neighboring pose hypotheses, which is shown to improve over a standard non maximum suppression algorithm. 
Our method recovers full-body 2D and 3D poses, hallucinating plausible body parts when the persons are partially occluded or truncated by the image boundary. Our approach significantly outperforms the state of the art in 3D pose estimation on Human3.6M, a controlled environment. Moreover, it shows promising results on real images for both single and multi-person subsets of the MPII 2D pose benchmark.", "We present a feed-forward, multitask, end-to-end trainable system for the integrated 2d localization, as well as 3d pose and shape estimation, of multiple people in monocular images. The challenge is the formal modeling of the problem that intrinsically requires discrete and continuous computation (e.g. grouping people vs. predicting 3d pose). The model identifies human body structures (joints and limbs) in images, groups them based on 2d and 3d information fused using learned scoring functions, and optimally aggregates such responses into partial or complete 3d human skeleton hypotheses under kinematic tree constraints, but without knowing in advance the number of people in the scene and their visibility relations. We design a single multi-task deep neural network with differentiable stages where the person grouping problem is formulated as an integer program based on learned body part scores parameterized by both 2d and 3d information. This avoids suboptimality resulting from separate 2d and 3d reasoning, with grouping performed based on the combined information. The calculation can be formally described as a linear binary integer program with globally optimal solution. The final predictive stage of 3d pose and shape is based on a learned attention process where information from different human body parts is optimally fused. State-of-the-art results are obtained in large scale datasets like Human3.6M and Panoptic." ] }
1908.00439
2965874802
In this paper, we tackle the problem of 3D human shape estimation from single RGB images. While the recent progress in convolutional neural networks has allowed impressive results for 3D human pose estimation, estimating the full 3D shape of a person is still an open issue. Model-based approaches can output precise meshes of naked under-cloth human bodies but fail to estimate details and un-modelled elements such as hair or clothing. On the other hand, non-parametric volumetric approaches can potentially estimate complete shapes but, in practice, they are limited by the resolution of the output grid and cannot produce detailed estimates. In this work, we propose a non-parametric approach that employs a double depth map to represent the 3D shape of a person: a visible depth map and a "hidden" depth map are estimated and combined, to reconstruct the human 3D shape as done with a "mould". This representation through 2D depth maps allows a higher resolution output with a much lower dimension than voxel-based volumetric representations. Additionally, our fully derivable depth-based model allows us to efficiently incorporate a discriminator in an adversarial fashion to improve the accuracy and "humanness" of the 3D output. We train and quantitatively validate our approach on SURREAL and on 3D-HUMANS, a new photorealistic dataset made of semi-synthetic in-house videos annotated with 3D ground truth surfaces.
3D human datasets. Current approaches for human 3D pose estimation are built on deep architectures trained and evaluated on large datasets acquired in controlled environments with Motion Capture systems @cite_45 @cite_44 @cite_14 . However, while the typology of human poses in these datasets captures the space of human motions very well, the visual appearance of the corresponding images is not representative of the scenarios found in unconstrained real-world images. There has been a recent effort to generate in-the-wild data with ground-truth pose annotations @cite_6 @cite_39 . All these datasets provide accurate 3D annotations for a small set of body keypoints and ignore the 3D surface, with the exception of @cite_18 and @cite_42 , who annotate the SMPL parameters in real-world images manually or using IMUs. Although the resulting dataset can be employed to evaluate under-cloth 3D body shapes, its annotations are not detailed enough and, importantly, its size is not sufficient to train deep networks.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_42", "@cite_6", "@cite_39", "@cite_44", "@cite_45" ], "mid": [ "2467838519", "2798646183", "2101032778", "2797184202" ], "abstract": [ "This paper addresses the problem of 3D human pose estimation in the wild. A significant challenge is the lack of training data, i.e., 2D images of humans annotated with 3D poses. Such data is necessary to train state-of-the-art CNN architectures. Here, we propose a solution to generate a large set of photorealistic synthetic images of humans with 3D pose annotations. We introduce an image-based synthesis engine that artificially augments a dataset of real images with 2D human pose annotations using 3D Motion Capture (MoCap) data. Given a candidate 3D pose our algorithm selects for each joint an image whose 2D pose locally matches the projected 3D pose. The selected images are then combined to generate a new synthetic image by stitching local image patches in a kinematically constrained manner. The resulting images are used to train an end-to-end CNN for full-body 3D pose estimation. We cluster the training data into a large number of pose classes and tackle pose estimation as a K-way classification problem. Such an approach is viable only with large training sets such as ours. Our method outperforms the state of the art in terms of 3D pose estimation in controlled environments (Human3.6M) and shows promising results for in-the-wild images (LSP). This demonstrates that CNNs trained on artificial images generalize well to real images.", "Our ability to train end-to-end systems for 3D human pose estimation from single images is currently constrained by the limited availability of 3D annotations for natural images. Most datasets are captured using Motion Capture (MoCap) systems in a studio setting and it is difficult to reach the variability of 2D human pose datasets, like MPII or LSP. To alleviate the need for accurate 3D ground truth, we propose to use a weaker supervision signal provided by the ordinal depths of human joints. This information can be acquired by human annotators for a wide range of images and poses. We showcase the effectiveness and flexibility of training Convolutional Networks (ConvNets) with these ordinal relations in different settings, always achieving competitive performance with ConvNets trained with accurate 3D joint coordinates. Additionally, to demonstrate the potential of the approach, we augment the popular LSP and MPII datasets with ordinal depth annotations. This extension allows us to present quantitative and qualitative evaluation in non-studio conditions. Simultaneously, these ordinal annotations can be easily incorporated in the training procedure of typical ConvNets for 3D human pose. Through this inclusion we achieve new state-of-the-art performance for the relevant benchmarks and validate the effectiveness of ordinal depth supervision for 3D human pose.", "We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. 
Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20 improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http: vision.imar.ro human3.6m .", "We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on the starkly limited publicly available 3D pose data. Using only the existing 3D pose data and 2D pose data, we show state-of-the-art performance on established benchmarks through transfer of learned features, while also generalizing to in-the-wild scenes. We further introduce a new training set for human body pose estimation from monocular images of real humans that has the ground truth captured with a multi-camera marker-less motion capture system. It complements existing corpora with greater diversity in pose, human appearance, clothing, occlusion, and viewpoints, and enables an increased scope of augmentation. We also contribute a new benchmark that covers outdoor and indoor scenes, and demonstrate that our 3D pose dataset shows better in-the-wild performance than existing annotated data, which is further improved in conjunction with transfer learning from 2D pose data. All in all, we argue that the use of transfer learning of representations in tandem with algorithmic and data contributions is crucial for general 3D body pose estimation." ] }
1908.00113
2965823887
Physical phenomena in science and engineering are frequently modeled using scalar fields. In scalar field topology, graph-based topological descriptors such as merge trees, contour trees, and Reeb graphs are commonly used to characterize topological changes in the (sub)level sets of scalar fields. One of the biggest challenges and opportunities to advance topology-based visualization is to understand and incorporate uncertainty into such topological descriptors to effectively reason about their underlying data. In this paper, we study a structural average of a set of labeled merge trees and use it to encode uncertainty in data. Specifically, we compute a 1-center tree that minimizes its maximum distance to any other tree in the set under a well-defined metric called the interleaving distance. We provide heuristic strategies that compute structural averages of merge trees whose labels do not fully agree. We further provide an interactive visualization system that resembles a numerical calculator that takes as input a set of merge trees and outputs a tree as their structural average. We also highlight structural similarities between the input and the average and incorporate uncertainty information for visual exploration. We develop a novel measure of uncertainty, referred to as consistency, via a metric-space view of the input trees. Finally, we demonstrate an application of our framework through merge trees that arise from ensembles of scalar fields. Our work is the first to employ interleaving distances and consistency to study a global, mathematically rigorous, structural average of merge trees in the context of uncertainty visualization.
We review the most relevant literature on uncertainty visualization of scalar fields, with a focus on graph-based topological descriptors and topological features. Our work is primarily concerned with structure-wise data uncertainty of scalar fields; for vector and tensor fields, see an overview in @cite_31 .
{ "cite_N": [ "@cite_31" ], "mid": [ "2031619085", "1584564047", "1604082823", "2095233635" ], "abstract": [ "In uncertain scalar fields where data values vary with a certain probability, the strength of this variability indicates the confidence in the data. It does not, however, allow inferring on the effect of uncertainty on differential quantities such as the gradient, which depend on the variability of the rate of change of the data. Analyzing the variability of gradients is nonetheless more complicated, since, unlike scalars, gradients vary in both strength and direction. This requires initially the mathematical derivation of their respective value ranges, and then the development of effective analysis techniques for these ranges. This paper takes a first step into this direction: Based on the stochastic modeling of uncertainty via multivariate random variables, we start by deriving uncertainty parameters, such as the mean and the covariance matrix, for gradients in uncertain discrete scalar fields. We do not make any assumption about the distribution of the random variables. Then, for the first time to our best knowledge, we develop a mathematical framework for computing confidence intervals for both the gradient orientation and the strength of the derivative in any prescribed direction, for instance, the mean gradient direction. While this framework generalizes to 3D uncertain scalar fields, we concentrate on the visualization of the resulting intervals in 2D fields. We propose a novel color diffusion scheme to visualize both the absolute variability of the derivative strength and its magnitude relative to the mean values. A special family of circular glyphs is introduced to convey the uncertainty in gradient orientation. For a number of synthetic and real-world data sets, we demonstrate the use of our approach for analyzing the stability of certain features in uncertain 2D scalar fields, with respect to both local derivatives and feature orientation.", "This paper describes an effort to create new visualizations by exploiting hierarchical scalar topology. First, we build a hierarchical topology through synchronously constructing and simplifying Contour Tree (CT) and Morse-Smale (MS) complex of scalar fields. We then introduce three algorithms based on the hierarchical topology: (1) topology-based multi-resolution contouring — an overview provided for a scalar field by extracting iso-values from the simplified CT and tracing approximate contours across the MS complex cells; (2) topology based spaghetti plots for uncertainty — a seeding scheme based on the hierarchical topology for visualizing uncertainty among ensemble scalar data; (3) virtual ribbons — a new scheme for visualizing multivariate data invented by overlapping visual ribbons which encode the scalar variation of a region covered by uniform contours. We compare the new approaches with current alternatives.", "This paper introduces a novel, non-local characterization of critical points and their global relation in 2D uncertain scalar fields. The characterization is based on the analysis of the support of the probability density functions (PDF) of the input data. Given two scalar fields representing reliable estimations of the bounds of this support, our strategy identifies mandatory critical points: spatial regions and function ranges where critical points have to occur in any realization of the input. 
The algorithm provides a global pairing scheme for mandatory critical points which is used to construct mandatory join and split trees. These trees enable a visual exploration of the common topological structure of all possible realizations of the uncertain data. To allow multi-scale visualization, we introduce a simplification scheme for mandatory critical point pairs revealing the most dominant features. Our technique is purely combinatorial and handles parametric distribution models and ensemble data. It does not depend on any computational parameter and does not suffer from numerical inaccuracy or global inconsistency. The algorithm exploits ideas of the established join split tree computation. It is therefore simple to implement, and its complexity is output-sensitive. We illustrate, evaluate, and verify our method on synthetic and real-world data.", "Uncertainty is ubiquitous in science, engineering and medicine. Drawing conclusions from uncertain data is the normal case, not an exception. While the field of statistical graphics is well established, only a few 2D and 3D visualization and feature extraction methods have been devised that consider uncertainty. We present mathematical formulations for uncertain equivalents of isocontours based on standard probability theory and statistics and employ them in interactive visualization methods. As input data, we consider discretized uncertain scalar fields and model these as random fields. To create a continuous representation suitable for visualization we introduce interpolated probability density functions. Furthermore, we introduce numerical condition as a general means in feature-based visualization. The condition number-which potentially diverges in the isocontour problem-describes how errors in the input data are amplified in feature computation. We show how the average numerical condition of isocontours aids the selection of thresholds that correspond to robust isocontours. Additionally, we introduce the isocontour density and the level crossing probability field; these two measures for the spatial distribution of uncertain isocontours are directly based on the probabilistic model of the input data. Finally, we adapt interactive visualization methods to evaluate and display these measures and apply them to 2D and 3D data sets." ] }
1908.00113
2965823887
Physical phenomena in science and engineering are frequently modeled using scalar fields. In scalar field topology, graph-based topological descriptors such as merge trees, contour trees, and Reeb graphs are commonly used to characterize topological changes in the (sub)level sets of scalar fields. One of the biggest challenges and opportunities to advance topology-based visualization is to understand and incorporate uncertainty into such topological descriptors to effectively reason about their underlying data. In this paper, we study a structural average of a set of labeled merge trees and use it to encode uncertainty in data. Specifically, we compute a 1-center tree that minimizes its maximum distance to any other tree in the set under a well-defined metric called the interleaving distance. We provide heuristic strategies that compute structural averages of merge trees whose labels do not fully agree. We further provide an interactive visualization system that resembles a numerical calculator that takes as input a set of merge trees and outputs a tree as their structural average. We also highlight structural similarities between the input and the average and incorporate uncertainty information for visual exploration. We develop a novel measure of uncertainty, referred to as consistency, via a metric-space view of the input trees. Finally, we demonstrate an application of our framework through merge trees that arise from ensembles of scalar fields. Our work is the first to employ interleaving distances and consistency to study a global, mathematically rigorous, structural average of merge trees in the context of uncertainty visualization.
Graph-based topological descriptors. Graph-based topological descriptors include merge trees @cite_20 (also known as barrier trees @cite_37 or join trees @cite_17 ), contour trees @cite_17 , Reeb graphs @cite_16 , mapper graphs @cite_12 , and joint contour nets @cite_27 . These descriptors are graph-based representations that illustrate how the topology of the level sets or sublevel sets of a scalar field changes as the scalar value varies.
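As a concrete illustration of the merge (join) tree construction underlying these descriptors, the following sketch sweeps the vertices of a graph in increasing order of scalar value and records, with a union-find structure, where sublevel-set components appear and merge. It is a minimal, generic sketch rather than the implementation of any of the cited works.

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def join_tree(values, edges):
    """Join (merge) tree sketch for a scalar field given on a graph.

    values: scalar value per vertex; edges: (u, v) connectivity pairs.
    Sweeping vertices from low to high value, a component of the sublevel
    set is born at each local minimum; each returned arc (a, b) records
    that the component whose highest processed vertex was a merges into
    the sweep at vertex b."""
    n = len(values)
    order = sorted(range(n), key=lambda v: values[v])
    adjacency = [[] for _ in range(n)]
    for u, v in edges:
        adjacency[u].append(v)
        adjacency[v].append(u)
    uf = UnionFind(n)
    head = {}                      # component root -> highest vertex so far
    processed = [False] * n
    arcs = []
    for v in order:
        processed[v] = True
        head[uf.find(v)] = v
        for u in adjacency[v]:
            if processed[u] and uf.find(u) != uf.find(v):
                arcs.append((head[uf.find(u)], v))
                uf.union(u, v)
                head[uf.find(v)] = v
    return arcs

# Toy 1D field with two minima (vertices 0 and 2) merging at the saddle vertex 1.
print(join_tree([0.0, 2.0, 1.0, 3.0], [(0, 1), (1, 2), (2, 3)]))
# -> [(0, 1), (2, 1), (1, 3)]
```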
{ "cite_N": [ "@cite_37", "@cite_27", "@cite_16", "@cite_20", "@cite_12", "@cite_17" ], "mid": [ "2623510711", "1584564047", "2789805612", "2029613090" ], "abstract": [ "In this paper, we propose a novel appearance-based approach for topological mapping based on a hierarchical decomposition of the environment. In our map, images with similar visual properties are grouped together in nodes, which are represented by means of an average global descriptor and an index of binary features based on a bag-of-words online approach. Each image is represented by means of a global descriptor and a set of local features, and this information is used in a two-level loop closure approach, where first global descriptors are employed to obtain the most likely nodes of the map and then binary image features are used to retrieve the most likely images inside these nodes. This hierarchical scheme enables us to reduce the search space when recognizing places, maintaining high accuracy when creating a map. Our approach is validated using several public datasets and compared against several state-of-the-art techniques. The accuracy and the sparsity of the generated maps are also discussed.", "This paper describes an effort to create new visualizations by exploiting hierarchical scalar topology. First, we build a hierarchical topology through synchronously constructing and simplifying Contour Tree (CT) and Morse-Smale (MS) complex of scalar fields. We then introduce three algorithms based on the hierarchical topology: (1) topology-based multi-resolution contouring — an overview provided for a scalar field by extracting iso-values from the simplified CT and tracing approximate contours across the MS complex cells; (2) topology based spaghetti plots for uncertainty — a seeding scheme based on the hierarchical topology for visualizing uncertainty among ensemble scalar data; (3) virtual ribbons — a new scheme for visualizing multivariate data invented by overlapping visual ribbons which encode the scalar variation of a region covered by uniform contours. We compare the new approaches with current alternatives.", "We introduce a novel RGB-D patch descriptor designed for detecting coplanar surfaces in SLAM reconstruction. The core of our method is a deep convolutional neural net that takes in RGB, depth, and normal information of a planar patch in an image and outputs a descriptor that can be used to find coplanar patches from other images.We train the network on 10 million triplets of coplanar and non-coplanar patches, and evaluate on a new coplanarity benchmark created from commodity RGB-D scans. Experiments show that our learned descriptor outperforms alternatives extended for this new task by a significant margin. In addition, we demonstrate the benefits of coplanarity matching in a robust RGBD reconstruction formulation.We find that coplanarity constraints detected with our method are sufficient to get reconstruction results comparable to state-of-the-art frameworks on most scenes, but outperform other methods on standard benchmarks when combined with a simple keypoint method.", "Most of the existing appearance based topological mapping algorithms produce dense topological maps in which each image stands as a node in the topological graph. Sparser maps can be built by representing groups of visually similar images as nodes of a topological graph. 
In this paper, we present a sparse topological mapping framework which uses Image Sequence Partitioning (ISP) techniques to group visually similar images as topological graph nodes. We present four different ISP techniques and evaluate their performance. In order to take advantage of the afore mentioned maps, we make use of Hierarchical Inverted Files (HIF) which enable efficient hierarchical loop closure. Outdoor experimental results demonstrating the sparsity, efficiency and accuracy achieved by the combination of ISP and HIF in performing loop closure are presented." ] }
1908.00113
2965823887
Physical phenomena in science and engineering are frequently modeled using scalar fields. In scalar field topology, graph-based topological descriptors such as merge trees, contour trees, and Reeb graphs are commonly used to characterize topological changes in the (sub)level sets of scalar fields. One of the biggest challenges and opportunities to advance topology-based visualization is to understand and incorporate uncertainty into such topological descriptors to effectively reason about their underlying data. In this paper, we study a structural average of a set of labeled merge trees and use it to encode uncertainty in data. Specifically, we compute a 1-center tree that minimizes its maximum distance to any other tree in the set under a well-defined metric called the interleaving distance. We provide heuristic strategies that compute structural averages of merge trees whose labels do not fully agree. We further provide an interactive visualization system that resembles a numerical calculator that takes as input a set of merge trees and outputs a tree as their structural average. We also highlight structural similarities between the input and the average and incorporate uncertainty information for visual exploration. We develop a novel measure of uncertainty, referred to as consistency, via a metric-space view of the input trees. Finally, we demonstrate an application of our framework through merge trees that arise from ensembles of scalar fields. Our work is the first to employ interleaving distances and consistency to study a global, mathematically rigorous, structural average of merge trees in the context of uncertainty visualization.
Since the contour tree of a function @math can be constructed by carefully combining the merge trees of @math and @math in linear time @cite_17 , merge tree visualization shares the same design space as that of a contour tree. Contour trees are often visualized with node-link diagrams in two or three dimensions @cite_43 @cite_17 @cite_8 @cite_79 . Such diagrams are simple and powerful tools for abstract data representation @cite_24 , contour extraction @cite_60 , and data exploration in various application domains @cite_79 @cite_59 . Many attempts have been made in contour tree visualization to overcome difficulties in visual interpretation, visual clutter, and missing topological features @cite_71 .
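Following the linear-time construction noted above (the contour tree is obtained by combining the merge trees of the function and of its negation), the split tree can be sketched by reusing the join_tree function from the earlier sketch, which is assumed to be in scope; the final step of merging the join and split trees into the contour tree is omitted here.

```python
def split_tree(values, edges):
    """Split tree of f = join tree of -f (reuses the join_tree sketch above)."""
    return join_tree([-v for v in values], edges)

# Same toy field as before: maxima at vertices 1 and 3 merge at vertex 2.
print(split_tree([0.0, 2.0, 1.0, 3.0], [(0, 1), (1, 2), (2, 3)]))
# -> [(1, 2), (3, 2), (2, 0)]
```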
{ "cite_N": [ "@cite_8", "@cite_60", "@cite_24", "@cite_43", "@cite_79", "@cite_59", "@cite_71", "@cite_17" ], "mid": [ "2160747282", "68571785", "2001088180", "2036022226" ], "abstract": [ "This paper introduces two efficient algorithms that compute the Contour Tree of a 3D scalar field F and its augmented version with the Betti numbers of each isosurface. The Contour Tree is a fundamental data structure in scientific visualization that is used to preprocess the domain mesh to allow optimal computation of isosurfaces with minimal overhead storage. The Contour Tree can also be used to build user interfaces reporting the complete topological characterization of a scalar field, as shown in Figure 1.The first part of the paper presents a new scheme that augments the Contour Tree with the Betti numbers of each isocontour in linear time. We show how to extend the scheme introduced in [3] with the Betti number computation without increasing its complexity. Thus, we improve on the time complexity from our previous approach [10] from O(m log m) to O(n log n + m), where m is the number of tetrahedra and n is the number of vertices in the domain of F.The second part of the paper introduces a new divide-and-conquer algorithm that computes the Augmented Contour Tree with improved efficiency. The central part of the scheme computes the output Contour Tree by merging two intermediate Contour Trees and is independent of the interpolant. In this way we confine any knowledge regarding a specific interpolant to an oracle that computes the tree for a single cell. We have implemented this oracle for the trilinear interpolant and plan to replace it with higher order interpolants when needed. The complexity of the scheme is O(n + t log n), where t is the number of critical points of F. For the first time we can compute the Contour Tree in linear time in many practical cases when t = O(n1 - e).Lastly, we report the running times for a parallel implementation of our algorithm, showing good scalability with the number of processors.", "Contour trees can represent the topology of large volume data sets in a relatively compact, discrete data structure. However, the resulting trees often contain many thousands of nodes; thus, many graph drawing techniques fail to produce satisfactory results. Therefore, several visualization methods were proposed recently for the visualization of contour trees. Unfortunately, none of these techniques is able to handle uncertain contour trees although any uncertainty of the volume data inevitably results in partially uncertain contour trees. In this work, we visualize uncertain contour trees by combining the contour trees of two morphologically filtered versions of a volume data set, which represent the range of uncertainty. These two contour trees are combined and visualized within a single image such that a range of potential contour trees is represented by the resulting visualization. Thus, potentially erroneous topological structures are visually distinguished from more certain structures. Moreover, topological structures can be revealed that are otherwise obscured by data errors. We present and discuss results obtained with a prototypical implementation using well-known volume data sets.", "We show that contour trees can be computed in all dimensions by a simple algorithm that merges two trees. 
Our algorithm extends, simplifies, and improves work of Tarasov and Vyalyi and of van", "The contour tree is a topological abstraction of a scalar field that captures evolution in level set connectivity. It is an effective representation for visual exploration and analysis of scientific data. We describe a work-efficient, output sensitive, and scalable parallel algorithm for computing the contour tree of a scalar field defined on a domain that is represented using either an unstructured mesh or a structured grid. A hybrid implementation of the algorithm using the GPU and multi-core CPU can compute the contour tree of an input containing 16 million vertices in less than ten seconds with a speedup factor of upto 13. Experiments based on an implementation in a multi-core CPU environment show near-linear speedup for large data sets." ] }
1908.00113
2965823887
Physical phenomena in science and engineering are frequently modeled using scalar fields. In scalar field topology, graph-based topological descriptors such as merge trees, contour trees, and Reeb graphs are commonly used to characterize topological changes in the (sub)level sets of scalar fields. One of the biggest challenges and opportunities to advance topology-based visualization is to understand and incorporate uncertainty into such topological descriptors to effectively reason about their underlying data. In this paper, we study a structural average of a set of labeled merge trees and use it to encode uncertainty in data. Specifically, we compute a 1-center tree that minimizes its maximum distance to any other tree in the set under a well-defined metric called the interleaving distance. We provide heuristic strategies that compute structural averages of merge trees whose labels do not fully agree. We further provide an interactive visualization system that resembles a numerical calculator that takes as input a set of merge trees and outputs a tree as their structural average. We also highlight structural similarities between the input and the average and incorporate uncertainty information for visual exploration. We develop a novel measure of uncertainty, referred to as consistency, via a metric-space view of the input trees. Finally, we demonstrate an application of our framework through merge trees that arise from ensembles of scalar fields. Our work is the first to employ interleaving distances and consistency to study a global, mathematically rigorous, structural average of merge trees in the context of uncertainty visualization.
Mihai and Westermann @cite_55 measure the likelihood of the occurrence of critical points with respect to both their positions and their types. Specifically, when the data uncertainty is described by a Gaussian distribution, confidence intervals are derived for the gradient and for the determinant and trace of the Hessian matrix in scalar field ensembles, from which confidence regions for critical points are inferred @cite_55 . The authors of @cite_75 characterize critical points and their spatial relations in 2D uncertain scalar fields, where each vertex of a regular grid is assigned a probability density function (PDF) describing its scalar value. They identify so-called mandatory critical points -- spatial regions and function ranges where critical points have to occur in any realization of the input admitted by the PDFs.
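For intuition, a minimal sketch of the confidence-interval idea is shown below; it is not the implementation of @cite_55, and the finite-difference gradient estimator, the confidence level, and the function name are illustrative assumptions. A grid vertex is flagged as a possible critical point whenever a zero gradient lies inside the Gaussian confidence interval of every gradient component estimated from the ensemble.

```python
# Sketch only: flag vertices where a critical point may occur, following the
# confidence-interval idea of @cite_55 (details here are illustrative assumptions).
import numpy as np

def possible_critical_points(ensemble, z=1.96):
    """ensemble: (n_members, H, W) scalar fields; returns a boolean (H, W) mask."""
    # Finite-difference gradient of every member: shape (n_members, 2, H, W).
    grads = np.stack([np.stack(np.gradient(f)) for f in ensemble])
    mean, std = grads.mean(axis=0), grads.std(axis=0)
    lo, hi = mean - z * std, mean + z * std
    # A zero gradient must be plausible in *both* components for a critical point to occur.
    return np.all((lo <= 0.0) & (0.0 <= hi), axis=0)

# Example: mask = possible_critical_points(np.random.randn(20, 64, 64))
```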
{ "cite_N": [ "@cite_55", "@cite_75" ], "mid": [ "1988628849", "1604082823", "2049885731", "2031619085" ], "abstract": [ "Abstract In scalar fields, critical points (points with vanishing derivatives) are important indicators of the topology of iso-contours. When the data values are affected by uncertainty, the locations and types of critical points vary and can no longer be predicted accurately. In this paper, we derive, from a given uncertain scalar ensemble, measures for the likelihood of the occurrence of critical points, with respect to both the positions and types of the critical points. In an ensemble, every instance is a possible occurrence of the phenomenon represented by the scalar values. We show that, by deriving confidence intervals for the gradient and the determinant and trace of the Hessian matrix in scalar ensembles, domain points can be classified according to whether a critical point can occur at a certain location and a specific type of critical point should be expected there. When the data uncertainty can be described stochastically via Gaussian distributed random variables, we show that even probabilistic measures for these events can be deduced.", "This paper introduces a novel, non-local characterization of critical points and their global relation in 2D uncertain scalar fields. The characterization is based on the analysis of the support of the probability density functions (PDF) of the input data. Given two scalar fields representing reliable estimations of the bounds of this support, our strategy identifies mandatory critical points: spatial regions and function ranges where critical points have to occur in any realization of the input. The algorithm provides a global pairing scheme for mandatory critical points which is used to construct mandatory join and split trees. These trees enable a visual exploration of the common topological structure of all possible realizations of the uncertain data. To allow multi-scale visualization, we introduce a simplification scheme for mandatory critical point pairs revealing the most dominant features. Our technique is purely combinatorial and handles parametric distribution models and ensemble data. It does not depend on any computational parameter and does not suffer from numerical inaccuracy or global inconsistency. The algorithm exploits ideas of the established join split tree computation. It is therefore simple to implement, and its complexity is output-sensitive. We illustrate, evaluate, and verify our method on synthetic and real-world data.", "In this paper we revisit the computation and visualization of equivalents to isocontours in uncertain scalar fields. We model uncertainty by discrete random fields and, in contrast to previous methods, also take arbitrary spatial correlations into account. Starting with joint distributions of the random variables associated to the sample locations, we compute level crossing probabilities for cells of the sample grid. This corresponds to computing the probabilities that the well-known symmetry-reduced marching cubes cases occur in random field realizations. For Gaussian random fields, only marginal density functions that correspond to the vertices of the considered cell need to be integrated. We compute the integrals for each cell in the sample grid using a Monte Carlo method. The probabilistic ansatz does not suffer from degenerate cases that usually require case distinctions and solutions of ill-conditioned problems. 
Applications in 2D and 3D, both to synthetic and real data from ensemble simulations in climate research, illustrate the influence of spatial correlations on the spatial distribution of uncertain isocontours.", "In uncertain scalar fields where data values vary with a certain probability, the strength of this variability indicates the confidence in the data. It does not, however, allow inferring on the effect of uncertainty on differential quantities such as the gradient, which depend on the variability of the rate of change of the data. Analyzing the variability of gradients is nonetheless more complicated, since, unlike scalars, gradients vary in both strength and direction. This requires initially the mathematical derivation of their respective value ranges, and then the development of effective analysis techniques for these ranges. This paper takes a first step into this direction: Based on the stochastic modeling of uncertainty via multivariate random variables, we start by deriving uncertainty parameters, such as the mean and the covariance matrix, for gradients in uncertain discrete scalar fields. We do not make any assumption about the distribution of the random variables. Then, for the first time to our best knowledge, we develop a mathematical framework for computing confidence intervals for both the gradient orientation and the strength of the derivative in any prescribed direction, for instance, the mean gradient direction. While this framework generalizes to 3D uncertain scalar fields, we concentrate on the visualization of the resulting intervals in 2D fields. We propose a novel color diffusion scheme to visualize both the absolute variability of the derivative strength and its magnitude relative to the mean values. A special family of circular glyphs is introduced to convey the uncertainty in gradient orientation. For a number of synthetic and real-world data sets, we demonstrate the use of our approach for analyzing the stability of certain features in uncertain 2D scalar fields, with respect to both local derivatives and feature orientation." ] }
1908.00113
2965823887
Physical phenomena in science and engineering are frequently modeled using scalar fields. In scalar field topology, graph-based topological descriptors such as merge trees, contour trees, and Reeb graphs are commonly used to characterize topological changes in the (sub)level sets of scalar fields. One of the biggest challenges and opportunities to advance topology-based visualization is to understand and incorporate uncertainty into such topological descriptors to effectively reason about their underlying data. In this paper, we study a structural average of a set of labeled merge trees and use it to encode uncertainty in data. Specifically, we compute a 1-center tree that minimizes its maximum distance to any other tree in the set under a well-defined metric called the interleaving distance. We provide heuristic strategies that compute structural averages of merge trees whose labels do not fully agree. We further provide an interactive visualization system that resembles a numerical calculator that takes as input a set of merge trees and outputs a tree as their structural average. We also highlight structural similarities between the input and the average and incorporate uncertainty information for visual exploration. We develop a novel measure of uncertainty, referred to as consistency, via a metric-space view of the input trees. Finally, we demonstrate an application of our framework through merge trees that arise from ensembles of scalar fields. Our work is the first to employ interleaving distances and consistency to study a global, mathematically rigorous, structural average of merge trees in the context of uncertainty visualization.
To visualize the effect of uncertainty on contours, envelopes within a gridded domain are extracted to indicate the volume in which the contour will lie (with a certain confidence) @cite_11 @cite_26 . Uncertainty associated with a contour can also be rendered via animation @cite_33 , or as a collection of points, where each point is displaced from its original location along the surface normal by an amount proportional to the uncertainty at that point @cite_41 . Positional and geometric variations of contours are captured by the variability of gradients in uncertain scalar fields @cite_5 . Positional uncertainty of contours can also be encoded via spatial correlation @cite_32 @cite_63 or numerical sensitivity @cite_38 . Building on the notions of functional boxplots and data depth, contour boxplots @cite_36 display statistical quantities analogous to the mean, median, and order statistics for ensembles of contours.
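As a concrete example of such positional-uncertainty measures, the sketch below estimates a per-cell level-crossing probability from an ensemble, i.e., the fraction of members in which a grid cell is crossed by the isocontour. This follows the general idea of @cite_32, but the interface and the plain Monte Carlo estimate over ensemble members are assumptions made for illustration.

```python
# Sketch: Monte Carlo estimate of per-cell level-crossing probabilities from an ensemble.
import numpy as np

def level_crossing_probability(ensemble, isovalue):
    """ensemble: (n_members, H, W) scalar fields; returns (H-1, W-1) probabilities."""
    above = ensemble >= isovalue
    # Corner indicators of every 2x2 cell, for every ensemble member.
    c00, c01 = above[:, :-1, :-1], above[:, :-1, 1:]
    c10, c11 = above[:, 1:, :-1],  above[:, 1:, 1:]
    uniform = (c00 & c01 & c10 & c11) | (~c00 & ~c01 & ~c10 & ~c11)
    return (~uniform).mean(axis=0)   # a cell is crossed iff its corners disagree
```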
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_33", "@cite_41", "@cite_36", "@cite_32", "@cite_63", "@cite_5", "@cite_11" ], "mid": [ "2095233635", "2051308568", "2109657057", "2049885731" ], "abstract": [ "Uncertainty is ubiquitous in science, engineering and medicine. Drawing conclusions from uncertain data is the normal case, not an exception. While the field of statistical graphics is well established, only a few 2D and 3D visualization and feature extraction methods have been devised that consider uncertainty. We present mathematical formulations for uncertain equivalents of isocontours based on standard probability theory and statistics and employ them in interactive visualization methods. As input data, we consider discretized uncertain scalar fields and model these as random fields. To create a continuous representation suitable for visualization we introduce interpolated probability density functions. Furthermore, we introduce numerical condition as a general means in feature-based visualization. The condition number-which potentially diverges in the isocontour problem-describes how errors in the input data are amplified in feature computation. We show how the average numerical condition of isocontours aids the selection of thresholds that correspond to robust isocontours. Additionally, we introduce the isocontour density and the level crossing probability field; these two measures for the spatial distribution of uncertain isocontours are directly based on the probabilistic model of the input data. Finally, we adapt interactive visualization methods to evaluate and display these measures and apply them to 2D and 3D data sets.", "Uncertainty is a common and crucial issue in scientific data. The exploration and analysis of three-dimensional (3D) and large two-dimensional (2D) data with uncertainty information demand an effective visualization augmented with both user interaction and relevant context. The contour tree has been exploited as an efficient data structure to guide exploratory visualization. This paper proposes an interactive visualization tool for exploring data with quantitative uncertainty representations. First, we introduce a balanced planar hierarchical contour tree layout integrated with tree view interaction, allowing users to quickly navigate between levels of detail for contours of large data. Further, uncertainty information is attached to a planar contour tree layout to avoid the visual cluttering and occlusion in viewing uncertainty in 3D data or large 2D data. For the first time, the uncertainty information is explored as a combination of the data-level uncertainty which represents the uncertainty concerning the numerical values of the data, the contour variability which quantifies the positional variation of contours, and the topology variability which reveals the topological variation of contour trees. This information provides a new insight into how the uncertainty exists with and relates to the features of the data. The experimental results show that this new visualization facilitates a quick and accurate selection of prominent contours with high or low uncertainty and variability.", "Ensembles of numerical simulations are used in a variety of applications, such as meteorology or computational solid mechanics, in order to quantify the uncertainty or possible error in a model or simulation. 
Deriving robust statistics and visualizing the variability of an ensemble is a challenging task and is usually accomplished through direct visualization of ensemble members or by providing aggregate representations such as an average or pointwise probabilities. In many cases, the interesting quantities in a simulation are not dense fields, but are sets of features that are often represented as thresholds on physical or derived quantities. In this paper, we introduce a generalization of boxplots, called contour boxplots, for visualization and exploration of ensembles of contours or level sets of functions. Conventional boxplots have been widely used as an exploratory or communicative tool for data analysis, and they typically show the median, mean, confidence intervals, and outliers of a population. The proposed contour boxplots are a generalization of functional boxplots, which build on the notion of data depth. Data depth approximates the extent to which a particular sample is centrally located within its density function. This produces a center-outward ordering that gives rise to the statistical quantities that are essential to boxplots. Here we present a generalization of functional data depth to contours and demonstrate methods for displaying the resulting boxplots for two-dimensional simulation data in weather forecasting and computational fluid dynamics.", "In this paper we revisit the computation and visualization of equivalents to isocontours in uncertain scalar fields. We model uncertainty by discrete random fields and, in contrast to previous methods, also take arbitrary spatial correlations into account. Starting with joint distributions of the random variables associated to the sample locations, we compute level crossing probabilities for cells of the sample grid. This corresponds to computing the probabilities that the well-known symmetry-reduced marching cubes cases occur in random field realizations. For Gaussian random fields, only marginal density functions that correspond to the vertices of the considered cell need to be integrated. We compute the integrals for each cell in the sample grid using a Monte Carlo method. The probabilistic ansatz does not suffer from degenerate cases that usually require case distinctions and solutions of ill-conditioned problems. Applications in 2D and 3D, both to synthetic and real data from ensemble simulations in climate research, illustrate the influence of spatial correlations on the spatial distribution of uncertain isocontours." ] }
1908.00113
2965823887
Physical phenomena in science and engineering are frequently modeled using scalar fields. In scalar field topology, graph-based topological descriptors such as merge trees, contour trees, and Reeb graphs are commonly used to characterize topological changes in the (sub)level sets of scalar fields. One of the biggest challenges and opportunities to advance topology-based visualization is to understand and incorporate uncertainty into such topological descriptors to effectively reason about their underlying data. In this paper, we study a structural average of a set of labeled merge trees and use it to encode uncertainty in data. Specifically, we compute a 1-center tree that minimizes its maximum distance to any other tree in the set under a well-defined metric called the interleaving distance. We provide heuristic strategies that compute structural averages of merge trees whose labels do not fully agree. We further provide an interactive visualization system that resembles a numerical calculator that takes as input a set of merge trees and outputs a tree as their structural average. We also highlight structural similarities between the input and the average and incorporate uncertainty information for visual exploration. We develop a novel measure of uncertainty, referred to as consistency, via a metric-space view of the input trees. Finally, we demonstrate an application of our framework through merge trees that arise from ensembles of scalar fields. Our work is the first to employ interleaving distances and consistency to study a global, mathematically rigorous, structural average of merge trees in the context of uncertainty visualization.
Finally, from an algorithmic perspective, probabilistic marching cubes @cite_2 and positionally uncertain iso-contours @cite_38 study the uncertainties inherent in computing the visual representations.
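The following first-order relation (standard calculus, not a result specific to @cite_38 or @cite_2) captures why isocontour extraction is numerically sensitive: a small perturbation of the field values displaces the contour roughly in inverse proportion to the gradient magnitude, so flat regions carry large positional uncertainty.

```latex
\Delta x \;\approx\; \frac{\lvert \delta f(x) \rvert}{\lVert \nabla f(x) \rVert}
```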
{ "cite_N": [ "@cite_38", "@cite_2" ], "mid": [ "2049885731", "2949727452", "2095233635", "2730507263" ], "abstract": [ "In this paper we revisit the computation and visualization of equivalents to isocontours in uncertain scalar fields. We model uncertainty by discrete random fields and, in contrast to previous methods, also take arbitrary spatial correlations into account. Starting with joint distributions of the random variables associated to the sample locations, we compute level crossing probabilities for cells of the sample grid. This corresponds to computing the probabilities that the well-known symmetry-reduced marching cubes cases occur in random field realizations. For Gaussian random fields, only marginal density functions that correspond to the vertices of the considered cell need to be integrated. We compute the integrals for each cell in the sample grid using a Monte Carlo method. The probabilistic ansatz does not suffer from degenerate cases that usually require case distinctions and solutions of ill-conditioned problems. Applications in 2D and 3D, both to synthetic and real data from ensemble simulations in climate research, illustrate the influence of spatial correlations on the spatial distribution of uncertain isocontours.", "The idea of computer vision as the Bayesian inverse problem to computer graphics has a long history and an appealing elegance, but it has proved difficult to directly implement. Instead, most vision tasks are approached via complex bottom-up processing pipelines. Here we show that it is possible to write short, simple probabilistic graphics programs that define flexible generative models and to automatically invert them to interpret real-world images. Generative probabilistic graphics programs consist of a stochastic scene generator, a renderer based on graphics software, a stochastic likelihood model linking the renderer's output and the data, and latent variables that adjust the fidelity of the renderer and the tolerance of the likelihood model. Representations and algorithms from computer graphics, originally designed to produce high-quality images, are instead used as the deterministic backbone for highly approximate and stochastic generative models. This formulation combines probabilistic programming, computer graphics, and approximate Bayesian computation, and depends only on general-purpose, automatic inference techniques. We describe two applications: reading sequences of degraded and adversarially obscured alphanumeric characters, and inferring 3D road models from vehicle-mounted camera images. Each of the probabilistic graphics programs we present relies on under 20 lines of probabilistic code, and supports accurate, approximately Bayesian inferences about ambiguous real-world images.", "Uncertainty is ubiquitous in science, engineering and medicine. Drawing conclusions from uncertain data is the normal case, not an exception. While the field of statistical graphics is well established, only a few 2D and 3D visualization and feature extraction methods have been devised that consider uncertainty. We present mathematical formulations for uncertain equivalents of isocontours based on standard probability theory and statistics and employ them in interactive visualization methods. As input data, we consider discretized uncertain scalar fields and model these as random fields. To create a continuous representation suitable for visualization we introduce interpolated probability density functions. 
Furthermore, we introduce numerical condition as a general means in feature-based visualization. The condition number-which potentially diverges in the isocontour problem-describes how errors in the input data are amplified in feature computation. We show how the average numerical condition of isocontours aids the selection of thresholds that correspond to robust isocontours. Additionally, we introduce the isocontour density and the level crossing probability field; these two measures for the spatial distribution of uncertain isocontours are directly based on the probabilistic model of the input data. Finally, we adapt interactive visualization methods to evaluate and display these measures and apply them to 2D and 3D data sets.", "We study probabilistic models of natural images and extend the autoregressive family of PixelCNN architectures by incorporating auxiliary variables. Subsequently, we describe two new generative image models that exploit different image transformations as auxiliary variables: a quantized grayscale view of the image or a multi-resolution image pyramid. The proposed models tackle two known shortcomings of existing PixelCNN models: 1) their tendency to focus on low-level image details, while largely ignoring high-level image information, such as object shapes, and 2) their computationally costly procedure for image sampling. We experimentally demonstrate benefits of the proposed models, in particular showing that they produce much more realistically looking image samples than previous state-of-the-art probabilistic models." ] }
1908.00113
2965823887
Physical phenomena in science and engineering are frequently modeled using scalar fields. In scalar field topology, graph-based topological descriptors such as merge trees, contour trees, and Reeb graphs are commonly used to characterize topological changes in the (sub)level sets of scalar fields. One of the biggest challenges and opportunities to advance topology-based visualization is to understand and incorporate uncertainty into such topological descriptors to effectively reason about their underlying data. In this paper, we study a structural average of a set of labeled merge trees and use it to encode uncertainty in data. Specifically, we compute a 1-center tree that minimizes its maximum distance to any other tree in the set under a well-defined metric called the interleaving distance. We provide heuristic strategies that compute structural averages of merge trees whose labels do not fully agree. We further provide an interactive visualization system that resembles a numerical calculator that takes as input a set of merge trees and outputs a tree as their structural average. We also highlight structural similarities between the input and the average and incorporate uncertainty information for visual exploration. We develop a novel measure of uncertainty, referred to as consistency, via a metric-space view of the input trees. Finally, we demonstrate an application of our framework through merge trees that arise from ensembles of scalar fields. Our work is the first to employ interleaving distances and consistency to study a global, mathematically rigorous, structural average of merge trees in the context of uncertainty visualization.
Uncertainty visualization of graph-based descriptors. CandidTree @cite_7 merges two trees into one and visualizes both location and subtree structural uncertainty. An interactive visualization tool @cite_18 uses contour trees as abstract data representations to explore data-level uncertainty as well as contour and topology variability. Kraus has employed grayscale morphology to visualize uncertain substructures in contour trees @cite_13 . Sampling-based Monte Carlo methods have been proposed to study contour trees of uncertain terrains, where the uncertainty lies in a height function described by a probability distribution @cite_54 . The work most relevant to ours is @cite_18 , where a mean contour tree is computed as the contour tree of the mean of an ensemble. However, our work differs significantly from @cite_18 in that, instead of computing a tree from the average of the ensemble members, we compute an average tree directly from a set of input trees (which potentially arise from ensemble members).
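The distinction drawn in the last two sentences can be made concrete with a small schematic; the helpers compute_tree and average_trees are hypothetical placeholders, not an existing API. @cite_18 averages the fields and then builds one tree, whereas this paper builds one tree per field and then averages in tree space.

```python
# Schematic contrast only; compute_tree and average_trees are hypothetical placeholders.
import numpy as np

def mean_contour_tree(fields, compute_tree):
    # @cite_18: average the ensemble members first, then extract a single tree.
    return compute_tree(np.mean(fields, axis=0))

def structural_average_tree(fields, compute_tree, average_trees):
    # This paper: extract one tree per member, then average the trees themselves
    # (a 1-center under the interleaving distance).
    return average_trees([compute_tree(f) for f in fields])
```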
{ "cite_N": [ "@cite_18", "@cite_54", "@cite_13", "@cite_7" ], "mid": [ "2128347847", "2051308568", "1584564047", "2753405889" ], "abstract": [ "Most visualization systems fail to convey uncertainty within data. To provide a way to show uncertainty in similar hierarchies, we interpreted the differences between two tree structures as uncertainty. We developed a new interactive visualization system called CandidTree that merges two trees into one and visualizes two types of structural uncertainty: location and sub-tree structure uncertainty. Since CandidTree can visualize the differences between two tree structures, we conducted a series of user studies with tree-comparison tasks. First a usability study was conducted to identify major usability issues and evaluate how our system works. Another qualitative user study was conducted to see if biologists, who regularly work with hierarchically organized names, are able to use CandidTree, and to assess the 'uncertainty' metric we used. A controlled experiment with software engineers was conducted to compare CandidTree with WinDiff, a traditional files and folders comparison tool. The results showed that users performed better with CandidTree. Furthermore, CandidTree received better satisfaction ratings and all users preferred CandidTree to WinDiff.", "Uncertainty is a common and crucial issue in scientific data. The exploration and analysis of three-dimensional (3D) and large two-dimensional (2D) data with uncertainty information demand an effective visualization augmented with both user interaction and relevant context. The contour tree has been exploited as an efficient data structure to guide exploratory visualization. This paper proposes an interactive visualization tool for exploring data with quantitative uncertainty representations. First, we introduce a balanced planar hierarchical contour tree layout integrated with tree view interaction, allowing users to quickly navigate between levels of detail for contours of large data. Further, uncertainty information is attached to a planar contour tree layout to avoid the visual cluttering and occlusion in viewing uncertainty in 3D data or large 2D data. For the first time, the uncertainty information is explored as a combination of the data-level uncertainty which represents the uncertainty concerning the numerical values of the data, the contour variability which quantifies the positional variation of contours, and the topology variability which reveals the topological variation of contour trees. This information provides a new insight into how the uncertainty exists with and relates to the features of the data. The experimental results show that this new visualization facilitates a quick and accurate selection of prominent contours with high or low uncertainty and variability.", "This paper describes an effort to create new visualizations by exploiting hierarchical scalar topology. First, we build a hierarchical topology through synchronously constructing and simplifying Contour Tree (CT) and Morse-Smale (MS) complex of scalar fields. 
We then introduce three algorithms based on the hierarchical topology: (1) topology-based multi-resolution contouring — an overview provided for a scalar field by extracting iso-values from the simplified CT and tracing approximate contours across the MS complex cells; (2) topology based spaghetti plots for uncertainty — a seeding scheme based on the hierarchical topology for visualizing uncertainty among ensemble scalar data; (3) virtual ribbons — a new scheme for visualizing multivariate data invented by overlapping visual ribbons which encode the scalar variation of a region covered by uniform contours. We compare the new approaches with current alternatives.", "We present a novel type of circular treemap, where we intentionally allocate extra space for additional visual variables. With this extended visual design space, we encode hierarchically structured data along with their uncertainties in a combined diagram. We introduce a hierarchical and force-based circle-packing algorithm to compute Bubble Treemaps, where each node is visualized using nested contour arcs. Bubble Treemaps do not require any color or shading, which offers additional design choices. We explore uncertainty visualization as an application of our treemaps using standard error and Monte Carlo-based statistical models. To this end, we discuss how uncertainty propagates within hierarchies. Furthermore, we show the effectiveness of our visualization using three different examples: the package structure of Flare, the S&P 500 index, and the US consumer expenditure survey." ] }
1908.00113
2965823887
Physical phenomena in science and engineering are frequently modeled using scalar fields. In scalar field topology, graph-based topological descriptors such as merge trees, contour trees, and Reeb graphs are commonly used to characterize topological changes in the (sub)level sets of scalar fields. One of the biggest challenges and opportunities to advance topology-based visualization is to understand and incorporate uncertainty into such topological descriptors to effectively reason about their underlying data. In this paper, we study a structural average of a set of labeled merge trees and use it to encode uncertainty in data. Specifically, we compute a 1-center tree that minimizes its maximum distance to any other tree in the set under a well-defined metric called the interleaving distance. We provide heuristic strategies that compute structural averages of merge trees whose labels do not fully agree. We further provide an interactive visualization system that resembles a numerical calculator that takes as input a set of merge trees and outputs a tree as their structural average. We also highlight structural similarities between the input and the average and incorporate uncertainty information for visual exploration. We develop a novel measure of uncertainty, referred to as consistency, via a metric-space view of the input trees. Finally, we demonstrate an application of our framework through merge trees that arise from ensembles of scalar fields. Our work is the first to employ interleaving distances and consistency to study a global, mathematically rigorous, structural average of merge trees in the context of uncertainty visualization.
Distances between topological structures. Recently, many metrics have been proposed for merge trees, often obtained by restricting a metric defined on the more general Reeb graph @cite_20 @cite_80 @cite_1 @cite_39 @cite_34 @cite_4 @cite_50 @cite_61 @cite_64 . In this paper, we focus on the interleaving distance for labeled merge trees @cite_25 . This distance is an instance of the interleaving distance between persistence modules @cite_45 , a notion that has been carried over to graph-based descriptors such as merge trees @cite_80 and Reeb graphs @cite_1 via category theory @cite_21 @cite_22 .
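For labeled merge trees that share a common label set x_1, ..., x_n, one concrete characterization of this interleaving distance, in line with the l∞-cophenetic view cited above, goes through induced matrices of merge heights; the formulation below is a sketch of that idea rather than a verbatim restatement of any cited definition. Here the lowest common ancestor is taken in the rooted merge tree, and f is the scalar function.

```latex
M_T(i,j) \;=\; f\big(\mathrm{LCA}_T(x_i, x_j)\big), \qquad
d_I(T_1, T_2) \;=\; \max_{i,j} \big\lvert M_{T_1}(i,j) - M_{T_2}(i,j) \big\rvert
```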
{ "cite_N": [ "@cite_61", "@cite_64", "@cite_4", "@cite_22", "@cite_21", "@cite_1", "@cite_39", "@cite_45", "@cite_50", "@cite_80", "@cite_34", "@cite_25", "@cite_20" ], "mid": [ "2789787607", "1756475526", "1968399077", "2142420687" ], "abstract": [ "There are many metrics available to compare phylogenetic trees since this is a fundamental task in computational biology. In this paper, we focus on one such metric, the l∞-cophenetic metric introduced by This metric works by representing a phylogenetic tree with n labeled leaves as a point in ( R ^ n(n+1) 2 ) known as the cophenetic vector, then comparing the two resulting Euclidean points using the l∞ distance. Meanwhile, the interleaving distance is a formal categorical construction generalized from the definition of , originally introduced to compare persistence modules arising from the field of topological data analysis. We show that the l∞-cophenetic metric is an example of an interleaving distance. To do this, we define phylogenetic trees as a category of merge trees with some additional structure, namely, labelings on the leaves plus a requirement that morphisms respect these labels. Then we can use the definition of a flow on this category to give an interleaving distance. Finally, we show that, because of the additional structure given by the categories defined, the map sending a labeled merge tree to the cophenetic vector is, in fact, an isometric embedding, thus proving that the l∞-cophenetic metric is an interleaving distance.", "The Reeb graph is a construction that studies a topological space through the lens of a real valued function. It has been commonly used in applications, however its use on real data means that it is desirable and increasingly necessary to have methods for comparison of Reeb graphs. Recently, several metrics on the set of Reeb graphs have been proposed. In this paper, we focus on two: the functional distortion distance and the interleaving distance. The former is based on the Gromov-Hausdorff distance, while the latter utilizes the equivalence between Reeb graphs and a particular class of cosheaves. However, both are defined by constructing a near-isomorphism between the two graphs of study. In this paper, we show that the two metrics are strongly equivalent on the space of Reeb graphs. Our result also implies the bottleneck stability for persistence diagrams in terms of the Reeb graph interleaving distance.", "We study distorted metrics on binary trees in the context of phylogenetic reconstruction. Given a binary tree T on n leaves with a path metric d, consider the pairwise distances d(u,v) between leaves. It is well known that these determine the tree and the d length of all edges. Here, we consider distortions @math of d such that, for all leaves u and v, it holds that @math if either d(u,v)< M + f 2 or @math , where d satisfies f ≤ d(e) ≤ g for all edges e. Given such distortions, we show how to reconstruct in polynomial time a forest T1, ... ,Tα such that the true tree T may be obtained from that forest by adding α-1 edges and α-1 ≤ 2-Ω(M g) n. Our distorted metric result implies a reconstruction algorithm of phylogenetic forests with a small number of trees from sequences of length logarithmic in the number of species. The reconstruction algorithm is applicable for the general Markov model. 
Both the distorted metric result and its applications to phylogeny are almost tight.", "We introduce a novel measure called e-four-pointscondition (e-4PC), which assigns a value e ∈ [0,1] to every metric space quantifying how close the metric is to a tree metric. Data-sets taken from real Internet measurements indicate remarkable closeness of Internet latencies to tree metrics based on this condition. We study embeddings of e-4PC metric spaces into trees and prove tight upper and lower bounds. Specifically, we show that there are constants c1 and c2 such that, (1) every metric (X,d) which satisfies the e-4PC can be embedded into a tree with distortion (1+e)c1log|X|, and (2) for every e ∈: [0,1] and any number of nodes, there is a metric space (X,d) satisfying the e-4PC that does not embed into a tree with distortion less than (1+e)c2log|X|. In addition, we prove a lower bound on approximate distance labelings of e-4PC metrics, and give tight bounds for tree embeddings with additive error guarantees." ] }
1908.00113
2965823887
Physical phenomena in science and engineering are frequently modeled using scalar fields. In scalar field topology, graph-based topological descriptors such as merge trees, contour trees, and Reeb graphs are commonly used to characterize topological changes in the (sub)level sets of scalar fields. One of the biggest challenges and opportunities to advance topology-based visualization is to understand and incorporate uncertainty into such topological descriptors to effectively reason about their underlying data. In this paper, we study a structural average of a set of labeled merge trees and use it to encode uncertainty in data. Specifically, we compute a 1-center tree that minimizes its maximum distance to any other tree in the set under a well-defined metric called the interleaving distance. We provide heuristic strategies that compute structural averages of merge trees whose labels do not fully agree. We further provide an interactive visualization system that resembles a numerical calculator that takes as input a set of merge trees and outputs a tree as their structural average. We also highlight structural similarities between the input and the average and incorporate uncertainty information for visual exploration. We develop a novel measure of uncertainty, referred to as consistency, via a metric-space view of the input trees. Finally, we demonstrate an application of our framework through merge trees that arise from ensembles of scalar fields. Our work is the first to employ interleaving distances and consistency to study a global, mathematically rigorous, structural average of merge trees in the context of uncertainty visualization.
The application of the interleaving distance to labeled merge trees can also be viewed as a reinterpretation of a metric for phylogenetic trees @cite_68 . Our framework differs from previous work in that it relies on a clean and simple metric-space view of the input trees, one that is equipped with geodesics, together with easy-to-implement algorithms.
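To illustrate the metric-space view: under the matrix characterization sketched above, the 1-center has a simple closed form, namely the entrywise midrange of the induced matrices, with the largest entrywise half-range as the 1-center radius. This is only the metric-space intuition, not the paper's full algorithm; realizing the resulting matrix as a valid merge tree, and handling labels that do not fully agree, is the harder part that the paper's heuristics address.

```python
# Sketch of the 1-center under the l_inf (interleaving) matrix view; inputs are
# assumed to already be induced matrices over a shared label set.
import numpy as np

def one_center(matrices):
    """matrices: (k, n, n) induced matrices; returns (center_matrix, radius)."""
    M = np.asarray(matrices)
    lo, hi = M.min(axis=0), M.max(axis=0)
    center = 0.5 * (lo + hi)            # entrywise midrange minimizes the max l_inf distance
    radius = 0.5 * (hi - lo).max()      # max distance from the center to any input matrix
    return center, radius
```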
{ "cite_N": [ "@cite_68" ], "mid": [ "2789787607", "2060425093", "1968399077", "2154675536" ], "abstract": [ "There are many metrics available to compare phylogenetic trees since this is a fundamental task in computational biology. In this paper, we focus on one such metric, the l∞-cophenetic metric introduced by This metric works by representing a phylogenetic tree with n labeled leaves as a point in ( R ^ n(n+1) 2 ) known as the cophenetic vector, then comparing the two resulting Euclidean points using the l∞ distance. Meanwhile, the interleaving distance is a formal categorical construction generalized from the definition of , originally introduced to compare persistence modules arising from the field of topological data analysis. We show that the l∞-cophenetic metric is an example of an interleaving distance. To do this, we define phylogenetic trees as a category of merge trees with some additional structure, namely, labelings on the leaves plus a requirement that morphisms respect these labels. Then we can use the definition of a flow on this category to give an interleaving distance. Finally, we show that, because of the additional structure given by the categories defined, the map sending a labeled merge tree to the cophenetic vector is, in fact, an isometric embedding, thus proving that the l∞-cophenetic metric is an interleaving distance.", "Abstract A metric on general phylogenetic trees is presented. This extends the work of most previous authors, who constructed metrics for binary trees. The metric presented in this paper makes possible the comparison of the many nonbinary phylogenetic trees appearing in the literature. This provides an objective procedure for comparing the different methods for constructing phylogenetic trees. The metric is based on elementary operations which transform one tree into another. Various results obtained in applying these operations are given. They enable the distance between any pair of trees to be calculated efficiently. This generalizes previous work by Bourque to the case where interior vertices can be labeled, and labels may contain more than one element or may be empty.", "We study distorted metrics on binary trees in the context of phylogenetic reconstruction. Given a binary tree T on n leaves with a path metric d, consider the pairwise distances d(u,v) between leaves. It is well known that these determine the tree and the d length of all edges. Here, we consider distortions @math of d such that, for all leaves u and v, it holds that @math if either d(u,v)< M + f 2 or @math , where d satisfies f ≤ d(e) ≤ g for all edges e. Given such distortions, we show how to reconstruct in polynomial time a forest T1, ... ,Tα such that the true tree T may be obtained from that forest by adding α-1 edges and α-1 ≤ 2-Ω(M g) n. Our distorted metric result implies a reconstruction algorithm of phylogenetic forests with a small number of trees from sequences of length logarithmic in the number of species. The reconstruction algorithm is applicable for the general Markov model. Both the distorted metric result and its applications to phylogeny are almost tight.", "Comparing and computing distances between phylogenetic trees are important biological problems, especially for models where edge lengths play an important role. The geodesic distance measure between two phylogenetic trees with edge lengths is the length of the shortest path between them in the continuous tree space introduced by Billera, Holmes, and Vogtmann. 
This tree space provides a powerful tool for studying and comparing phylogenetic trees, both in exhibiting a natural distance measure and in providing a euclidean-like structure for solving optimization problems on trees. An important open problem is to find a polynomial time algorithm for finding geodesics in tree space. This paper gives such an algorithm, which starts with a simple initial path and moves through a series of successively shorter paths until the geodesic is attained." ] }
1908.00205
2944546684
Abstract The purpose of network representation is to learn a set of latent features by obtaining community information from network structures to provide knowledge for machine learning tasks. Recent research has driven significant progress in network representation by employing random walks as the network sampling strategy. Nevertheless, existing approaches rely on domain-specifically rich community structures and fail in the network that lack topological information in its own domain. In this paper, we propose a novel algorithm for cross-domain network representation, named as CDNR. By generating the random walks from a structural rich domain and transferring the knowledge on the random walks across domains, it enables a network representation for the structural scarce domain as well. To be specific, CDNR is realized by a cross-domain two-layer node-scale balance algorithm and a cross-domain two-layer knowledge transfer algorithm in the framework of cross-domain two-layer random walk learning. Experiments on various real-world datasets demonstrate the effectiveness of CDNR for universal networks in an unsupervised way.
The per-node partition function used in earlier work @cite_26 is expensive to compute, especially for large information networks. To overcome this disadvantage, a series of sampling strategies have been proposed @cite_44 @cite_23 to analyze the statistics of local structures, e.g., communities and sub-networks. These approaches differ from traditional representation learning @cite_9 @cite_24 @cite_22 . The latent features learned in network representation capture neighborhood similarity and community membership in the network topology @cite_8 @cite_20 @cite_34 .
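To make the cost concrete (this is the standard softmax formulation, not necessarily the exact objective of @cite_26): modeling node co-occurrence with a softmax requires a per-node partition function over the entire vertex set,

```latex
p(v \mid u) \;=\; \frac{\exp(\vec{v} \cdot \vec{u})}{\sum_{w \in V} \exp(\vec{w} \cdot \vec{u})}
```

whose denominator costs O(|V|) per update; hierarchical softmax and negative sampling reduce this to roughly O(log |V|) or a constant number of samples per update, which is what makes the random-walk sampling strategies above practical at scale.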
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_8", "@cite_9", "@cite_44", "@cite_24", "@cite_23", "@cite_34", "@cite_20" ], "mid": [ "2768104274", "1522055873", "2207622687", "2622028207" ], "abstract": [ "Learning low-dimensional representations of networks has proved effective in a variety of tasks such as node classification, link prediction and network visualization. Existing methods can effectively encode different structural properties into the representations, such as neighborhood connectivity patterns, global structural role similarities and other high-order proximities. However, except for objectives to capture network structural properties, most of them suffer from lack of additional constraints for enhancing the robustness of representations. In this paper, we aim to exploit the strengths of generative adversarial networks in capturing latent features, and investigate its contribution in learning stable and robust graph representations. Specifically, we propose an Adversarial Network Embedding (ANE) framework, which leverages the adversarial learning principle to regularize the representation learning. It consists of two components, i.e., a structure preserving component and an adversarial learning component. The former component aims to capture network structural properties, while the latter contributes to learning robust representations by matching the posterior distribution of the latent representations to given priors. As shown by the empirical results, our method is competitive with or superior to state-of-the-art approaches on benchmark network embedding tasks.", "To speed up multidimensional data analysis, database systems frequently precompute aggregates on some subsets of dimensions and their corresponding hierarchies. This improves query response time. However, the decision of what and how much to precompute is a difficult one. It is further complicated by the fact that precomputation in the presence of hierarchies can result in an unintuitively large increase in the amount of storage required by the database. Hence, it is interesting and useful to estimate the storage blowup that will result from a proposed set of precomputations without actually computing them. We propose three strategies for this problem: one based on sampling, one based on mathematical approximation, and one based on probabilistic counting. We investigate the accuracy of these algorithms in estimating the blowup for different data distributions and database schemas. The algorithm based upon probabilistic counting is particularly attractive, since it estimates the storage blowup to within provable error bounds while performing only a single scan of the data. *Work supported by an IBM CAS Fellowship, NSF grant IRI9157357, and a grant from IBM under the University Partnership Program. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying 1:s by permission of the Very Large Data Base Endowm.ent. To copy otherwise, 01‘ to republish, requires a fee and or special permission j orn the En.do?ument. Proceedings of the 22nd VLDB Conference Mumbai(Bombay), India, 1996", "Given a large network, local community detection aims at finding the community that contains a set of query nodes and also maximizes (minimizes) a goodness metric. This problem has recently drawn intense research interest. 
Various goodness metrics have been proposed. However, most existing metrics tend to include irrelevant subgraphs in the detected local community. We refer to such irrelevant subgraphs as free riders. We systematically study the existing goodness metrics and provide theoretical explanations on why they may cause the free rider effect. We further develop a query biased node weighting scheme to reduce the free rider effect. In particular, each node is weighted by its proximity to the query node. We define a query biased density metric to integrate the edge and node weights. The query biased densest subgraph, which has the largest query biased density, will shift to the neighborhood of the query nodes after node weighting. We then formulate the query biased densest connected subgraph (QDC) problem, study its complexity, and provide efficient algorithms to solve it. We perform extensive experiments on a variety of real and synthetic networks to evaluate the effectiveness and efficiency of the proposed methods.", "We propose a novel method called deep convolutional decision jungle (CDJ) and its learning algorithm for image classification. The CDJ maintains the structure of standard convolutional neural networks (CNNs), i.e. multiple layers of multiple response maps fully connected. Each response map-or node-in both the convolutional and fully-connected layers selectively respond to class labels s.t. each data sample travels via a specific soft route of those activated nodes. The proposed method CDJ automatically learns features, whereas decision forests and jungles require pre-defined feature sets. Compared to CNNs, the method embeds the benefits of using data-dependent discriminative functions, which better handles multi-modal heterogeneous data; further,the method offers more diverse sparse network responses, which in turn can be used for cost-effective learning classification. The network is learnt by combining conventional softmax and proposed entropy losses in each layer. The entropy loss,as used in decision tree growing, measures the purity of data activation according to the class label distribution. The back-propagation rule for the proposed loss function is derived from stochastic gradient descent (SGD) optimization of CNNs. We show that our proposed method outperforms state-of-the-art methods on three public image classification benchmarks and one face verification dataset. We also demonstrate the use of auxiliary data labels, when available, which helps our method to learn more discriminative routing and representations and leads to improved classification." ] }
1908.00205
2944546684
Abstract The purpose of network representation is to learn a set of latent features by obtaining community information from network structures to provide knowledge for machine learning tasks. Recent research has driven significant progress in network representation by employing random walks as the network sampling strategy. Nevertheless, existing approaches rely on domain-specifically rich community structures and fail in the network that lack topological information in its own domain. In this paper, we propose a novel algorithm for cross-domain network representation, named as CDNR. By generating the random walks from a structural rich domain and transferring the knowledge on the random walks across domains, it enables a network representation for the structural scarce domain as well. To be specific, CDNR is realized by a cross-domain two-layer node-scale balance algorithm and a cross-domain two-layer knowledge transfer algorithm in the framework of cross-domain two-layer random walk learning. Experiments on various real-world datasets demonstrate the effectiveness of CDNR for universal networks in an unsupervised way.
DeepWalk @cite_1 trains a neural language model on random walks generated from the network structure. Given a random walk starting from a root node, DeepWalk slides a window over it and maps the central node to its representation. Hierarchical softmax factorizes the probability distributions corresponding to the random walk, and the representation function is updated to maximize this probability. DeepWalk has produced promising results in dealing with sparsity in scalable networks, but its computational complexity remains relatively high for large-scale information networks. LINE, Node2Vec, and Struc2Vec are other structure-based network representation algorithms that improve on DeepWalk. LINE @cite_19 preserves both the local and the global network structure through first-order and second-order proximity, respectively, and can be applied to large-scale networks that are directed or undirected, weighted or unweighted. Node2Vec @cite_18 explores the diverse neighborhoods of nodes through a biased random walk procedure that employs classic search strategies. Struc2Vec @cite_29 encodes structural similarities and generates structural contexts for nodes using random walks. The above-mentioned works have contributed to network analysis by modeling streams of short random walks.
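A minimal sketch of this random-walk-plus-skip-gram pipeline is shown below; it assumes networkx and gensim (4.x parameter names) are available and is illustrative rather than the reference implementation of any of the cited methods.

```python
# Sketch: uniform random walks fed to a skip-gram model with hierarchical softmax,
# in the spirit of DeepWalk (illustrative, not the reference implementation).
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(G, walk_length=40, walks_per_node=10, seed=0):
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        nodes = list(G.nodes())
        rng.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(v) for v in walk])  # gensim expects string tokens
    return walks

G = nx.karate_club_graph()
walks = random_walks(G)
model = Word2Vec(walks, vector_size=64, window=5, min_count=0,
                 sg=1, hs=1, workers=2, epochs=5)   # sg=1: skip-gram, hs=1: hierarchical softmax
embeddings = {v: model.wv[str(v)] for v in G.nodes()}
```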
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_29", "@cite_1" ], "mid": [ "2154851992", "2579372251", "2767774008", "2761896323" ], "abstract": [ "We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "DeepWalk is a typical representation learning method that learns low-dimensional representations for vertices in social networks. Similar to other network representation learning (NRL) models, it encodes the network structure into vertex representations and is learnt in unsupervised form. However, the learnt representations usually lack the ability of discrimination when applied to machine learning tasks, such as vertex classification. In this paper, we overcome this challenge by proposing a novel semi-supervised model, max-margin Deep-Walk (MMDW). MMDW is a unified NRL framework that jointly optimizes the max-margin classifier and the aimed social representation learning model. Influenced by the max-margin classifier, the learnt representations not only contain the network structure, but also have the characteristic of discrimination. The visualizations of learnt representations indicate that our model is more discriminative than unsupervised ones, and the experimental results on vertex classification demonstrate that our method achieves a significant improvement than other state-of-the-art methods. The source code can be obtained from https: github.com thunlp MMDW.", "In this paper, we propose a novel representation learning framework, namely HIN2Vec, for heterogeneous information networks (HINs). The core of the proposed framework is a neural network model, also called HIN2Vec, designed to capture the rich semantics embedded in HINs by exploiting different types of relationships among nodes. Given a set of relationships specified in forms of meta-paths in an HIN, HIN2Vec carries out multiple prediction training tasks jointly based on a target set of relationships to learn latent vectors of nodes and meta-paths in the HIN. In addition to model design, several issues unique to HIN2Vec, including regularization of meta-path vectors, node type selection in negative sampling, and cycles in random walks, are examined. 
To validate our ideas, we learn latent vectors of nodes using four large-scale real HIN datasets, including Blogcatalog, Yelp, DBLP and U.S. Patents, and use them as features for multi-label node classification and link prediction applications on those networks. Empirical results show that HIN2Vec soundly outperforms the state-of-the-art representation learning models for network data, including DeepWalk, LINE, node2vec, PTE, HINE and ESim, by 6.6 to 23.8 of @math - @math in multi-label node classification and 5 to 70.8 of @math in link prediction.", "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning." ] }
1908.00205
2944546684
Abstract The purpose of network representation is to learn a set of latent features by obtaining community information from network structures to provide knowledge for machine learning tasks. Recent research has driven significant progress in network representation by employing random walks as the network sampling strategy. Nevertheless, existing approaches rely on domain-specifically rich community structures and fail in the network that lack topological information in its own domain. In this paper, we propose a novel algorithm for cross-domain network representation, named as CDNR. By generating the random walks from a structural rich domain and transferring the knowledge on the random walks across domains, it enables a network representation for the structural scarce domain as well. To be specific, CDNR is realized by a cross-domain two-layer node-scale balance algorithm and a cross-domain two-layer knowledge transfer algorithm in the framework of cross-domain two-layer random walk learning. Experiments on various real-world datasets demonstrate the effectiveness of CDNR for universal networks in an unsupervised way.
All the previous works that use random walks to sample networks into streams of nodes share a common assumption of a power-law degree distribution. The power-law distribution exists widely in real-world networks. It is a special degree distribution that follows @math , where @math is a node degree and @math is a positive constant @cite_6 . A network that follows the power-law distribution is also regarded as a scale-free network with the scale-invariance property @cite_31 . The social, biological and citation networks discussed in this paper are observed to be scale-free in nature @cite_11 . On @math - @math axes, the power-law distribution appears as a straight line with slope @math (Figure and Figure ), reflecting that most edges are attached to small-degree nodes, a pattern that does not change with network scale @cite_13 . It has been observed in @cite_1 that if a network follows a power-law degree distribution, the frequency with which a node appears in short random walks also follows the same distribution. Meanwhile, random walks in power-law networks naturally gravitate towards high-degree nodes @cite_42 .
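As a rough illustration of this power-law check, the sketch below builds a degree histogram and fits a straight line on log-log axes with NumPy; the synthetic degree sequence is made up, and a least-squares fit is only a crude proxy for more careful power-law estimators.

```python
import numpy as np
from collections import Counter

def fit_loglog_slope(degrees):
    """Fit log(frequency) ~ slope * log(degree) + c; the slope approximates -gamma."""
    counts = Counter(d for d in degrees if d > 0)
    k = np.array(sorted(counts), dtype=float)
    freq = np.array([counts[int(d)] for d in k], dtype=float)
    slope, _ = np.polyfit(np.log(k), np.log(freq), deg=1)
    return slope

# Synthetic, roughly scale-free degree sequence (illustrative only).
degrees = [1] * 60 + [2] * 24 + [4] * 10 + [8] * 4 + [16] * 2 + [32] * 1
print(fit_loglog_slope(degrees))   # a clearly negative slope on log-log axes
```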
{ "cite_N": [ "@cite_42", "@cite_1", "@cite_6", "@cite_31", "@cite_13", "@cite_11" ], "mid": [ "2124637492", "2080478510", "2137465421", "2025598851" ], "abstract": [ "The emergence of order in natural systems is a constant source of inspiration for both physical and biological sciences. While the spatial order characterizing for example the crystals has been the basis of many advances in contemporary physics, most complex systems in nature do not offer such high degree of order. Many of these systems form complex networks whose nodes are the elements of the system and edges represent the interactions between them. Traditionally complex networks have been described by the random graph theory founded in 1959 by Paul Erdohs and Alfred Renyi. One of the defining features of random graphs is that they are statistically homogeneous, and their degree distribution (characterizing the spread in the number of edges starting from a node) is a Poisson distribution. In contrast, recent empirical studies, including the work of our group, indicate that the topology of real networks is much richer than that of random graphs. In particular, the degree distribution of real networks is a power-law, indicating a heterogeneous topology in which the majority of the nodes have a small degree, but there is a significant fraction of highly connected nodes that play an important role in the connectivity of the network. The scale-free topology of real networks has very important consequences on their functioning. For example, we have discovered that scale-free networks are extremely resilient to the random disruption of their nodes. On the other hand, the selective removal of the nodes with highest degree induces a rapid breakdown of the network to isolated subparts that cannot communicate with each other. The non-trivial scaling of the degree distribution of real networks is also an indication of their assembly and evolution. Indeed, our modeling studies have shown us that there are general principles governing the evolution of networks. Most networks start from a small seed and grow by the addition of new nodes which attach to the nodes already in the system. This process obeys preferential attachment: the new nodes are more likely to connect to nodes with already high degree. We have proposed a simple model based on these two principles wich was able to reproduce the power-law degree distribution of real networks. Perhaps even more importantly, this model paved the way to a new paradigm of network modeling, trying to capture the evolution of networks, not just their static topology.", "We find that scale-free random networks are excellently modeled by simple deterministic graphs. Our graph has a discrete degree distribution (degree is the number of connections of a vertex), which is characterized by a power law with exponent γ=1 + In 3 ln 2. Properties of this compact structure are surprisingly close to those of growing random scale-free networks with y in the most interesting region, between 2 and 3. We succeed to find exactly and numerically with high precision all main characteristics of the graph. In particular, we obtain the exact shortest-path-length distribution. For a large network ( In N>>1) the distribution tends to a Gaussian of width ∼ N centered at ∼ ln N. We show that the eigenvalue spectrum of the adjacency matrix of the graph has a power-law tail with exponent 2 + y.", "A power law degree distribution is established for a graph evolution model based on the graph class of k-trees. 
This k-tree-based graph process can be viewed as an idealized model that captures some characteristics of the preferential attachment and copying mechanisms that existing evolving graph processes fail to model due to technical obstacles. The result also serves as a further cautionary note reinforcing the point of view that a power law degree distribution should not be regarded as the only important characteristic of a complex network, as has been previously argued [D. Achlioptas, A. Clauset, D. Kempe, C. Moore, On the bias of traceroute sampling, or power-law degree distribution in regular graphs, in: Proceedings of the 37th ACM Symposium on Theory of Computing, STOC'05, 2005, pp. 694-703; L. Li, D. Alderson, J. Doyle, W. Willinger, Towards a theory of scale-free graphs: Definition, properties, and implications, Internet Mathematics 2 (4) (2005) 431-523; M. Mitzenmacher, The future of power law research, Internet Mathematics, 2 (4) (2005) 525-534].", "One of the most influential recent results in network analysis is that many natural networks exhibit a power-law or log-normal degree distribution. This has inspired numerous generative models that match this property. However, more recent work has shown that while these generative models do have the right degree distribution, they are not good models for real-life networks due to their differences on other important metrics like conductance. We believe this is, in part, because many of these real-world networks have very different joint degree distributions, that is, the probability that a randomly selected edge will be between nodes of degree k and l. Assortativity is a sufficient statistic of the joint degree distribution, and it has been previously noted that social networks tend to be assortative, while biological and technological networks tend to be disassortative. We suggest understanding the relationship between network structure and the joint degree distribution of graphs is an interesting avenue of further research. An important tool for such studies are algorithms that can generate random instances of graphs with the same joint degree distribution. This is the main topic of this article, and we study the problem from both a theoretical and practical perspective. We provide an algorithm for constructing simple graphs from a given joint degree distribution, and a Monte Carlo Markov chain method for sampling them. We also show that the state space of simple graphs with a fixed degree distribution is connected via endpoint switches. We empirically evaluate the mixing time of this Markov chain by using experiments based on the autocorrelation of each edge. These experiments show that our Markov chain mixes quickly on these real graphs, allowing for utilization of our techniques in practice." ] }
1908.00205
2944546684
Abstract The purpose of network representation is to learn a set of latent features by obtaining community information from network structures to provide knowledge for machine learning tasks. Recent research has driven significant progress in network representation by employing random walks as the network sampling strategy. Nevertheless, existing approaches rely on domain-specifically rich community structures and fail in the network that lack topological information in its own domain. In this paper, we propose a novel algorithm for cross-domain network representation, named as CDNR. By generating the random walks from a structural rich domain and transferring the knowledge on the random walks across domains, it enables a network representation for the structural scarce domain as well. To be specific, CDNR is realized by a cross-domain two-layer node-scale balance algorithm and a cross-domain two-layer knowledge transfer algorithm in the framework of cross-domain two-layer random walk learning. Experiments on various real-world datasets demonstrate the effectiveness of CDNR for universal networks in an unsupervised way.
In this paper, we propose CDNR, which builds on these previous works by employing biased random-walk sampling strategies to learn network structures. However, CDNR differs from the deep transfer learning approaches for cross-domain graph-structured data, namely context-enhanced inductive representation @cite_12 , intrinsic geometric information transfer @cite_32 and deep inductive graph representation @cite_25 . Deep neural network-based network representation usually needs to generalize from a small set of base features, such as statistical network properties like node degree, which loses valuable information from the networks. The link predictions in are therefore based on the power-law distribution as well as on the distance calculated between the two independent networks across domains. The network that has a small distance to the target network is regarded as the source domain. The scale-invariance property should theoretically ensure that power law-based CDNR is robust.
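As a purely hypothetical illustration of selecting a source network by its distance to the target network, the sketch below uses the L1 gap between normalized degree histograms as an assumed stand-in distance; this is not CDNR's actual distance calculation, and the toy graphs are made up.

```python
import numpy as np

def degree_histogram(adj, max_degree=64):
    """Normalized degree histogram of an adjacency-list graph."""
    hist = np.zeros(max_degree + 1)
    for neighbors in adj.values():
        hist[min(len(neighbors), max_degree)] += 1
    return hist / hist.sum()

def network_distance(adj_a, adj_b):
    """Assumed proxy distance: L1 gap between degree histograms."""
    return float(np.abs(degree_histogram(adj_a) - degree_histogram(adj_b)).sum())

target = {0: [1], 1: [0, 2], 2: [1]}
candidates = {"net_a": {0: [1, 2], 1: [0], 2: [0]},
              "net_b": {0: [1], 1: [0]}}
source = min(candidates, key=lambda name: network_distance(candidates[name], target))
print(source)   # the candidate with the smallest distance is taken as the source domain
```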
{ "cite_N": [ "@cite_25", "@cite_32", "@cite_12" ], "mid": [ "2963336383", "2887556118", "2799122863", "2963481481" ], "abstract": [ "In this article, a region-based Deep Convolutional Neural Network framework is presented for document structure learning. The contribution of this work involves efficient training of region based classifiers and effective ensembling for document image classification. A primary level of ‘inter-domain’ transfer learning is used by exporting weights from a pre-trained VGG16 architecture on the ImageNet dataset to train a document classifier on whole document images. Exploiting the nature of region based influence modelling, a secondary level of ‘intra-domain’ transfer learning is used for rapid training of deep learning models for image segments. Finally, a stacked generalization based ensembling is utilized for combining the predictions of the base deep neural network models. The proposed method achieves state-of-the-art accuracy of 92.21 on the popular RVL-CDIP document image dataset, exceeding the benchmarks set by the existing algorithms.", "Abstract Visual tracking algorithms based on structured output support vector machine (SOSVM) have demonstrated excellent performance. However, sampling methods and optimization strategies of SOSVM undesirably increase the computational overloads, which hinder real-time application of these algorithms. Moreover, due to the lack of high-dimensional features and dense training samples, SOSVM-based algorithms are unstable to deal with various challenging scenarios, such as occlusions and scale variations. Recently, visual tracking algorithms based on discriminative correlation filters (DCF), especially the combination of DCF and features from deep convolutional neural networks (CNN), have been successfully applied to visual tracking, and attains surprisingly good performance on recent benchmarks. The success is mainly attributed to two aspects: the circular correlation properties of DCF and the powerful representation capabilities of CNN features. Nevertheless, compared with SOSVM, DCF-based algorithms are restricted to simple ridge regression which has a weaker discriminative ability. In this paper, a novel circular and structural operator tracker (CSOT) is proposed for high performance visual tracking, it not only possesses the powerful discriminative capability of SOSVM but also efficiently inherits the superior computational efficiency of DCF. Based on the proposed circular and structural operators, a set of primal confidence score maps can be obtained by circular correlating feature maps with their corresponding structural correlation filters. Furthermore, an implicit interpolation is applied to convert the multi-resolution feature maps to the continuous domain and make all primal confidence score maps have the same spatial resolution. Then, we exploit an efficient ensemble post-processor based on relative entropy, which can coalesce primal confidence score maps and create an optimal confidence score map for more accurate localization. The target is localized on the peak of the optimal confidence score map. Besides, we introduce a collaborative optimization strategy to update circular and structural operators by iteratively training structural correlation filters, which significantly reduces computational complexity and improves robustness. 
Experimental results demonstrate that our approach achieves state-of-the-art performance in mean AUC scores of 71.5 and 69.4 on the OTB2013 and OTB2015 benchmarks respectively, and obtains a third-best expected average overlap (EAO) score of 29.8 on the VOT2017 benchmark.", "The recent advances in deep neural networks have convincingly demonstrated high capability in learning vision models on large datasets. Nevertheless, collecting expert labeled datasets especially with pixel-level annotations is an extremely expensive process. An appealing alternative is to render synthetic data (e.g., computer games) and generate ground truth automatically. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. In this paper, we facilitate this issue from the perspectives of both visual appearance-level and representation-level domain adaptation. The former adapts source-domain images to appear as if drawn from the \"style\" in the target domain and the latter attempts to learn domain-invariant representations. Specifically, we present Fully Convolutional Adaptation Networks (FCAN), a novel deep architecture for semantic segmentation which combines Appearance Adaptation Networks (AAN) and Representation Adaptation Networks (RAN). AAN learns a transformation from one domain to the other in the pixel space and RAN is optimized in an adversarial learning manner to maximally fool the domain discriminator with the learnt source and target representations. Extensive experiments are conducted on the transfer from GTA5 (game videos) to Cityscapes (urban street scenes) on semantic segmentation and our proposal achieves superior results when comparing to state-of-the-art unsupervised adaptation techniques. More remarkably, we obtain a new record: mIoU of 47.5 on BDDS (drive-cam videos) in an unsupervised setting.", "The recent advances in deep neural networks have convincingly demonstrated high capability in learning vision models on large datasets. Nevertheless, collecting expert labeled datasets especially with pixel-level annotations is an extremely expensive process. An appealing alternative is to render synthetic data (e.g., computer games) and generate ground truth automatically. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. In this paper, we facilitate this issue from the perspectives of both visual appearance-level and representation-level domain adaptation. The former adapts source-domain images to appear as if drawn from the \"style\" in the target domain and the latter attempts to learn domain-invariant representations. Specifically, we present Fully Convolutional Adaptation Networks (FCAN), a novel deep architecture for semantic segmentation which combines Appearance Adaptation Networks (AAN) and Representation Adaptation Networks (RAN). AAN learns a transformation from one domain to the other in the pixel space and RAN is optimized in an adversarial learning manner to maximally fool the domain discriminator with the learnt source and target representations. Extensive experiments are conducted on the transfer from GTA5 (game videos) to Cityscapes (urban street scenes) on semantic segmentation and our proposal achieves superior results when comparing to state-of-the-art unsupervised adaptation techniques. More remarkably, we obtain a new record: mIoU of 47.5 on BDDS (drive-cam videos) in an unsupervised setting." ] }
1908.00041
2964608077
Vector spherical harmonics on @math have wide applications in geophysics, quantum mechanics and astrophysics. In the representation of a tangent field, one needs to evaluate the expansion and the Fourier coefficients of vector spherical harmonics. In this paper, we develop fast algorithms (FaVeST) for vector spherical harmonic transforms for these evaluations. The forward FaVeST which evaluates the Fourier coefficients has computational steps proportional to @math for @math number of evaluation points. The adjoint FaVeST which evaluates a linear combination of vector spherical harmonics with degree up to @math for @math evaluation points is proportional to @math . Numerical examples illustrate the accuracy and efficiency of FaVeST.
In contrast, fast transforms for vector spherical harmonics have received less attention. To the best of our knowledge, there are no existing fast algorithms for the forward and adjoint vector spherical harmonic transforms. @cite_58 simply use FFTs to speed up their algorithms for solving Navier-Stokes PDEs on the unit sphere. However, their method only applies conventional FFTs to evaluate the complex azimuthal exponential terms involved in the formulation of vector spherical harmonics. As fast Legendre transforms are not implemented, their method is not a fast transform. @cite_53 evaluate vector spherical harmonic expansions via spectral element grids, which is not a fast computation either.
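To see why FFTs alone do not yield a full fast transform, recall the separation of variables for (scalar) spherical harmonics on a tensor-product grid; the sketch below describes the standard semi-naive strategy and is given only for intuition, not as a restatement of the cited complexity analyses.

```latex
% Azimuthal exponential times associated Legendre part, and the resulting
% two-stage evaluation of the Fourier coefficients (up to quadrature weights in phi).
\[
Y_{\ell}^{m}(\theta,\varphi) = N_{\ell}^{m}\, P_{\ell}^{m}(\cos\theta)\, e^{\mathrm{i} m \varphi},
\qquad
\widehat{f}_{\ell}^{\,m} \approx \sum_{j} w_j\, N_{\ell}^{m}\, P_{\ell}^{m}(\cos\theta_j)\,
\underbrace{\sum_{k} f(\theta_j,\varphi_k)\, e^{-\mathrm{i} m \varphi_k}}_{\text{FFT over } \varphi}
\]
```

For a grid with O(L) latitudes and O(L) longitudes, the inner azimuthal sums are handled by FFTs in roughly O(L^2 log L) operations, but the remaining Legendre sums over the colatitude still cost O(L) per degree-order pair, i.e. O(L^3) overall, unless a fast Legendre transform is also used; the vector harmonics built from these scalar harmonics inherit the same bottleneck.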
{ "cite_N": [ "@cite_53", "@cite_58" ], "mid": [ "2041410208", "2018743852", "2010122118", "1979887666" ], "abstract": [ "We present a Fourier continuation (FC) algorithm for the solution of the fully nonlinear compressible Navier-Stokes equations in general spatial domains. The new scheme is based on the recently introduced accelerated FC method, which enables use of highly accurate Fourier expansions as the main building block of general-domain PDE solvers. Previous FC-based PDE solvers are restricted to linear scalar equations with constant coefficients. The FC methodology presented in this text thus constitutes a significant generalization of the previous FC schemes, as it yields general-domain FC solvers for nonlinear systems of PDEs. While not restricted to periodic boundary conditions and therefore applicable to general boundary value problems on arbitrary domains, the proposed algorithm inherits many of the highly desirable properties arising from rapidly convergent Fourier expansions, including high-order convergence, essentially spectrally accurate dispersion relations, and much milder CFL constraints than those imposed by polynomial-based spectral methods-since, for example, the spectral radius of the FC first derivative grows linearly with the number of spatial discretization points. We demonstrate the accuracy and optimal parallel efficiency of the algorithm in a variety of scientific and engineering contexts relevant to fluid-dynamics and nonlinear acoustics.", "The spectral transform method is a standard numerical technique for solving partial differential equations on a sphere and is widely used in atmospheric circulation models. Recent research has identified several promising algorithms for implementing this method on massively parallel computers; however, no detailed comparison of the different algorithms has previously been attempted. In this paper, we describe these different parallel algorithms and report on computational experiments that we have conducted to evaluate their efficiency on parallel computers. The experiments used a testbed code that solves the nonlinear shallow water equations on a sphere; considerable care was taken to ensure that the experiments provide a fair comparison of the different algorithms and that the results are relevant to global models. We focus on hypercube- and mesh-connected multicomputers with cut-through routing, such as the Intel iPSC 860, DELTA, and Paragon, and the nCUBE 2, but we also indicate how the results extend to other parallel computer architectures. The results of this study are relevant not only to the spectral transform method but also to multidimensional fast Fourier transforms (FFTs) and other parallel transforms.", "A group of algorithms is presented generalizing the fast Fourier transform to the case of noninteger frequencies and nonequispaced nodes on the interval @math . The schemes of this paper are based on a combination of certain analytical considerations with the classical fast Fourier transform and generalize both the forward and backward FFTs. Each of the algorithms requires @math arithmetic operations, where @math is the precision of computations and N is the number of nodes. The efficiency of the approach is illustrated by several numerical examples.", "Abstract In a wide range of applied problems of 2D and 3D imaging a continuous formulation of the problem places great emphasis on obtaining and manipulating the Fourier transform in Polar coordinates. 
However, the translation of continuum ideas into practical work with data sampled on a Cartesian grid is problematic. In this article we develop a fast high accuracy Polar FFT. For a given two-dimensional signal of size N × N , the proposed algorithm's complexity is O ( N 2 log N ) , just like in a Cartesian 2D-FFT. A special feature of our approach is that it involves only 1D equispaced FFT's and 1D interpolations. A central tool in our method is the pseudo-Polar FFT, an FFT where the evaluation frequencies lie in an oversampled set of nonangularly equispaced points. We describe the concept of pseudo-Polar domain, including fast forward and inverse transforms. For those interested primarily in Polar FFT's, the pseudo-Polar FFT plays the role of a halfway point—a nearly-Polar system from which conversion to Polar coordinates uses processes relying purely on 1D FFT's and interpolation operations. We describe the conversion process, and give an error analysis of it. We compare accuracy results obtained by a Cartesian-based unequally-sampled FFT method to ours, both algorithms using a small-support interpolation and no pre-compensating, and show marked advantage to the use of the pseudo-Polar initial grid." ] }
1908.00220
2964850415
To interpret the meanings of colors in visualizations of categorical information, people must determine how distinct colors correspond to different concepts. This process is easier when assignments between colors and concepts in visualizations match people's expectations, making color palettes semantically interpretable. Efforts have been underway to optimize color palette design for semantic interpretablity, but this requires having good estimates of human color-concept associations. Obtaining these data from humans is costly, which motivates the need for automated methods. We developed and evaluated a new method for automatically estimating color-concept associations in a way that strongly correlates with human ratings. Building on prior studies using Google Images, our approach operates directly on Google Image search results without the need for humans in the loop. Specifically, we evaluated several methods for extracting raw pixel content of the images in order to best estimate color-concept associations obtained from human ratings. The most effective method extracted colors using a combination of cylindrical sectors and color categories in color space. We demonstrate that our approach can accurately estimate average human color-concept associations for different fruits using only a small set of images. The approach also generalizes moderately well to more complicated recycling-related concepts of objects that can appear in any color.
Several factors are relevant when designing color palettes for visualizing categorical information. First and foremost, colors that represent different categories must appear different @cite_6 @cite_29 @cite_5 @cite_31 . Other considerations include selecting colors that have distinct names @cite_57 , are aesthetically preferable @cite_20 , or evoke particular emotions @cite_18 . Most relevant to the present work, it is desirable to select color palettes that help people interpret the meanings of colors in visualizations @cite_40 @cite_56 @cite_19 . This can be achieved by selecting "semantically resonant" colors, which are colors that evoke particular concepts @cite_30 . It can also be achieved when only a subset of the colors is semantically resonant, provided conditions support people's ability to infer the remaining assignments @cite_40 , see .
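As a small, hedged illustration of how semantically resonant color assignments can be computed once color-concept association scores are available: the association matrix below is made up, and solving a one-to-one assignment with the Hungarian algorithm is just one way to realize a globally optimal assignment, not the exact procedure of the cited works.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

concepts = ["banana", "blueberry", "strawberry"]
colors = ["yellow", "blue", "red"]
association = np.array([    # made-up color-concept association strengths in [0, 1]
    [0.9, 0.1, 0.2],        # banana
    [0.1, 0.8, 0.1],        # blueberry
    [0.2, 0.1, 0.9],        # strawberry
])

# Maximize total association by minimizing its negation (Hungarian algorithm).
rows, cols = linear_sum_assignment(-association)
for r, c in zip(rows, cols):
    print(f"{concepts[r]} -> {colors[c]} (association {association[r, c]:.2f})")
```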
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_29", "@cite_6", "@cite_56", "@cite_57", "@cite_19", "@cite_40", "@cite_5", "@cite_31", "@cite_20" ], "mid": [ "2511469011", "1977400967", "2788435232", "2611374796" ], "abstract": [ "We present an evaluation of Colorgorical, a web-based tool for creating discriminable and aesthetically preferable categorical color palettes. Colorgorical uses iterative semi-random sampling to pick colors from CIELAB space based on user-defined discriminability and preference importances. Colors are selected by assigning each a weighted sum score that applies the user-defined importances to Perceptual Distance, Name Difference, Name Uniqueness, and Pair Preference scoring functions, which compare a potential sample to already-picked palette colors. After, a color is added to the palette by randomly sampling from the highest scoring palettes. Users can also specify hue ranges or build off their own starting palettes. This procedure differs from previous approaches that do not allow customization (e.g., pre-made ColorBrewer palettes) or do not consider visualization design constraints (e.g., Adobe Color and ACE). In a Palette Score Evaluation, we verified that each scoring function measured different color information. Experiment 1 demonstrated that slider manipulation generates palettes that are consistent with the expected balance of discriminability and aesthetic preference for 3-, 5-, and 8-color palettes, and also shows that the number of colors may change the effectiveness of pair-based discriminability and preference scores. For instance, if the Pair Preference slider were upweighted, users would judge the palettes as more preferable on average. Experiment 2 compared Colorgorical palettes to benchmark palettes (ColorBrewer, Microsoft, Tableau, Random). Colorgorical palettes are as discriminable and are at least as preferable or more preferable than the alternative palette sets. In sum, Colorgorical allows users to make customized color palettes that are, on average, as effective as current industry standards by balancing the importance of discriminability and aesthetic preference.", "We introduce an algorithm for automatic selection of semantically-resonant colors to represent data (e.g., using blue for data about \"oceans\", or pink for \"love\"). Given a set of categorical values and a target color palette, our algorithm matches each data value with a unique color. Values are mapped to colors by collecting representative images, analyzing image color distributions to determine value-color affinity scores, and choosing an optimal assignment. Our affinity score balances the probability of a color with how well it discriminates among data values. A controlled study shows that expert-chosen semantically-resonant colors improve speed on chart reading tasks compared to a standard palette, and that our algorithm selects colors that lead to similar gains. A second study verifies that our algorithm effectively selects colors across a variety of data categories.", "People interpret abstract meanings from colors, which makes color a useful perceptual feature for visual communication. This process is complicated, however, because there is seldom a one-to-one correspondence between colors and meanings. One color can be associated with many different concepts (one-to-many mapping) and many colors can be associated with the same concept (many-to-one mapping). 
We propose that to interpret color-coding systems, people perform assignment inference to determine how colors map onto concepts. We studied assignment inference in the domain of recycling. Participants saw images of colored but unlabeled bins and were asked to indicate which bins they would use to discard different kinds of recyclables and trash. In Experiment 1, we tested two hypotheses for how people perform assignment inference. The local assignment hypothesis predicts that people simply match objects with their most strongly associated color. The global assignment hypothesis predicts that people also account for the association strengths between all other objects and colors within the scope of the color-coding system. Participants discarded objects in bins that optimized the color-object associations of the entire set, which is consistent with the global assignment hypothesis. This sometimes resulted in discarding objects in bins whose colors were weakly associated with the object, even when there was a stronger associated option available. In Experiment 2, we tested different methods for encoding color-coding systems and found that people were better at assignment inference when color sets simultaneously maximized the association strength between assigned color-object parings while minimizing associations between unassigned pairings. Our study provides an approach for designing intuitive color-coding systems that facilitate communication through visual media such as graphs, maps, signs, and artifacts.", "Communicating the right affect, a feeling, experience or emotion, is critical in creating engaging visual communication. We carried out three studies examining how different color properties (lightness, chroma and hue) and different palette properties (combinations and distribution of colors) contribute to different affective interpretations in information visualization where the numbers of colors is typically smaller than the rich palettes used in design. Our results show how color and palette properties can be manipulated to achieve affective expressiveness even in the small sets of colors used for data encoding in information visualization." ] }
1908.00112
2966640145
We investigate the problem of cost-optimal planning in ASP. Current ASP planners can be trivially extended to a cost-optimal one by adding weak constraints, but only for a given makespan (number of steps). It is desirable to have a planner that guarantees global optimality. In this paper, we present two approaches to addressing this problem. First, we show how to engineer a cost-optimal planner composed of two ASP programs running in parallel. Using lessons learned from this, we then develop an entirely new approach to cost-optimal planning, stepless planning, which is completely free of makespan. Experiments to compare the two approaches with the only known cost-optimal planner in SAT reveal good potentials for stepless planning in ASP. The paper is under consideration for acceptance in TPLP.
Our 2-threaded solver algorithm is inspired by the approach of @cite_19 . Besides solver technologies, the better performance of our planner seems partly due to the grounding size and the search-space pruning under the stable model semantics, as commented in Section , and partly due to the less clustered encoding in ASP than in SAT, plus the smart encoding of mutex constraints. But note that their approach works only if we assume the problem is solvable (which they do) and all actions have positive (nonzero) costs (which they also do).
{ "cite_N": [ "@cite_19" ], "mid": [ "97284975", "2214177029", "2069278600", "2138385963" ], "abstract": [ "This work presents a novel strategy for improving SAT solver performance by using concurrency. Rather than aiming to parallelize search, we use concurrency to aid a conventional CDCL search procedure. More concretely, our work extends a conventional CDCL SAT solver with a second computation thread, which is solely used to strengthen the clauses learned by the solver. This provides a simple and natural way to exploit the availability of multi-core hardware. We have employed our technique to extend two well established solvers, MiniSAT and Glucose. Despite its conceptual simplicity the technique yields a significant improvement of those solvers' performances, in particular for unsatisfiable benchmarks. For such benchmarks an extensive empirical evaluation revealed a remarkably consistent reduction of the wall clock time required to determine unsatisfiability, as well as an ability to solve more benchmarks in the same CPU time. The proposed technique can be applied in combination with existing parallel SAT solving techniques, including both portfolio and search space splitting approaches. The approach presented here can thus be seen as orthogonal to those existing techniques.", "Motivated by the recent developments of nonconvex penalties in sparsity modeling, we propose a nonconvex optimization model for handing the low-rank matrix recovery problem. Different from the famous robust principal component analysis (RPCA), we suggest recovering low-rank and sparse matrices via a nonconvex loss function and a nonconvex penalty. The advantage of the nonconvex approach lies in its stronger robustness. To solve the model, we devise a majorization-minimization augmented Lagrange multiplier (MM-ALM) algorithm which finds the local optimal solutions of the proposed nonconvex model. We also provide an efficient strategy to speedup MM-ALM, which makes the running time comparable with the state-of-the-art algorithm of solving RPCA. Finally, empirical results demonstrate the superiority of our nonconvex approach over RPCA in terms of matrix recovery accuracy.", "We present a new efficient algorithm for the search version of the approximate Closest Vector Problem with Preprocessing (CVPP). Our algorithm achieves an approximation factor of O(n sqrt log n ), improving on the previous best of O(n^ 1.5 ) due to Lag arias, Lenstra, and Schnorr hkzbabai . We also show, somewhat surprisingly, that only O(n) vectors of preprocessing advice are sufficient to solve the problem (with the slightly worse approximation factor of O(n)). We remark that this still leaves a large gap with respect to the decisional version of CVPP, where the best known approximation factor is O(sqrt n log n ) due to Aharonov and Regev AharonovR04 . To achieve these results, we show a reduction to the same problem restricted to target points that are close to the lattice and a more efficient reduction to a harder problem, Bounded Distance Decoding with preprocessing (BDDP). Combining either reduction with the previous best-known algorithm for BDDP by Liu, Lyubashevsky, and Micciancio LiuLM06 gives our main result. 
In the setting of CVP without preprocessing, we also give a reduction from (1+eps)gamma approximate CVP to gamma approximate CVP where the target is at distance at most 1+1 eps times the minimum distance (the length of the shortest non-zero vector) which relies on the lattice sparsification techniques of Dadush and Kun DadushK13 . As our final and most technical contribution, we present a substantially more efficient variant of the LLM algorithm (both in terms of run-time and amount of preprocessing advice), and via an improved analysis, show that it can decode up to a distance proportional to the reciprocal of the smoothing parameter of the dual lattice MR04 . We show that this is never smaller than the LLM decoding radius, and that it can be up to an wide tilde Omega (sqrt n ) factor larger.", "Several message passing-based parallel solvers have been developed for general (non-symmetric) sparse LU factorization with partial pivoting. Existing solvers were mostly deployed and evaluated on parallel computing platforms with high message passing performance (e.g., 1-10 µs in message latency and 100-1000Mbytes s in message throughput) while little attention has been paid on slower platforms. This paper investigates techniques that are specifically beneficial for LU factorizafion on platforms with slow message passing. In the context of the S+ distributed memory solver, we find that significant reduction in the application message passing overhead can be attained at the cost of extra computation and slightly weakened numerical stability. In particular, we propose batch pivoting to make pivot selections in groups through speculative factorization, and thus substantially decrease the inter-processor synchronization granularity. We experimented on three different message passing platforms with different communication speeds. While the proposed techniques provide no performance benefit and even slightly weaken numerical stability on an IBM Regatta multiprocessor with fast message passing, they improve the performance of our test matrices by 15-460 on an Ethernet-connected 16-node PC cluster. Given the different tradeoffs of communication-reduction techniques on different message passing platforms, we also propose a sampling-based runtime application adaptation approach that automatically determines whether these techniques should be employed for a given platform and input matrix." ] }
1908.00112
2966640145
We investigate the problem of cost-optimal planning in ASP. Current ASP planners can be trivially extended to a cost-optimal one by adding weak constraints, but only for a given makespan (number of steps). It is desirable to have a planner that guarantees global optimality. In this paper, we present two approaches to addressing this problem. First, we show how to engineer a cost-optimal planner composed of two ASP programs running in parallel. Using lessons learned from this, we then develop an entirely new approach to cost-optimal planning, stepless planning, which is completely free of makespan. Experiments to compare the two approaches with the only known cost-optimal planner in SAT reveal good potentials for stepless planning in ASP. The paper is under consideration for acceptance in TPLP.
Delete-free planning has been investigated as a stand-alone topic, including a CP solution @cite_20 . To the best of our knowledge, our modeling of delete-free planning as a graph problem is original, and it leads to a five-line ASP program which does everything (cf. Appendix C).
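For intuition about the delete-free setting (without reproducing the paper's graph encoding, which we do not attempt here), the sketch below computes the facts reachable when delete effects are ignored, via the usual fixpoint over add effects; the toy STRIPS-like instance is illustrative only.

```python
def delete_free_reachable(init, actions):
    """Facts reachable when delete effects are ignored (relaxed reachability)."""
    reached = set(init)
    changed = True
    while changed:
        changed = False
        for pre, add in actions:            # each action: (preconditions, add effects)
            if pre <= reached and not add <= reached:
                reached |= add
                changed = True
    return reached

init = {"at_a"}
actions = [
    (frozenset({"at_a"}), frozenset({"at_b"})),                   # move a -> b
    (frozenset({"at_b"}), frozenset({"have_key"})),               # pick up key at b
    (frozenset({"at_a", "have_key"}), frozenset({"door_open"})),  # unlock door at a
]
print(delete_free_reachable(init, actions))
# {'at_a', 'at_b', 'have_key', 'door_open'}: goals in this set are relaxed-reachable
```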
{ "cite_N": [ "@cite_20" ], "mid": [ "2025460523", "121593559", "2293052688", "2133067819" ], "abstract": [ "Abstract We introduce a new approach to planning in STRIPS-like domains based on constructing and analyzing a compact structure we call a planning graph. We describe a new planner, Graphplan, that uses this paradigm. Graphplan always returns a shortest possible partial-order plan, or states that no valid plan exists. We provide empirical evidence in favor of this approach, showing that Graphplan outperforms the total-order planner, Prodigy and the partial-order planner, UCPOP, on a variety of interesting natural and artificial planning problems. We also give empirical evidence that the plans produced by Graphplan are quite sensible. Since searches made by this approach are fundamentally different from the searches of other common planning methods, they provide a new perspective on the planning problem.", "We examine the approach of encoding planning problems as CSPs more closely. First we present a simple CSP encoding for planning problems and then a set of transformations that can be used to eliminate variables and add new constraints to the encoding. We show that our transformations uncover additional structure in the planning problem, structure that subsumes the structure uncovered by GRAPHPLAN planning graphs. We solve the CSP encoded planning problem by using standard CSP algorithms. Empirical evidence is presented to validate the effectiveness of this approach to solving planning problems, and to show that even a prototype implementation is more effective than standard GRAPHPLAN. Our prototype is even competitive with far more optimized planning graph based implementations. We also demonstrate that this approach can be more easily lifted to more complex types of planning than can planning graphs. In particular, we show that the approach can be easily extended to planning with resources.", "In this paper, we develop an online motion planning approach which learns from its planning episodes (experiences) a graph, an Experience Graph. On the theoretical side, we show that planning with Experience graphs is complete and provides bounds on suboptimality with respect to the graph that represents the original planning problem. Experimentally, we show in simulations and on a physical robot that our approach is particularly suitable for higher-dimensional motion planning tasks such as planning for two armed mobile manipulation. Many mundane manipulation tasks such as picking and placing various objects in a kitchen are highly repetitive. It is expected that robots should be capable of learning and improving their performance with every execution of these repetitive tasks. This work focuses on learning from experience for motion planning. Our approach relies on a graphsearch method for planning that builds an Experience Graph online to represent the high-level connectivity of the free space used for the encountered planning tasks. The planner uses the Experience graph to accelerate its planning whenever possible and gracefully degenerates to planning from scratch if no previous planning experiences can be reused. Planning with Experience graphs is complete and it provides bounds on suboptimality with respect to the graph that represents the original planning problem. Related work in (Jiang and Kallmann 2007) takes a database of motion plans and uses an RRT to draw the search towards a similar path to the new query. 
Our approach may use parts of many prior paths (not just one) and provides bounds on solution quality, unlike the above work. We provide results showing Experience Graphs can significantly improve the performance of a high-dimensional full-body planner for the PR2 robot. For more details refer to the full paper ( 2012).", "Describes experiments with a probabilistic roadmap planner (PRM) on a spot-welding station with 2 to 6 robot manipulators combining 12 to 36 degrees of freedom. When performing centralized planning, the planner has proven to be reliable and fast. When performing decoupled planning, it was not significantly faster, but it was much less reliable, failing to find a solution 30 to 75 of the times in 6-robot examples. This is an important result as it invalidates the assumption that the loss of completeness in performing decoupled planning is not very substantial in practice and indicates that centralized planning is a more desirable approach-at least in applications like spot-welding, which requires rather tight robot coordination." ] }
1908.00112
2966640145
We investigate the problem of cost-optimal planning in ASP. Current ASP planners can be trivially extended to a cost-optimal one by adding weak constraints, but only for a given makespan (number of steps). It is desirable to have a planner that guarantees global optimality. In this paper, we present two approaches to addressing this problem. First, we show how to engineer a cost-optimal planner composed of two ASP programs running in parallel. Using lessons learned from this, we then develop an entirely new approach to cost-optimal planning, stepless planning, which is completely free of makespan. Experiments to compare the two approaches with the only known cost-optimal planner in SAT reveal good potentials for stepless planning in ASP. The paper is under consideration for acceptance in TPLP.
The Madagascar planner is a family of efficient implementations of SAT-based techniques for planning. The main idea is that, instead of standard decision heuristics such as VSIDS, planning-specific variable selection heuristics are applied @cite_7 . One would expect the same idea to work for ASP-based planning, in which case our 2-threaded cost-optimal planner could benefit from it directly.
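As a rough sketch of what a planning-specific variable selection rule can look like (a simplified illustration of the general idea only, not Rintanen's actual heuristic), one can branch on an undecided action variable that supports a currently unsupported goal at the latest possible time step:

```python
def pick_branch_variable(goals, horizon, assignment, achievers):
    """Pick an (action, t) variable supporting an open goal at the latest time.

    assignment: dict mapping (action, t) to True/False for decided variables.
    achievers:  dict mapping each fact to the actions that add it.
    """
    for fact in goals:
        supported = any(assignment.get((a, t)) is True
                        for a in achievers.get(fact, [])
                        for t in range(horizon))
        if supported:
            continue                          # this goal already has a supporting action
        for t in range(horizon - 1, -1, -1):  # try the latest time steps first
            for a in achievers.get(fact, []):
                if (a, t) not in assignment:
                    return (a, t)
    return None                               # fall back to a generic rule such as VSIDS

achievers = {"door_open": ["open_door"], "at_b": ["move_a_b"]}
print(pick_branch_variable(["door_open", "at_b"], horizon=3,
                           assignment={("open_door", 2): True}, achievers=achievers))
# ('move_a_b', 2): latest undecided achiever of the still-open goal 'at_b'
```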
{ "cite_N": [ "@cite_7" ], "mid": [ "2048644716", "2152383085", "2774807875", "2122054842" ], "abstract": [ "Reduction to SAT is a very successful approach to solving hard combinatorial problems in Artificial Intelligence and computer science in general. Most commonly, problem instances reduced to SAT are solved with a general-purpose SAT solver. Although there is the obvious possibility of improving the SAT solving process with application-specific heuristics, this has rarely been done successfully. In this work we propose a planning-specific variable selection strategy for SAT solving. The strategy is based on generic principles about properties of plans, and its performance with standard planning benchmarks often substantially improves on generic variable selection heuristics, such as VSIDS, and often lifts it to the same level with other search methods such as explicit state-space search with heuristic search algorithms.", "We consider the problem of computing optimal plans for propositional planning problems with action costs. In the spirit of leveraging advances in general-purpose automated reasoning for that setting, we develop an approach that operates by solving a sequence of partial weighted MaxSAT problems, each of which corresponds to a step-bounded variant of the problem at hand. Our approach is the first SAT-based system in which a proof of cost optimality is obtained using a MaxSAT procedure. It is also the first system of this kind to incorporate an admissible planning heuristic. We perform a detailed empirical evaluation of our work using benchmarks from a number of International Planning Competitions.", "Consider a setting where robots must visit sites represented as nodes in a graph, but each robot may fail when traversing an edge. The goal is to find a set of paths for a team of robots which maximizes the expected number of nodes collectively visited, while guaranteeing that the paths satisfy a notion of “independence” formalized by a matroid (e.g. limits on team size, number of visits to regions), and that the probabilities that each robot survives to its destination are above a given threshold. We call this problem the Matroid Team Surviving Orienteers (MTSO) problem, which has broad applications such as environmental monitoring in risky regions and search and rescue in dangerous conditions. We present the MTSO formally and detail numerous examples of matroids in a path planning context. We then propose an approximate greedy algorithm for selecting a feasible set of paths and prove that the value of the output is within a factor p s p s + λ of the optimum, where p s is the per-robot survival probability threshold and 1 λ ≤ 1 is the approximation factor of an oracle routine for the well known orienteering problem. We demonstrate the efficiency of our approach by applying it to a scenario where a team of robots must gather information while avoiding pirates in the Coral Triangle.", "In the AIPS98 Planning Contest, the HSP planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and SAT planners. Heuristic search planners like HSP transform planning problems into problems of heuristic search by automatically extracting heuristics from Strips encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik’s Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. 
In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.  2001 Elsevier Science B.V. All rights reserved." ] }
1908.00112
2966640145
We investigate the problem of cost-optimal planning in ASP. Current ASP planners can be trivially extended to a cost-optimal one by adding weak constraints, but only for a given makespan (number of steps). It is desirable to have a planner that guarantees global optimality. In this paper, we present two approaches to addressing this problem. First, we show how to engineer a cost-optimal planner composed of two ASP programs running in parallel. Using lessons learned from this, we then develop an entirely new approach to cost-optimal planning, stepless planning, which is completely free of makespan. Experiments to compare the two approaches with the only known cost-optimal planner in SAT reveal good potentials for stepless planning in ASP. The paper is under consideration for acceptance in TPLP.
Cost-optimal planners can also be built on the @math platform. An @math planner based on greedy selection of a subset of heuristics for guiding @math search @cite_13 has made it to the top tier of IPC-2018. @math planning can be encoded in SAT and ASP as well, but the most critical component, the selection algorithm, needs to be implemented by an external program.
{ "cite_N": [ "@cite_13" ], "mid": [ "2152383085", "2122054842", "1585557683", "1633032608" ], "abstract": [ "We consider the problem of computing optimal plans for propositional planning problems with action costs. In the spirit of leveraging advances in general-purpose automated reasoning for that setting, we develop an approach that operates by solving a sequence of partial weighted MaxSAT problems, each of which corresponds to a step-bounded variant of the problem at hand. Our approach is the first SAT-based system in which a proof of cost optimality is obtained using a MaxSAT procedure. It is also the first system of this kind to incorporate an admissible planning heuristic. We perform a detailed empirical evaluation of our work using benchmarks from a number of International Planning Competitions.", "In the AIPS98 Planning Contest, the HSP planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and SAT planners. Heuristic search planners like HSP transform planning problems into problems of heuristic search by automatically extracting heuristics from Strips encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik’s Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.  2001 Elsevier Science B.V. All rights reserved.", "Considerable effort has been spent extending the scope of planning beyond propositional domains to include, for example, time and numbers. Each extension has been designed as a separate specific semantic enrichment of the underlying planning model, with its own syntax and customised integration into a planning algorithm. Inspired by work on SAT Modulo Theories (SMT) in the SAT community, we develop a modelling language and planner that treat arbitrary first order theories as parameters. We call the approach Planning Modulo Theories (PMT). We introduce a modular language to represent PMT problems and demonstrate its benefits over PDDL in expressivity and compactness. We present a generalisation of the hmax heuristic that allows our planner, PMTPlan, to automatically reason about arbitrary theories added as modules. 
Over several new and existing benchmarks, exploiting different theories, we show that PMTPlan can significantly out-perform an existing planner using PDDL models.", "Recently, planning based on answer set programming has been proposed as an approach towards realizing declarative planning systems. In this paper, we present the language κc, which extends the declarative planning language κ by action costs. κc provides the notion of admissible and optimal plans, which are plans whose overall action costs are within a given limit resp. minimum over all plans (i.e., cheapest plans). As we demonstrate, this novel language allows for expressing some nontrivial planning tasks in a declarative way. Furthermore, it can be utilized for representing planning problems under other optimality criteria, such as computing \"shortest\" plans (with the least number of steps), and refinement combinations of cheapest and fastest plans. We study complexity aspects of the language κc and provide a transformation to logic programs, such that planning problems are solved via answer set programming. Furthermore, we report experimental results on selected problems. Our experience is encouraging that answer set planning may be a valuable approach to expressive planning systems in which intricate planning problems can be naturally specified and solved." ] }
1908.00112
2966640145
We investigate the problem of cost-optimal planning in ASP. Current ASP planners can be trivially extended to a cost-optimal one by adding weak constraints, but only for a given makespan (number of steps). It is desirable to have a planner that guarantees global optimality. In this paper, we present two approaches to addressing this problem. First, we show how to engineer a cost-optimal planner composed of two ASP programs running in parallel. Using lessons learned from this, we then develop an entirely new approach to cost-optimal planning, stepless planning, which is completely free of makespan. Experiments to compare the two approaches with the only known cost-optimal planner in SAT reveal good potentials for stepless planning in ASP. The paper is under consideration for acceptance in TPLP.
Stepless planning is a brand-new approach to logic-based planning and brings with it many open questions and potential future directions. One issue is that the lack of any notion of simultaneity makes certain standard optimizations difficult, such as incorporating mutex constraints and supporting conditional effects (an extension to STRIPS planning). The latter extension has been realized in SAT-based planning @cite_1, but incorporating it into stepless planning appears to be non-trivial. Our stepless planner is also a nontrivial application that requires supportedness cycles to extend across different program sections; it would be nice if iterative solving supported this.
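As a point of reference for what conditional effects add on top of plain STRIPS, here is a minimal Python sketch of applying an action whose individual effects fire only when their own conditions hold in the current state. The data structures and the toy logistics example are ours; they are not taken from @cite_1 or from our planner.

```python
# Minimal sketch of a STRIPS action with conditional effects: each (condition,
# adds, deletes) triple fires only if its condition holds in the current state.

from typing import FrozenSet, List, Tuple

State = FrozenSet[str]
CondEffect = Tuple[FrozenSet[str], FrozenSet[str], FrozenSet[str]]  # (cond, adds, deletes)

def apply(state: State, precondition: FrozenSet[str],
          effects: List[CondEffect]) -> State:
    assert precondition <= state, "action not applicable"
    adds, deletes = set(), set()
    for cond, add, dele in effects:
        if cond <= state:          # effect fires only when its condition holds
            adds |= add
            deletes |= dele
    return frozenset((state - deletes) | adds)

# Example: moving a truck also moves a package, but only if the package is loaded.
s = frozenset({"truck-at-A", "loaded-pkg"})
eff: List[CondEffect] = [
    (frozenset(), frozenset({"truck-at-B"}), frozenset({"truck-at-A"})),
    (frozenset({"loaded-pkg"}), frozenset({"pkg-at-B"}), frozenset({"pkg-at-A"})),
]
print(apply(s, frozenset({"truck-at-A"}), eff))
# contains: truck-at-B, loaded-pkg, pkg-at-B
```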
{ "cite_N": [ "@cite_1" ], "mid": [ "2403583150", "1600919542", "2135939055", "160902222" ], "abstract": [ "Path planning is often a high-dimensional computationally-expensive planning problem as it requires reasoning about the kinodynamic constraints of the robot and collisions of the robot with the environment. However, large regions of the environment are typically benign enough that a much faster low-dimensional planning combined with a local path following controller suffice. Planning with Adaptive Dimensionality that was recently developed makes use of this observation and iteratively constructs and searches a state-space consisting of mainly low-dimensional states. It only introduces regions of high-dimensional states into the state-space where they are necessary to ensure completeness and bounds on sub-optimality. However, due to its iterative nature, the approach relies on running a series of weighted A* searches. In this paper, we introduce and apply to Planning with Adaptive Dimensionality a simple but very effective incremental version of weighted A* that reuses its previously generated search tree if available. On the theoretical side, the new algorithm preserves guarantees on completeness and bounds on sub-optimality. On the experimental side, it speeds up 3D (x,y,heading) path planning with a full-body collision checking by up to a factor of 5. Our results also show that it tends to be much faster than applying alternative incremental graph search techniques such as D* to Planning with Adaptive Dimensionality.", "Planning is a notoriously hard combinatorial search problem. In many interesting domains, current planning algorithms fail to scale up gracefully. By combining a general, stochastic search algorithm and appropriate problem encodings based on propositional logic, we are able to solve hard planning problems many times faster than the best current planning systems. Although stochastic methods have been shown to be very effective on a wide range of scheduling problems, this is the first demonstration of its power on truly challenging classical planning instances. This work also provides a new perspective on representational issues in planning.", "Multi-robot path planning is dificult due to the combinatorial explosion of the search space with every new robot added. Complete search of the combined state-space soon becomes intractable. In this paper we present a novel form of abstraction that allows us to plan much more eficiently. The key to this abstraction is the partitioning of the map into subgraphs of known structure with entry and exit restrictions which we can represent compactly. Planning then becomes a search in the much smaller space of subgraph configurations. Once an abstract plan is found, it can be quickly resolved into a correct (but possibly sub-optimal) concrete plan without the need for further search. We prove that this technique is sound and complete and demonstrate its practical effiectiveness on a real map. A contending solution, prioritised planning, is also evaluated and shown to have similar performance albeit at the cost of completeness. The two approaches are not necessarily conflicting; we demonstrate how they can be combined into a single algorithm which out-performs either approach alone.", "Recent new planning paradigms, such as Graphplan and Satplan, have been shown to outperform more traditional domain-independent planners. 
An interesting aspect of these planners is that they do not incorporate domain specific control knowledge, but instead rely on efficient graph-based or propositional representations and advanced search techniques. An alternative approach has been proposed in the TLPlan system. TLPlan is an example of a powerful planner incorporating declarative control specified in temporal logic formulas. We show how these control rules can be parsed into Satplan. Our empirical results show up to an order of magnitude speed up. We also provide a detailed comparison with TLPlan, and show how the search strategies in TLPlan lead to efficient plans in terms of the number of actions but with little or no parallelism. The Satplan and Graphplan formalisms on the other hand do find highly parallel plans, but are less effective in sequential domains. Our results enhance our understanding of the various tradeoffs in planning technology, and extend earlier work on control knowledge in the Satplan framework by (1997) and Kautz and Selman (1998)." ] }
1908.00112
2966640145
We investigate the problem of cost-optimal planning in ASP. Current ASP planners can be trivially extended to a cost-optimal one by adding weak constraints, but only for a given makespan (number of steps). It is desirable to have a planner that guarantees global optimality. In this paper, we present two approaches to addressing this problem. First, we show how to engineer a cost-optimal planner composed of two ASP programs running in parallel. Using lessons learned from this, we then develop an entirely new approach to cost-optimal planning, stepless planning, which is completely free of makespan. Experiments to compare the two approaches with the only known cost-optimal planner in SAT reveal good potentials for stepless planning in ASP. The paper is under consideration for acceptance in TPLP.
Finally, property directed reachability (PDR) is a promising method for deciding reachability in symbolically represented transition systems; originally conceived as a model checking algorithm for hardware circuits, it has recently been related to planning @cite_0. The relationship with our stepless planner deserves further study; in particular, an interesting question is whether and how PDR-based planners can be strengthened to become cost-optimal.
{ "cite_N": [ "@cite_0" ], "mid": [ "1540013686", "2136340918", "1503128752", "2138198492" ], "abstract": [ "Property Directed Reachability (PDR) is a very promising recent method for deciding reachability in symbolically represented transition systems. While originally conceived as a model checking algorithm for hardware circuits, it has already been successfully applied in several other areas. This paper is the first investigation of PDR from the perspective of automated planning. Similarly to the planning as satisfiability paradigm, PDR draws its strength from internally employing an efficient SAT-solver. We show that most standard encoding schemes of planning into SAT can be directly used to turn PDR into a planning algorithm. As a non-obvious alternative, we propose to replace the SAT-solver inside PDR by a planning-specific procedure implementing the same interface. This SAT-solver free variant is not only more efficient, but offers additional insights and opportunities for further improvements. An experimental comparison to the state of the art planners finds it highly competitive, solving most problems on several domains.", "We consider the problem of resource allocation in downlink OFDMA systems for multi service and unknown environment. Due to users' mobility and intercell interference, the base station cannot predict neither the Signal to Noise Ratio (SNR) of each user in future time slots nor their probability distribution functions. In addition, the traffic is bursty in general with unknown arrival. The probability distribution functions of the SNR, channel state and traffic arrival density are then unknown. Achieving a multi service Quality of Service (QoS) while optimizing the performance of the system (e.g. total throughput) is a hard and interesting task since it depends on the unknown future traffic and SNR values. In this paper we solve this problem by modeling the multiuser queuing system as a discrete time linear dynamic system. We develop a robust H∞ controller to regulate the queues of different users. The queues and Packet Drop Rates (PDR) are controlled by proposing a minimum data rate according to the demanded service type of each user. The data rate vector proposed by the controller is then fed as a constraint to an instantaneous resource allocation framework. This instantaneous problem is formulated as a convex optimization problem for instantaneous subcarrier and power allocation decisions. Simulation results show small delays and better fairness among users.", "We consider verification of safety properties for concurrent real-timed systems modelled as timed Petri nets by performing symbolic forward reachability analysis. We introduce a formalism, called region generators, for representing sets of markings of timed Petri nets. Region generators characterize downward closed sets of regions and provide exact abstractions of sets of reachable states with respect to safety properties. We show that the standard operations needed for performing symbolic reachability analysis are computable for region generators. Since forward reachability analysis is necessarily incomplete, we introduce an acceleration technique to make the procedure terminate more often on practical examples. We have implemented a prototype for analyzing timed Petri nets and used it to verify a parameterized version of Fischer's protocol, Lynch and Shavit's mutual exclusion protocol and a producer-consumer protocol. 
We also used the tool to extract finite-state abstractions of these protocols.", "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract energy-efficient transportation patterns (green knowledge), which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors. However, extracting green knowledge from location traces is not a trivial task. Conventional data analysis tools are usually not customized for handling the massive quantity, complex, dynamic, and distributed nature of location traces. To that end, in this paper, we provide a focused study of extracting energy-efficient transportation patterns from location traces. Specifically, we have the initial focus on a sequence of mobile recommendations. As a case study, we develop a mobile recommender system which has the ability in recommending a sequence of pick-up points for taxi drivers or a sequence of potential parking positions. The goal of this mobile recommendation system is to maximize the probability of business success. Along this line, we provide a Potential Travel Distance (PTD) function for evaluating each candidate sequence. This PTD function possesses a monotone property which can be used to effectively prune the search space. Based on this PTD function, we develop two algorithms, LCP and SkyRoute, for finding the recommended routes. Finally, experimental results show that the proposed system can provide effective mobile sequential recommendation and the knowledge extracted from location traces can be used for coaching drivers and leading to the efficient use of energy." ] }
1902.00623
2950876045
Cross-modal similarity search is a problem about designing a search system supporting querying across content modalities, e.g., using an image to search for texts or using a text to search for images. This paper presents a compact coding solution for efficient search, with a focus on the quantization approach which has already shown the superior performance over the hashing solutions in the single-modal similarity search. We propose a cross-modal quantization approach, which is among the early attempts to introduce quantization into cross-modal search. The major contribution lies in jointly learning the quantizers for both modalities through aligning the quantized representations for each pair of image and text belonging to a document. In addition, our approach simultaneously learns the common space for both modalities in which quantization is conducted to enable efficient and effective search using the Euclidean distance computed in the common space with fast distance table lookup. Experimental results compared with several competitive algorithms over three benchmark datasets demonstrate that the proposed approach achieves the state-of-the-art performance.
Cross-view hashing @cite_28 defines the distance between documents in the Hamming space by considering the hash codes of all the modalities, and aligns it with the given similarity. Multi-view spectral hashing @cite_36 adopts a similar formulation but with a different optimization algorithm. These methods usually also involve the intra-document relation implicitly, by treating the multi-modal document as one integrated object. There are other hashing methods that explore the inter-document relation over multi-modal representations, but not for cross-modal similarity search, such as composite hashing @cite_10 and effective multiple feature hashing @cite_20.
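The following Python sketch illustrates the general idea rather than the exact objectives of the cited methods: binary codes are kept for each modality, cross-modal distances are Hamming distances between codes, and a toy loss measures how well those distances align with a given inter-document similarity matrix.

```python
# Generic illustration (not the cited methods' objectives): cross-modal Hamming
# distances between per-modality binary codes, and their alignment with a given
# similarity matrix.

import numpy as np

def hamming(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise Hamming distances between rows of a (n x k) and b (m x k), codes in {0,1}."""
    return (a[:, None, :] != b[None, :, :]).sum(axis=2)

rng = np.random.default_rng(0)
n, k = 6, 8
img_codes = rng.integers(0, 2, size=(n, k))     # codes of the image view
txt_codes = rng.integers(0, 2, size=(n, k))     # codes of the text view
sim = (rng.random((n, n)) > 0.5).astype(float)  # given similarity (1 = similar)

d = hamming(img_codes, txt_codes)               # cross-modal distances
# Toy alignment objective: similar pairs should be close, dissimilar pairs at
# least half the code length apart.
loss = (sim * d + (1 - sim) * np.maximum(0, k / 2 - d)).mean()
print(d.shape, round(float(loss), 3))
```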
{ "cite_N": [ "@cite_28", "@cite_10", "@cite_36", "@cite_20" ], "mid": [ "1996219872", "2388114291", "1979644923", "2086958058" ], "abstract": [ "Cross-media hashing, which conducts cross-media retrieval by embedding data from different modalities into a common low-dimensional Hamming space, has attracted intensive attention in recent years. The existing cross-media hashing approaches only aim at learning hash functions to preserve the intra-modality and inter-modality correlations, but do not directly capture the underlying semantic information of the multi-modal data. We propose a discriminative coupled dictionary hashing (DCDH) method in this paper. In DCDH, the coupled dictionary for each modality is learned with side information (e.g., categories). As a result, the coupled dictionaries not only preserve the intra-similarity and inter-correlation among multi-modal data, but also contain dictionary atoms that are semantically discriminative (i.e., the data from the same category is reconstructed by the similar dictionary atoms). To perform fast cross-media retrieval, we learn hash functions which map data from the dictionary space to a low-dimensional Hamming space. Besides, we conjecture that a balanced representation is crucial in cross-media retrieval. We introduce multi-view features on the relatively weak'' modalities into DCDH and extend it to multi-view DCDH (MV-DCDH) in order to enhance their representation capability. The experiments on two real-world data sets show that our DCDH and MV-DCDH outperform the state-of-the-art methods significantly on cross-media retrieval.", "Due to the storage and retrieval efficiency, hashing has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval. Cross-modal hashing, which enables efficient retrieval of images in response to text queries or vice versa, has received increasing attention recently. Most existing work on cross-modal hashing does not capture the spatial dependency of images and temporal dynamics of text sentences for learning powerful feature representations and cross-modal embeddings that mitigate the heterogeneity of different modalities. This paper presents a new Deep Visual-Semantic Hashing (DVSH) model that generates compact hash codes of images and sentences in an end-to-end deep learning architecture, which capture the intrinsic cross-modal correspondences between visual data and natural language. DVSH is a hybrid deep architecture that constitutes a visual-semantic fusion network for learning joint embedding space of images and text sentences, and two modality-specific hashing networks for learning hash functions to generate compact binary codes. Our architecture effectively unifies joint multimodal embedding and cross-modal hashing, which is based on a novel combination of Convolutional Neural Networks over images, Recurrent Neural Networks over sentences, and a structured max-margin objective that integrates all things together to enable learning of similarity-preserving and high-quality hash codes. Extensive empirical evidence shows that our DVSH approach yields state of the art results in cross-modal retrieval experiments on image-sentences datasets, i.e. standard IAPR TC-12 and large-scale Microsoft COCO.", "Cross media retrieval engines have gained massive popularity with rapid development of the Internet. Users may perform queries in a corpus consisting of audio, video, and textual information. 
To make such systems practically possible for large mount of multimedia data, two critical issues must be carefully considered: (a) reduce the storage as much as possible; (b) model the relationship of the heterogeneous media data. Recently academic community have proved that encoding the data into compact binary codes can drastically reduce the storage and computational cost. However, it is still unclear how to integrate multiple information sources properly into the binary code encoding scheme. In this paper, we study the cross media indexing problem by learning the discriminative hashing functions to map the multi-view datum into a shared hamming space. Not only meaningful within-view similarity is required to be preserved, we also incorporate the between-view correlations into the encoding scheme, where we map the similar points close together and push apart the dissimilar ones. To this end, we propose a novel hashing algorithm called Iterative Multi-View Hashing (IMVH) by taking these information into account simultaneously. To solve this joint optimization problem efficiently, we further develop an iterative scheme to deal with it by using a more flexible quantization model. In particular, an optimal alignment is learned to maintain the between-view similarity in the encoding scheme. And the binary codes are obtained by directly solving a series of binary label assignment problems without continuous relaxation to avoid the unnecessary quantization loss. In this way, the proposed algorithm not only greatly improves the retrieval accuracy but also performs strong robustness. An extensive set of experiments clearly demonstrates the superior performance of the proposed method against the state-of-the-art techniques on both multimodal and unimodal retrieval tasks.", "Similarity search methods based on hashing for effective and efficient cross-modal retrieval on large-scale multimedia databases with massive text and images have attracted considerable attention. The core problem of cross-modal hashing is how to effectively construct correlation between multi-modal representations which are heterogeneous intrinsically in the process of hash function learning. Analogous to Canonical Correlation Analysis (CCA), most existing cross-modal hash methods embed the heterogeneous data into a joint abstraction space by linear projections. However, these methods fail to bridge the semantic gap more effectively, and capture high-level latent semantic information which has been proved that it can lead to better performance for image retrieval. To address these challenges, in this paper, we propose a novel Latent Semantic Sparse Hashing (LSSH) to perform cross-modal similarity search by employing Sparse Coding and Matrix Factorization. In particular, LSSH uses Sparse Coding to capture the salient structures of images, and Matrix Factorization to learn the latent concepts from text. Then the learned latent semantic features are mapped to a joint abstraction space. Moreover, an iterative strategy is applied to derive optimal solutions efficiently, and it helps LSSH to explore the correlation between multi-modal representations efficiently and automatically. Finally, the unified hashcodes are generated through the high level abstraction space by quantization. Extensive experiments on three different datasets highlight the advantage of our method under cross-modal scenarios and show that LSSH significantly outperforms several state-of-the-art methods." ] }
1902.00623
2950876045
Cross-modal similarity search is a problem about designing a search system supporting querying across content modalities, e.g., using an image to search for texts or using a text to search for images. This paper presents a compact coding solution for efficient search, with a focus on the quantization approach which has already shown the superior performance over the hashing solutions in the single-modal similarity search. We propose a cross-modal quantization approach, which is among the early attempts to introduce quantization into cross-modal search. The major contribution lies in jointly learning the quantizers for both modalities through aligning the quantized representations for each pair of image and text belonging to a document. In addition, our approach simultaneously learns the common space for both modalities in which quantization is conducted to enable efficient and effective search using the Euclidean distance computed in the common space with fast distance table lookup. Experimental results compared with several competitive algorithms over three benchmark datasets demonstrate that the proposed approach achieves the state-of-the-art performance.
This relation is often used to learn a unified hash code, together with a hash function for each modality that maps its features into that code. For example, latent semantic sparse hashing @cite_24 applies the sign function to the joint space projected from the latent semantic representation learnt for each modality. Collective matrix factorization hashing @cite_3 finds the common (same) representation for an image-text pair via collective matrix factorization, and obtains the hash codes directly by applying the sign function to the common representation. Other methods exploring the intra-document relation include semantic topic multimodal hashing @cite_30, semantics-preserving multi-view hashing @cite_16, inter-media hashing @cite_10 and its accelerated version @cite_35, and so on. Meanwhile, several attempts @cite_32 @cite_38 have been made based on neural networks, which could also be combined with our approach to learn the common space.
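A heavily simplified numpy sketch of the core idea behind collective-matrix-factorization-style hashing is given below: both modality feature matrices are factorized against a shared latent factor, which is then binarized into the unified hash codes. The original method's regularization, weighting, and out-of-sample terms are omitted, and all parameter names are ours.

```python
# Simplified sketch of collective-matrix-factorization-style hashing: factorize
# X1 (d1 x n) and X2 (d2 x n) against a shared latent factor V (k x n) by
# alternating least squares, then binarize V into hash codes.

import numpy as np

def cmf_hash(X1, X2, k=16, lam=1e-3, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    n = X1.shape[1]
    V = rng.standard_normal((k, n))
    I = lam * np.eye(k)
    for _ in range(iters):
        # U_m = argmin ||X_m - U_m V||^2 + lam ||U_m||^2
        U1 = X1 @ V.T @ np.linalg.inv(V @ V.T + I)
        U2 = X2 @ V.T @ np.linalg.inv(V @ V.T + I)
        # V = argmin ||X1 - U1 V||^2 + ||X2 - U2 V||^2 + lam ||V||^2
        V = np.linalg.solve(U1.T @ U1 + U2.T @ U2 + I, U1.T @ X1 + U2.T @ X2)
    codes = (V > V.mean(axis=1, keepdims=True)).astype(np.uint8)  # k-bit codes
    return codes.T  # one code per document, shape (n, k)

# Tiny synthetic check: two modalities generated from the same latent factors.
rng = np.random.default_rng(1)
Z = rng.standard_normal((8, 100))
X1 = rng.standard_normal((50, 8)) @ Z
X2 = rng.standard_normal((30, 8)) @ Z
print(cmf_hash(X1, X2).shape)  # (100, 16)
```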
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_38", "@cite_32", "@cite_3", "@cite_24", "@cite_16", "@cite_10" ], "mid": [ "2086958058", "2388114291", "2512032049", "2213253086" ], "abstract": [ "Similarity search methods based on hashing for effective and efficient cross-modal retrieval on large-scale multimedia databases with massive text and images have attracted considerable attention. The core problem of cross-modal hashing is how to effectively construct correlation between multi-modal representations which are heterogeneous intrinsically in the process of hash function learning. Analogous to Canonical Correlation Analysis (CCA), most existing cross-modal hash methods embed the heterogeneous data into a joint abstraction space by linear projections. However, these methods fail to bridge the semantic gap more effectively, and capture high-level latent semantic information which has been proved that it can lead to better performance for image retrieval. To address these challenges, in this paper, we propose a novel Latent Semantic Sparse Hashing (LSSH) to perform cross-modal similarity search by employing Sparse Coding and Matrix Factorization. In particular, LSSH uses Sparse Coding to capture the salient structures of images, and Matrix Factorization to learn the latent concepts from text. Then the learned latent semantic features are mapped to a joint abstraction space. Moreover, an iterative strategy is applied to derive optimal solutions efficiently, and it helps LSSH to explore the correlation between multi-modal representations efficiently and automatically. Finally, the unified hashcodes are generated through the high level abstraction space by quantization. Extensive experiments on three different datasets highlight the advantage of our method under cross-modal scenarios and show that LSSH significantly outperforms several state-of-the-art methods.", "Due to the storage and retrieval efficiency, hashing has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval. Cross-modal hashing, which enables efficient retrieval of images in response to text queries or vice versa, has received increasing attention recently. Most existing work on cross-modal hashing does not capture the spatial dependency of images and temporal dynamics of text sentences for learning powerful feature representations and cross-modal embeddings that mitigate the heterogeneity of different modalities. This paper presents a new Deep Visual-Semantic Hashing (DVSH) model that generates compact hash codes of images and sentences in an end-to-end deep learning architecture, which capture the intrinsic cross-modal correspondences between visual data and natural language. DVSH is a hybrid deep architecture that constitutes a visual-semantic fusion network for learning joint embedding space of images and text sentences, and two modality-specific hashing networks for learning hash functions to generate compact binary codes. Our architecture effectively unifies joint multimodal embedding and cross-modal hashing, which is based on a novel combination of Convolutional Neural Networks over images, Recurrent Neural Networks over sentences, and a structured max-margin objective that integrates all things together to enable learning of similarity-preserving and high-quality hash codes. Extensive empirical evidence shows that our DVSH approach yields state of the art results in cross-modal retrieval experiments on image-sentences datasets, i.e. 
standard IAPR TC-12 and large-scale Microsoft COCO.", "By transforming data into binary representation, i.e., Hashing, we can perform high-speed search with low storage cost, and thus, Hashing has collected increasing research interest in the recent years. Recently, how to generate Hashcode for multimodal data (e.g., images with textual tags, documents with photos, and so on) for large-scale cross-modality search (e.g., searching semantically related images in database for a document query) is an important research issue because of the fast growth of multimodal data in the Web. To address this issue, a novel framework for multimodal Hashing is proposed, termed as Collective Matrix Factorization Hashing (CMFH). The key idea of CMFH is to learn unified Hashcodes for different modalities of one multimodal instance in the shared latent semantic space in which different modalities can be effectively connected. Therefore, accurate cross-modality search is supported. Based on the general framework, we extend it in the unsupervised scenario where it tries to preserve the Euclidean structure, and in the supervised scenario where it fully exploits the label information of data. The corresponding theoretical analysis and the optimization algorithms are given. We conducted comprehensive experiments on three benchmark data sets for cross-modality search. The experimental results demonstrate that CMFH can significantly outperform several state-of-the-art cross-modality Hashing methods, which validates the effectiveness of the proposed CMFH.", "Multimodal hashing is essential to cross-media similarity search for its low storage cost and fast query speed. Most existing multimodal hashing methods embedded heterogeneous data into a common low-dimensional Hamming space, and then rounded the continuous embeddings to obtain the binary codes. Yet they usually neglect the inherent discrete nature of hashing for relaxing the discrete constraints, which will cause degraded retrieval performance especially for long codes. For this purpose, a novel Semantic Topic Multimodal Hashing (STMH) is developed by considering latent semantic information in coding procedure. It first discovers clustering patterns of texts and robust factorizes the matrix of images to obtain multiple semantic topics of texts and concepts of images. Then the learned multimodal semantic features are transformed into a common subspace by their correlations. Finally, each bit of unified hash code can be generated directly by figuring out whether a topic or concept is contained in a text or an image. Therefore, the obtained model by STMH is more suitable for hashing scheme as it directly learns discrete hash codes in the coding process. Experimental results demonstrate that the proposed method outperforms several state-of-the-art methods." ] }
1902.00623
2950876045
Cross-modal similarity search is a problem about designing a search system supporting querying across content modalities, e.g., using an image to search for texts or using a text to search for images. This paper presents a compact coding solution for efficient search, with a focus on the quantization approach which has already shown the superior performance over the hashing solutions in the single-modal similarity search. We propose a cross-modal quantization approach, which is among the early attempts to introduce quantization into cross-modal search. The major contribution lies in jointly learning the quantizers for both modalities through aligning the quantized representations for each pair of image and text belonging to a document. In addition, our approach simultaneously learns the common space for both modalities in which quantization is conducted to enable efficient and effective search using the Euclidean distance computed in the common space with fast distance table lookup. Experimental results compared with several competitive algorithms over three benchmark datasets demonstrate that the proposed approach achieves the state-of-the-art performance.
Recently, a few techniques based on quantization have been developed for cross-modal search. Quantized correlation hashing @cite_33 combines hash function learning with quantization by simultaneously minimizing the inter-modality similarity disagreement and the binary quantization loss. Compositional correlation quantization @cite_37 projects the multi-modal data into a common space, and then obtains a unified quantization representation for each document. Our approach, also exploring the intra-document relation, belongs to this cross-modal quantization category and achieves state-of-the-art performance.
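The "fast distance table lookup" used by quantization-based search can be illustrated with a short product-quantization-style sketch: the query stays in the continuous common space, database items are stored only as codeword indices, and distances are assembled from small precomputed tables. This is a generic illustration, not the proposed method or the cited ones; all sizes and names are made up.

```python
# Illustration of distance-table lookup with quantized codes (product-
# quantization style); a didactic sketch, not the proposed method.

import numpy as np

rng = np.random.default_rng(0)
d, M, K, n = 32, 4, 16, 1000        # dim, subspaces, centroids per subspace, database size
sub = d // M

codebooks = rng.standard_normal((M, K, sub))   # one codebook per subspace
codes = rng.integers(0, K, size=(n, M))        # each database item = M codeword indices

def search(query: np.ndarray, topk: int = 5) -> np.ndarray:
    q = query.reshape(M, sub)
    # Per subspace, squared distances from the query chunk to all K centroids.
    tables = ((codebooks - q[:, None, :]) ** 2).sum(axis=2)   # shape (M, K)
    # Distance to each item = sum of table entries selected by its codes.
    dists = tables[np.arange(M)[None, :], codes].sum(axis=1)  # shape (n,)
    return np.argsort(dists)[:topk]

print(search(rng.standard_normal(d)))
```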
{ "cite_N": [ "@cite_37", "@cite_33" ], "mid": [ "1780444626", "2267050401", "1996219872", "2086958058" ], "abstract": [ "Efficient similarity retrieval from large-scale multimodal database is pervasive in current search systems with the big data tidal wave. To support queries across content modalities, the system should enable cross-modal correlation and computation-efficient indexing. While hashing methods have shown great potential in approaching this goal, current attempts generally failed to learn isomorphic hash codes in a seamless scheme, that is, they embed multiple modalities into a continuous isomorphic space and then threshold embeddings into binary codes, which incurred substantial loss of search quality. In this paper, we establish seamless multimodal hashing by proposing a novel Compositional Correlation Quantization (CCQ) model. Specifically, CCQ jointly finds correlation-maximal mappings that transform different modalities into an isomorphic latent space, and learns compositional quantizers that quantize the isomorphic latent features into compact binary codes. An optimization framework is developed to preserve both intra-modal similarity and inter-modal correlation while minimizing both reconstruction and quantization errors, which can be trained from both paired and unpaired data in linear time. A comprehensive set of experiments clearly show the superior effectiveness and efficiency of CCQ against the state-of-the-art techniques on both unimodal and cross-modal search tasks.", "Cross-modal hashing is designed to facilitate fast search across domains. In this work, we present a cross-modal hashing approach, called quantized correlation hashing (QCH), which takes into consideration the quantization loss over domains and the relation between domains. Unlike previous approaches that separate the optimization of the quantizer independent of maximization of domain correlation, our approach simultaneously optimizes both processes. The underlying relation between the domains that describes the same objects is established via maximizing the correlation between the hash codes across the domains. The resulting multi-modal objective function is transformed to a unimodal formalization, which is optimized through an alternative procedure. Experimental results on three real world datasets demonstrate that our approach outperforms the state-of-the-art multi-modal hashing methods.", "Cross-media hashing, which conducts cross-media retrieval by embedding data from different modalities into a common low-dimensional Hamming space, has attracted intensive attention in recent years. The existing cross-media hashing approaches only aim at learning hash functions to preserve the intra-modality and inter-modality correlations, but do not directly capture the underlying semantic information of the multi-modal data. We propose a discriminative coupled dictionary hashing (DCDH) method in this paper. In DCDH, the coupled dictionary for each modality is learned with side information (e.g., categories). As a result, the coupled dictionaries not only preserve the intra-similarity and inter-correlation among multi-modal data, but also contain dictionary atoms that are semantically discriminative (i.e., the data from the same category is reconstructed by the similar dictionary atoms). To perform fast cross-media retrieval, we learn hash functions which map data from the dictionary space to a low-dimensional Hamming space. 
Besides, we conjecture that a balanced representation is crucial in cross-media retrieval. We introduce multi-view features on the relatively weak'' modalities into DCDH and extend it to multi-view DCDH (MV-DCDH) in order to enhance their representation capability. The experiments on two real-world data sets show that our DCDH and MV-DCDH outperform the state-of-the-art methods significantly on cross-media retrieval.", "Similarity search methods based on hashing for effective and efficient cross-modal retrieval on large-scale multimedia databases with massive text and images have attracted considerable attention. The core problem of cross-modal hashing is how to effectively construct correlation between multi-modal representations which are heterogeneous intrinsically in the process of hash function learning. Analogous to Canonical Correlation Analysis (CCA), most existing cross-modal hash methods embed the heterogeneous data into a joint abstraction space by linear projections. However, these methods fail to bridge the semantic gap more effectively, and capture high-level latent semantic information which has been proved that it can lead to better performance for image retrieval. To address these challenges, in this paper, we propose a novel Latent Semantic Sparse Hashing (LSSH) to perform cross-modal similarity search by employing Sparse Coding and Matrix Factorization. In particular, LSSH uses Sparse Coding to capture the salient structures of images, and Matrix Factorization to learn the latent concepts from text. Then the learned latent semantic features are mapped to a joint abstraction space. Moreover, an iterative strategy is applied to derive optimal solutions efficiently, and it helps LSSH to explore the correlation between multi-modal representations efficiently and automatically. Finally, the unified hashcodes are generated through the high level abstraction space by quantization. Extensive experiments on three different datasets highlight the advantage of our method under cross-modal scenarios and show that LSSH significantly outperforms several state-of-the-art methods." ] }
1902.00438
2918540534
The use of background knowledge remains largely unexploited in many text classification tasks. In this work, we explore word taxonomies as means for constructing new semantic features, which may improve the performance and robustness of the learned classifiers. We propose tax2vec, a parallel algorithm for constructing taxonomy based features, and demonstrate its use on six short-text classification problems, including gender, age and personality type prediction, drug effectiveness and side effect prediction, and news topic prediction. The experimental results indicate that the interpretable features constructed using tax2vec can notably improve the performance of classifiers; the constructed features, in combination with fast, linear classifiers tested against strong baselines, such as hierarchical attention neural networks, achieved comparable or better classification results on short documents. Further, tax2vec can also serve for extraction of corpus-specific keywords. Finally, we investigated the semantic space of potential features where we observe a similarity with the well known Zipf's law.
Document classification is highly dependent on the choice of document representation. In simple bag-of-words representations, the frequency (or a similar weight such as term frequency-inverse document frequency---tf-idf) of each word or @math -gram is considered as a separate feature. More advanced representations group words with similar meaning together. Such approaches include Latent Semantic Analysis @cite_41, Latent Dirichlet Allocation @cite_12, and, more recently, word embeddings @cite_59. It has previously been demonstrated that context-aware algorithms significantly outperform naive learning approaches @cite_57. We refer to such semantic context as the background knowledge.
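A small example of the representations mentioned above, using standard scikit-learn utilities (assumed to be available): tf-idf over word uni- and bi-grams, followed by Latent Semantic Analysis via truncated SVD. This illustrates the baseline representations only; it is not the tax2vec procedure.

```python
# Baseline text representations: tf-idf bag-of-n-grams, then LSA (truncated SVD).
# Uses scikit-learn, which is assumed to be installed.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "the drug reduced my headaches",
    "severe side effects after taking the drug",
    "the new phone has a great camera",
]

# Bag-of-words with tf-idf weights: every word or bi-gram is a separate feature.
tfidf = TfidfVectorizer(ngram_range=(1, 2))
X = tfidf.fit_transform(docs)            # sparse matrix, one row per document
print(X.shape)

# Latent Semantic Analysis: group co-occurring terms into a few latent dimensions.
lsa = TruncatedSVD(n_components=2, random_state=0)
Z = lsa.fit_transform(X)                 # dense low-dimensional representation
print(Z.shape)                           # (3, 2)
```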
{ "cite_N": [ "@cite_41", "@cite_59", "@cite_57", "@cite_12" ], "mid": [ "2131273899", "1965667542", "2963727650", "2789983685" ], "abstract": [ "Traditional term weighting schemes in text categorization, such as TF-IDF, only exploit the statistical information of terms in documents. Instead, in this paper, we propose a novel term weighting scheme by exploiting the semantics of categories and indexing terms. Specifically, the semantics of categories are represented by senses of terms appearing in the category labels as well as the interpretation of them by WordNet. Also, the weight of a term is correlated to its semantic similarity with a category. Experimental results on three commonly used data sets show that the proposed approach outperforms TF-IDF in the cases that the amount of training data is small or the content of documents is focused on well-defined categories. In addition, the proposed approach compares favorably with two previous studies.", "A novel probabilistic retrieval model is presented. It forms a basis to interpret the TF-IDF term weights as making relevance decisions. It simulates the local relevance decision-making for every location of a document, and combines all of these “local” relevance decisions as the “document-wide” relevance decision for the document. The significance of interpreting TF-IDF in this way is the potential to: (1) establish a unifying perspective about information retrieval as relevance decision-making; and (2) develop advanced TF-IDF-related term weights for future elaborate retrieval models. Our novel retrieval model is simplified to a basic ranking formula that directly corresponds to the TF-IDF term weights. In general, we show that the term-frequency factor of the ranking formula can be rendered into different term-frequency factors of existing retrieval systems. In the basic ranking formula, the remaining quantity - log p(rvt ∈ d) is interpreted as the probability of randomly picking a nonrelevant usage (denoted by r) of term t. Mathematically, we show that this quantity can be approximated by the inverse document-frequency (IDF). Empirically, we show that this quantity is related to IDF, using four reference TREC ad hoc retrieval data collections.", "Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7 mIoU on PASCAL-Context, 85.9 mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpasses the winning entry of COCO-Place Challenge 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for the image classification on CIFAR-10 dataset. Our 14 layer network has achieved an error rate of 3.45 , which is comparable with state-of-the-art approaches with over 10A— more layers. 
The source code for the complete system are publicly available1.", "Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7 mIoU on PASCAL-Context, 85.9 mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpass the winning entry of COCO-Place Challenge in 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for the image classification on CIFAR-10 dataset. Our 14 layer network has achieved an error rate of 3.45 , which is comparable with state-of-the-art approaches with over 10 times more layers. The source code for the complete system are publicly available." ] }
1902.00438
2918540534
The use of background knowledge remains largely unexploited in many text classification tasks. In this work, we explore word taxonomies as means for constructing new semantic features, which may improve the performance and robustness of the learned classifiers. We propose tax2vec, a parallel algorithm for constructing taxonomy based features, and demonstrate its use on six short-text classification problems, including gender, age and personality type prediction, drug effectiveness and side effect prediction, and news topic prediction. The experimental results indicate that the interpretable features constructed using tax2vec can notably improve the performance of classifiers; the constructed features, in combination with fast, linear classifiers tested against strong baselines, such as hierarchical attention neural networks, achieved comparable or better classification results on short documents. Further, tax2vec can also serve for extraction of corpus-specific keywords. Finally, we investigated the semantic space of potential features where we observe a similarity with the well known Zipf's law.
In practical applications, features are constructed from various data sources, including texts @cite_27, graphs @cite_9, audio recordings, and similar data @cite_43. With the increasing computational power at one's disposal, automated feature construction methods are becoming prevalent. Here, the idea is that, given some criterion, the feature constructor outputs a set of features selected according to that criterion. For example, tf-idf feature construction, applied to a given document corpus, can automatically construct hundreds of thousands of n-gram features in a matter of minutes on an average off-the-shelf laptop.
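To contrast with the text features above, the sketch below constructs simple hand-crafted features from a different data source, a small graph: per-node degree and local clustering coefficient. The features are illustrative only and are not taken from @cite_9.

```python
# Hand-crafted feature construction from a small graph: per-node degree and
# local clustering coefficient (fraction of neighbour pairs that are connected).

from collections import defaultdict
from typing import Dict, List, Tuple

def node_features(edges: List[Tuple[str, str]]) -> Dict[str, List[float]]:
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    feats = {}
    for node, nbrs in adj.items():
        deg = len(nbrs)
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        clustering = links / (deg * (deg - 1) / 2) if deg > 1 else 0.0
        feats[node] = [float(deg), clustering]
    return feats

print(node_features([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]))
# e.g. node "c": degree 3, clustering 1/3 (only the pair (a, b) is connected)
```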
{ "cite_N": [ "@cite_9", "@cite_27", "@cite_43" ], "mid": [ "2135733427", "2018083238", "2360967250", "2090985198" ], "abstract": [ "Feature construction is an effort to transform the input space of classification problems in order to improve the classification performance. Feature construction is particularly important for classifier inducers that cannot transform their input space intrinsically. This paper proposes GPMFC, a multiple-feature construction system for classification problems using genetic programming (GP). This paper takes a nonwrapper approach by introducing a filter-based measure of goodness for constructed features. The constructed, high-level features are functions of original input features. These functions are evolved by GP using an entropy-based fitness function that maximizes the purity of class intervals. A decomposable objective function is proposed so that the system is able to construct multiple high-level features for each problem. The constructed features are used to transform the original input space to a new space with better separability. Extensive experiments are conducted on a number of benchmark problems and symbolic learning classifiers. The results show that, in most cases, the new approach is highly effective in increasing the classification performance in rule-based and decision tree classifiers. The constructed features help improve the learning performance of symbolic learners. The constructed features, however, may lack intelligibility.", "There has been an increasing attention on learning feature representations from the complex, high-dimensional audio data applied in various music information retrieval (MIR) problems. Unsupervised feature learning techniques, such as sparse coding and deep belief networks have been utilized to represent music information as a term-document structure comprising of elementary audio codewords. Despite the widespread use of such bag-of-frames (BoF) model, few attempts have been made to systematically compare different component settings. Moreover, whether techniques developed in the text retrieval community are applicable to audio codewords is poorly understood. To further our understanding of the BoF model, we present in this paper a comprehensive evaluation that compares a large number of BoF variants on three different MIR tasks, by considering different ways of low-level feature representation, codebook construction, codeword assignment, segment-level and song-level feature pooling, tf-idf term weighting, power normalization, and dimension reduction. Our evaluations lead to the following findings: 1) modeling music information by two levels of abstraction improves the result for difficult tasks such as predominant instrument recognition, 2) tf-idf weighting and power normalization improve system performance in general, 3) topic modeling methods such as latent Dirichlet allocation does not work for audio codewords.", "Software defect prediction, which predicts defective code regions, can help developers find bugs and prioritize their testing efforts. To build accurate prediction models, previous studies focus on manually designing features that encode the characteristics of programs and exploring different machine learning algorithms. Existing traditional features often fail to capture the semantic differences of programs, and such a capability is needed for building accurate prediction models. 
To bridge the gap between programs' semantics and defect prediction features, this paper proposes to leverage a powerful representation-learning algorithm, deep learning, to learn semantic representation of programs automatically from source code. Specifically, we leverage Deep Belief Network (DBN) to automatically learn semantic features from token vectors extracted from programs' Abstract Syntax Trees (ASTs). Our evaluation on ten open source projects shows that our automatically learned semantic features significantly improve both within-project defect prediction (WPDP) and cross-project defect prediction (CPDP) compared to traditional features. Our semantic features improve WPDP on average by 14.7 in precision, 11.5 in recall, and 14.2 in F1. For CPDP, our semantic features based approach outperforms the state-of-the-art technique TCA+ with traditional features by 8.9 in F1.", "Rapid movement generation models are described in the literature as an efficient tool to apprehend the handwriting behavior. Fields of application are diverse, including handwriting description, regeneration, and more recently OCR. In this paper, we propose a grapheme-based approach to offline Arabic writer identification and verification. Rather than extracting naturel graphemes from a training corpus using segmentation and clustering, it synthesizes its own graphemes based on the beta-elliptic model. Originality lies in the independence of the grapheme codebook from any training process, and the use of a model instead. One full and four partial codebooks are generated and tested. Using feature selection, raw codebooks are reduced in size with respect to FDR, FDR and cross-correlation, and random subsampling criteria. A total of 60 feature vectors are extracted using template matching, and evaluated with 411 individual writers from the IFN ENIT database. The results presented in this study demonstrated the wide representativity and the good generalization capability of synthetic codebooks. We obtained a top1 rate=90.02 and a top5 rate=96.35 for writer identification, and an EER=2.1 for writer verification. Our approach showed better properties than most of the surveyed techniques in terms of supported corpus size and identification rates. To the best of our knowledge, this study is among the first to exploit the concept of model-based synthetic codebooks in writer identification and verification. HighlightsUniversal synthetic codebooks are used for writer identification and verification.The beta-elliptic handwriting generation model is used to create the codebooks.Neither corpus-driven training nor prior segmentation is needed in this approach.Synthetic codebook demonstrated good generalization capabilities for Arabic." ] }
1902.00438
2918540534
The use of background knowledge remains largely unexploited in many text classification tasks. In this work, we explore word taxonomies as means for constructing new semantic features, which may improve the performance and robustness of the learned classifiers. We propose tax2vec, a parallel algorithm for constructing taxonomy based features, and demonstrate its use on six short-text classification problems, including gender, age and personality type prediction, drug effectiveness and side effect prediction, and news topic prediction. The experimental results indicate that the interpretable features constructed using tax2vec can notably improve the performance of classifiers; the constructed features, in combination with fast, linear classifiers tested against strong baselines, such as hierarchical attention neural networks, achieved comparable or better classification results on short documents. Further, tax2vec can also serve for extraction of corpus-specific keywords. Finally, we investigated the semantic space of potential features where we observe a similarity with the well known Zipf's law.
Feature selection thus filters out the unnecessary features, with the aim of yielding a compact, information-rich representation of the unstructured input. There exist many approaches to feature selection; they can be based on the individual feature's information content, correlation, significance, etc. @cite_64. Feature selection is, for example, relevant in biological data sets, where often only a handful of key gene markers are of interest; these can be identified by assessing the impact of individual features on the target space @cite_46.
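A minimal illustration of criterion-based feature selection, using standard scikit-learn utilities (assumed available): keep the k features with the highest mutual information with the class label. The synthetic data and the choice of criterion are ours, not those of the cited works.

```python
# Criterion-based feature selection: keep the k features with the highest
# mutual information with the class label. Uses scikit-learn and numpy.

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, size=n)

# 50 noise features plus 2 informative ones derived from the label.
X_noise = rng.standard_normal((n, 50))
X_info = np.column_stack([y + 0.1 * rng.standard_normal(n),
                          2 * y + 0.1 * rng.standard_normal(n)])
X = np.hstack([X_noise, X_info])

selector = SelectKBest(score_func=mutual_info_classif, k=5)
selector.fit(X, y)
print(sorted(np.flatnonzero(selector.get_support())))  # should include columns 50 and 51
```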
{ "cite_N": [ "@cite_46", "@cite_64" ], "mid": [ "1952835952", "2136051823", "2083666679", "2056168656" ], "abstract": [ "Nowadays, many disciplines have to deal with big datasets that additionally involve a high number of features. Feature selection methods aim at eliminating noisy, redundant, or irrelevant features that may deteriorate the classification performance. However, traditional methods lack enough scalability to cope with datasets of millions of instances and extract successful results in a delimited time. This paper presents a feature selection algorithm based on evolutionary computation that uses the MapReduce paradigm to obtain subsets of features from big datasets. The algorithm decomposes the original dataset in blocks of instances to learn from them in the map phase; then, the reduce phase merges the obtained partial results into a final vector of feature weights, which allows a flexible application of the feature selection procedure using a threshold to determine the selected subset of features. The feature selection method is evaluated by using three well-known classifiers (SVM, Logistic Regression, and Naive Bayes) implemented within the Spark framework to address big data problems. In the experiments, datasets up to 67 millions of instances and up to 2000 attributes have been managed, showing that this is a suitable framework to perform evolutionary feature selection, improving both the classification accuracy and its runtime when dealing with big data problems.", "Feature selection is an important technique for data mining. Despite its importance, most studies of feature selection are restricted to batch learning. Unlike traditional batch learning methods, online learning represents a promising family of efficient and scalable machine learning algorithms for large-scale applications. Most existing studies of online learning require accessing all the attributes features of training instances. Such a classical setting is not always appropriate for real-world applications when data instances are of high dimensionality or it is expensive to acquire the full set of attributes features. To address this limitation, we investigate the problem of online feature selection (OFS) in which an online learner is only allowed to maintain a classifier involved only a small and fixed number of features. The key challenge of online feature selection is how to make accurate prediction for an instance using a small number of active features. This is in contrast to the classical setup of online learning where all the features can be used for prediction. We attempt to tackle this challenge by studying sparsity regularization and truncation techniques. Specifically, this article addresses two different tasks of online feature selection: 1) learning with full input, where an learner is allowed to access all the features to decide the subset of active features, and 2) learning with partial input, where only a limited number of features is allowed to be accessed for each instance by the learner. We present novel algorithms to solve each of the two problems and give their performance analysis. We evaluate the performance of the proposed algorithms for online feature selection on several public data sets, and demonstrate their applications to real-world problems including image classification in computer vision and microarray gene expression analysis in bioinformatics. 
The encouraging results of our experiments validate the efficacy and efficiency of the proposed techniques.", "By removing the irrelevant and redundant features, feature selection aims to find a compact representation of the original feature with good generalization ability. With the prevalence of unlabeled data, unsupervised feature selection has shown to be effective in alleviating the curse of dimensionality, and is essential for comprehensive analysis and understanding of myriads of unlabeled high dimensional data. Motivated by the success of low-rank representation in subspace clustering, we propose a regularized self-representation (RSR) model for unsupervised feature selection, where each feature can be represented as the linear combination of its relevant features. By using L 2 , 1 -norm to characterize the representation coefficient matrix and the representation residual matrix, RSR is effective to select representative features and ensure the robustness to outliers. If a feature is important, then it will participate in the representation of most of other features, leading to a significant row of representation coefficients, and vice versa. Experimental analysis on synthetic and real-world data demonstrates that the proposed method can effectively identify the representative features, outperforming many state-of-the-art unsupervised feature selection methods in terms of clustering accuracy, redundancy reduction and classification accuracy. HighlightsA regularized self-representation (RSR) model is proposed for unsupervised feature selection.An iterative reweighted least-squares algorithm is proposed to solve the RSR model.The proposed method shows superior performance to state-of-the-art.", "With the advent of high dimensionality, adequate identification of relevant features of the data has become indispensable in real-world scenarios. In this context, the importance of feature selection is beyond doubt and different methods have been developed. However, with such a vast body of algorithms available, choosing the adequate feature selection method is not an easy-to-solve question and it is necessary to check their effectiveness on different situations. Nevertheless, the assessment of relevant features is difficult in real datasets and so an interesting option is to use artificial data. In this paper, several synthetic datasets are employed for this purpose, aiming at reviewing the performance of feature selection methods in the presence of a crescent number or irrelevant features, noise in the data, redundancy and interaction between attributes, as well as a small ratio between number of samples and number of features. Seven filters, two embedded methods, and two wrappers are applied over eleven synthetic datasets, tested by four classifiers, so as to be able to choose a robust method, paving the way for its application to real datasets." ] }
1902.00438
2918540534
The use of background knowledge remains largely unexploited in many text classification tasks. In this work, we explore word taxonomies as means for constructing new semantic features, which may improve the performance and robustness of the learned classifiers. We propose tax2vec, a parallel algorithm for constructing taxonomy based features, and demonstrate its use on six short-text classification problems, including gender, age and personality type prediction, drug effectiveness and side effect prediction, and news topic prediction. The experimental results indicate that the interpretable features constructed using tax2vec can notably improve the performance of classifiers; the constructed features, in combination with fast, linear classifiers tested against strong baselines, such as hierarchical attention neural networks, achieved comparable or better classification results on short documents. Further, tax2vec can also serve for extraction of corpus-specific keywords. Finally, we investigated the semantic space of potential features where we observe a similarity with the well known Zipf's law.
In this section we briefly discuss the works that influenced the development of the proposed approach. One of the most elegant ways to learn from graphs is to transform them into propositional tables, which are a suitable input for many downstream learning algorithms. Recent approaches to graph vectorization include node2vec @cite_18 , an algorithm for constructing features from homogeneous networks; metapath2vec @cite_30 , its extension to heterogeneous networks; mol2vec @cite_53 , a vectorization algorithm focused on molecular data; struc2vec @cite_24 , a graph vectorization algorithm based on homophily relations between nodes; and more. All of these approaches are non-symbolic, as the obtained vector representations (embeddings) are not interpretable. Similarly, the recently introduced graph-convolutional neural networks also yield local node embeddings, which additionally take node feature vectors into account @cite_26 @cite_5 .
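To make the random-walk-plus-skip-gram recipe behind methods like node2vec tangible, here is a heavily simplified sketch. It uses unbiased first-order walks (node2vec itself biases walks with return and in-out parameters) and assumes networkx and gensim version 4 or later are available:

import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(graph, walks_per_node=10, walk_length=20, seed=0):
    """Generate uniform random walks; node2vec would bias these by return/in-out parameters."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in graph.nodes():
            walk = [start]
            while len(walk) < walk_length:
                neighbors = list(graph.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))
            walks.append([str(n) for n in walk])   # tokens for the skip-gram model
    return walks

G = nx.karate_club_graph()                          # small toy graph
model = Word2Vec(random_walks(G), vector_size=32, window=5, min_count=1, sg=1, epochs=5)
print(model.wv["0"].shape)                          # non-symbolic 32-dimensional node embedding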
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_26", "@cite_53", "@cite_24", "@cite_5" ], "mid": [ "2796167946", "2614812929", "2798785997", "2804115458" ], "abstract": [ "Celebrated and its fruitful variants are powerful models to achieve excellent performance on the tasks that map sequences to sequences. However, these are many machine learning tasks with inputs naturally represented in a form of graphs, which imposes significant challenges to existing Seq2Seq models for lossless conversion from its graph form to the sequence. In this work, we present a general end-to-end approach to map the input graph to a sequence of vectors, and then another attention-based LSTM to decode the target sequence from these vectors. Specifically, to address inevitable information loss for data conversion, we introduce a novel graph-to-sequence neural network model that follows the encoder-decoder architecture. Our method first uses an improved graph-based neural network to generate the node and graph embeddings by a novel aggregation strategy to incorporate the edge direction information into the node embeddings. We also propose an attention based mechanism that aligns node embeddings and decoding sequence to better cope with large graphs. Experimental results on bAbI task, Shortest Path Task, and Natural Language Generation Task demonstrate that our model achieves the state-of-the-art performance and significantly outperforms other baselines. We also show that with the proposed aggregation strategy, our proposed model is able to quickly converge to good performance.", "We propose a new method for embedding graphs while preserving directed edge information. Learning such continuous-space vector representations (or embeddings) of nodes in a graph is an important first step for using network information (from social networks, user-item graphs, knowledge bases, etc.) in many machine learning tasks. Unlike previous work, we (1) explicitly model an edge as a function of node embeddings, and we (2) propose a novel objective, the graph likelihood, which contrasts information from sampled random walks with non-existent edges. Individually, both of these contributions improve the learned representations, especially when there are memory constraints on the total size of the embeddings. When combined, our contributions enable us to significantly improve the state-of-the-art by learning more concise representations that better preserve the graph structure. We evaluate our method on a variety of link-prediction task including social networks, collaboration networks, and protein interactions, showing that our proposed method learn representations with error reductions of up to 76 and 55 , on directed and undirected graphs. In addition, we show that the representations learned by our method are quite space efficient, producing embeddings which have higher structure-preserving accuracy but are 10 times smaller.", "Network embedding methodologies, which learn a distributed vector representation for each vertex in a network, have attracted considerable interest in recent years. Existing works have demonstrated that vertex representation learned through an embedding method provides superior performance in many real-world applications, such as node classification, link prediction, and community detection. 
However, most of the existing methods for network embedding only utilize topological information of a vertex, ignoring a rich set of nodal attributes (such as, user profiles of an online social network, or textual contents of a citation network), which is abundant in all real-life networks. A joint network embedding that takes into account both attributional and relational information entails a complete network information and could further enrich the learned vector representations. In this work, we present Neural-Brane, a novel Neural Bayesian Personalized Ranking based Attributed Network Embedding. For a given network, Neural-Brane extracts latent feature representation of its vertices using a designed neural network model that unifies network topological information and nodal attributes; Besides, it utilizes Bayesian personalized ranking objective, which exploits the proximity ordering between a similar node-pair and a dissimilar node-pair. We evaluate the quality of vertex embedding produced by Neural-Brane by solving the node classification and clustering tasks on four real-world datasets. Experimental results demonstrate the superiority of our proposed method over the state-of-the-art existing methods.", "Recently a variety of methods have been developed to encode graphs into low-dimensional vectors that can be easily exploited by machine learning algorithms. The majority of these methods start by embedding the graph nodes into a low-dimensional vector space, followed by using some scheme to aggregate the node embeddings. In this work, we develop a new approach to learn graph-level representations, which includes a combination of unsupervised and supervised learning components. We start by learning a set of node representations in an unsupervised fashion. Graph nodes are mapped into node sequences sampled from random walk approaches approximated by the Gumbel-Softmax distribution. Recurrent neural network (RNN) units are modified to accommodate both the node representations as well as their neighborhood information. Experiments on standard graph classification benchmarks demonstrate that our proposed approach achieves superior or comparable performance relative to the state-of-the-art algorithms in terms of convergence speed and classification accuracy. We further illustrate the effectiveness of the different components used by our approach." ] }
1902.00369
2912289145
Accurate segmentation of brain tissue in magnetic resonance images (MRI) is a difficult task due to different types of brain abnormalities. In this paper, we review the deformation method, focusing on the construction of diffeomorphisms, clearly state a new formulation of the deformation problem for moving domains, and apply it to natural images, face images and MRI brain images. We also use a new method to construct diffeomorphisms through a completely different approach. The idea is to directly control the Jacobian determinant and the curl vector of a transformation and use them as one CNN channel, together with other modalities (T1-weighted, T1-IR and T2-FLAIR), to obtain more accurate brain segmentation results. More importantly, we discuss the influence of some optimization parameters on the precision of MRI brain segmentation through both numerical experiments and theoretical analysis. We test this method on the IBSR dataset and the MRBrainS18 dataset based on VoxResNet and demonstrate the influence of three parameters on the accuracy of MRI brain segmentation. Finally, we also compare the segmentation performance of our method in two networks, VoxResNet and 3D U-Net. We believe the proposed method can advance the performance of brain segmentation and clinical diagnosis.
This is known as the deformation method. A series of applications followed, including adaptive moving grids @cite_7 and steady Euler flow calculations @cite_5 . In addition, the dynamic version of the deformation method on fixed domains was developed in @cite_7 based on solving Poisson's equations. In 2004, the least-squares finite element method was first applied to solve the div-curl system in @cite_0 , which extends the deformation method of grid generation to moving domains. This version constructs a diffeomorphism @math , for @math
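For orientation, a generic statement of the div-curl construction behind the deformation method is sketched below; normalizations and boundary conditions vary across the cited papers, so this should be read as an illustrative form rather than the exact system of @cite_0 . Given a positive, suitably normalized monitor function \(f\) on a domain \(\Omega\), one first solves a div-curl system for a vector field \(\mathbf{v}\),

\[
\nabla \cdot \mathbf{v}(x) = f(x) - 1, \qquad
\nabla \times \mathbf{v}(x) = \mathbf{g}(x), \qquad
\mathbf{v}(x) \cdot \mathbf{n}(x) = 0 \ \text{on } \partial\Omega,
\]

with \(\mathbf{g} = \mathbf{0}\) in the curl-free case, and then obtains the transformation \(\phi\) as the flow of a suitably rescaled field,

\[
\partial_t \phi(x,t) = \mathbf{v}_t(\phi(x,t)), \qquad \phi(x,0) = x,
\]

so that the Jacobian determinant of the final map \(\phi(\cdot,1)\) is controlled by the prescribed \(f\).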
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_7" ], "mid": [ "2028949045", "2079263908", "2170167891", "2060306687" ], "abstract": [ "An adaptive remeshing procedure based on a cell volume deformation method is presented. Starting with an initial grid, this method offers direct cell volume control through the specification of the transformation Jacobian. Grid points are moved with appropriate grid velocities so that the specified cell volume distribution can be achieved at the end of the grid movement without adding or removing grid points. The grid velocities are determined by solving a scalar Poisson equation. This method is applied to solving the compressible Euler equations. Computational test cases of transonic flow over an airfoil are presented and demonstrate the desired control of grid sizes across shock waves.", "This paper deals with a high-order accurate discontinuous finite element method for the numerical solution of the compressible Navier?Stokes equations. We extend a discontinuous finite element discretization originally considered for hyperbolic systems such as the Euler equations to the case of the Navier?Stokes equations by treating the viscous terms with a mixed formulation. The method combines two key ideas which are at the basis of the finite volume and of the finite element method, the physics of wave propagation being accounted for by means of Riemann problems and accuracy being obtained by means of high-order polynomial approximations within elements. As a consequence the method is ideally suited to compute high-order accurate solution of the Navier?Stokes equations on unstructured grids. The performance of the proposed method is illustrated by computing the compressible viscous flow on a flat plate and around a NACA0012 airfoil for several flow regimes using constant, linear, quadratic, and cubic elements.", "This paper examine the Euler-Lagrange equations for the solution of the large deformation diffeomorphic metric mapping problem studied in (1998) and Trouve (1995) in which two images I 0, I 1 are given and connected via the diffeomorphic change of coordinates I 0???1=I 1 where ?=?1 is the end point at t= 1 of curve ? t , t?[0, 1] satisfying .? t =v t (? t ), t? [0,1] with ?0=id. The variational problem takes the form @math where ?v t? V is an appropriate Sobolev norm on the velocity field v t(·), and the second term enforces matching of the images with ?·?L 2 representing the squared-error norm. In this paper we derive the Euler-Lagrange equations characterizing the minimizing vector fields v t, t?[0, 1] assuming sufficient smoothness of the norm to guarantee existence of solutions in the space of diffeomorphisms. We describe the implementation of the Euler equations using semi-lagrangian method of computing particle flows and show the solutions for various examples. As well, we compute the metric distance on several anatomical configurations as measured by ?0 1?v t? V dt on the geodesic shortest paths.", "In this paper we analyze the problem of adaptivity for one-step numerical methods for solving ODEs, both IVPs and BVPs, with a view to generating grids of minimal computational cost for which the local error is below a prescribed tolerance (optimal grids). The grids are generated by introducing an auxiliary independent variable τ and finding a grid deformation map, t=Θ(τ), that maps an equidistant grid τj to a non-equidistant grid in the original independent variable, tj . An optimal deformation map Θ is determined by a variational approach. 
Finally, we investigate the cost of the solution procedure and compare it to the cost of using equidistant grids. We show that if the principal error function is non-constant, an adaptive method is always more efficient than a non-adaptive method." ] }
1902.00369
2912289145
Accurate segmentation of brain tissue in magnetic resonance images (MRI) is a difficult task due to different types of brain abnormalities. In this paper, we review the deformation method, focusing on the construction of diffeomorphisms, clearly state a new formulation of the deformation problem for moving domains, and apply it to natural images, face images and MRI brain images. We also use a new method to construct diffeomorphisms through a completely different approach. The idea is to directly control the Jacobian determinant and the curl vector of a transformation and use them as one CNN channel, together with other modalities (T1-weighted, T1-IR and T2-FLAIR), to obtain more accurate brain segmentation results. More importantly, we discuss the influence of some optimization parameters on the precision of MRI brain segmentation through both numerical experiments and theoretical analysis. We test this method on the IBSR dataset and the MRBrainS18 dataset based on VoxResNet and demonstrate the influence of three parameters on the accuracy of MRI brain segmentation. Finally, we also compare the segmentation performance of our method in two networks, VoxResNet and 3D U-Net. We believe the proposed method can advance the performance of brain segmentation and clinical diagnosis.
The success of the deformation method of grid generation relies on the div-curl system, which shows the great importance of the curl vector ( @math ). Hence, in a 2016 paper @cite_15 , Xi Chen proposed a completely new approach that directly prescribes the JD (the Jacobian determinant @math ) and the CV (the curl vector @math ). The calculus of variations is used to formulate a new variational method for constructing transformations with prescribed JD and CV @cite_15 . A recovery experiment was designed in which the new variational method is driven by the same JD and CV.
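For illustration only, since the precise functional, weights, and constraints in @cite_15 may differ, a variational formulation with prescribed Jacobian determinant \(f_0\) (JD) and prescribed curl vector \(\mathbf{g}_0\) (CV) can be sketched as a least-squares problem over transformations \(\phi\) of a domain \(\Omega\):

\[
\min_{\phi} \; \int_{\Omega} \big( \det \nabla \phi(x) - f_0(x) \big)^2 \, dx
\;+\; \int_{\Omega} \big\lVert \nabla \times \phi(x) - \mathbf{g}_0(x) \big\rVert^2 \, dx,
\]

subject to boundary conditions such as \(\phi(x) = x\) on \(\partial\Omega\). The recovery experiment mentioned above then amounts to extracting \(f_0\) and \(\mathbf{g}_0\) from a known transformation and checking that the minimizer reproduces it.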
{ "cite_N": [ "@cite_15" ], "mid": [ "1437816530", "2028949045", "2060306687", "1969724971" ], "abstract": [ "Adaptive grid generation is an active research topic for numer- ical solution of differential equations. In this paper, we propose a variational method which generates transformations with prescribed Jacobian determinant and curl. Then we use this transformation to achieve adaptive grid generation task, and show the importance of curl in a transformation.", "An adaptive remeshing procedure based on a cell volume deformation method is presented. Starting with an initial grid, this method offers direct cell volume control through the specification of the transformation Jacobian. Grid points are moved with appropriate grid velocities so that the specified cell volume distribution can be achieved at the end of the grid movement without adding or removing grid points. The grid velocities are determined by solving a scalar Poisson equation. This method is applied to solving the compressible Euler equations. Computational test cases of transonic flow over an airfoil are presented and demonstrate the desired control of grid sizes across shock waves.", "In this paper we analyze the problem of adaptivity for one-step numerical methods for solving ODEs, both IVPs and BVPs, with a view to generating grids of minimal computational cost for which the local error is below a prescribed tolerance (optimal grids). The grids are generated by introducing an auxiliary independent variable τ and finding a grid deformation map, t=Θ(τ), that maps an equidistant grid τj to a non-equidistant grid in the original independent variable, tj . An optimal deformation map Θ is determined by a variational approach. Finally, we investigate the cost of the solution procedure and compare it to the cost of using equidistant grids. We show that if the principal error function is non-constant, an adaptive method is always more efficient than a non-adaptive method.", "A new method for generating adaptive moving grids is formulated based on physical quantities. Level set functions are used to construct the adaptive grids, which are solutions of the standard level set evolution equation with the Cartesian coordinates as initial values. The intersection points of the level sets of the evolving functions form a new grid at each time. The velocity vector in the evolution equation is chosen according to a monitor function and is equal to the node velocity. A uniform grid is then deformed to a moving grid with desired cell volume distribution at each time. The method achieves precise control over the Jacobian determinant of the grid mapping as the traditional deformation method does. The new method is consistent with the level set approach to dynamic moving interface problems." ] }
1902.00563
2914133091
Keeping the electricity production in balance with the actual demand is becoming a difficult and expensive task despite the involvement of experienced human operators. This is due to the increasing complexity of the electric power grid system, with intermittent renewable production as one of the contributors. Advance information about an upcoming imbalance can help the transmission system operator to adjust the production plans, and thus ensure a high security of supply by reducing the use of costly balancing reserves, and consequently reduce undesirable fluctuations of the 50 Hz power system frequency. In this paper, we introduce the relatively new problem of intra-hour imbalance forecasting for the transmission system operator (TSO). We focus on the use case of the Norwegian TSO, Statnett. We present a complementary imbalance forecasting tool that is able to support the TSO in determining the trend of future imbalances, and show the potential to proactively alleviate imbalances with a higher accuracy compared to the contemporary solution.
The first publication @cite_4 discusses and demonstrates the limitations and insufficiency of widely applied but basic forecasting techniques such as ARIMA and exponential smoothing, caused by the non-periodic, non-stationary and noisy character of imbalance time-series. Instead, contemporary state-of-the-art artificial neural networks (ANN) are applied to uncover the non-linearity and irregularity of the data and to predict the daily imbalance medians. The authors report improvements over methods based on linear regression, and discuss the fact that none of the neural networks provided optimal predictions for all market conditions. Two use cases were evaluated: prediction of daily medians with a three-month training and one-month testing window, and prediction of six values for each day with a four-week training and one-week testing window. The following predictor variables were employed: demand forecast, demand forecast error, accepted bid volumes, accepted offer volumes, forward trades, gate closure imbalance volume, accepted offers and bids, imbalance prices and day of the week. However, it is not clear whether historical imbalances were utilized as one of the input features of the models.
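To ground the comparison, the sketch below shows the kind of sliding-window ARIMA baseline such studies evaluate against, assuming statsmodels is installed; the window sizes, the ARIMA order, and the synthetic series are all hypothetical and do not come from @cite_4 :

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
imbalance = np.cumsum(rng.normal(size=400))   # toy, non-stationary imbalance series

window, horizon = 300, 6                      # hypothetical training window and forecast horizon
forecasts, actuals = [], []
for start in range(0, len(imbalance) - window - horizon, horizon):
    train = imbalance[start:start + window]
    test = imbalance[start + window:start + window + horizon]
    fit = ARIMA(train, order=(2, 1, 2)).fit()  # toy order; real work would tune it
    forecasts.append(fit.forecast(steps=horizon))
    actuals.append(test)

mae = np.mean(np.abs(np.concatenate(forecasts) - np.concatenate(actuals)))
print(f"sliding-window ARIMA MAE: {mae:.3f}")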
{ "cite_N": [ "@cite_4" ], "mid": [ "2146552111", "2603648311", "1989130706", "789578048" ], "abstract": [ "Abstract A feedforward neural network which can account for nonlinear relationships was used to compare ARIMA and neural network price forecasting performance. Data used was monthly live cattle and wheat prices from 1950 through 1990. The experiment was repeated seven times for successive three year periods. This involved using a walk forward or sliding window approach from 1970 through 1990 which generated out of sample results. The neural network models achieved a 27 percent and 56 percent lower mean squared error than ARIMA model. The absolute mean error and mean absolute percent error were also lower for the neural network models. The neural network models were able to capture a significant number of turning points for both wheat and cattle, while the ARIMA model was only able to do so for wheat. Since this forecasting method is not problem specific and uses only past prices, it can be applied to other forecasting problems such as stocks and other financial prices.", "textabstractForecasting financial time series using past observations has been a significant topic of interest. While temporal relationships in the data exist, they are difficult to analyze and predict accurately due to the non-linear trends and noise present in the series. We propose to learn these dependencies by a convolutional neural network. In particular the focus is on multivariate time series forecasting. Effectively, we use multiple financial time series as input in the neural network, thus conditioning the forecast of a time series x(t) on both its own history as well as that of a second (or third) time series y(t). Training a model on multiple stock series allows the network to exploit the correlation structure between these series so that the network can learn the market dynamics in shorter sequences of data. We show that long-term temporal dependencies in and between financial time series can be learned by means of a deep convolutional neural network based on the WaveNet model [2]. The network makes use of dilated convolutions applied to multiple time series so that the receptive field of the network is wide enough to learn both short and long-term dependencies. The architecture includes batch normalization and uses a 1 × k convolution with parametrized skip connections from the input time series as well as the time series we condition on, in this way learning long-term interdependencies in an efficient manner [1]. This improves the forecast, while at the same time limiting the requirement for a long historical price series and reducing the noise. Knowing the strong performance of CNNs on classification problems we show that they can be applied successfully to forecasting financial time series, without the need of large samples of data. We compare the performance of the WaveNet model to a state-of-the-art fully convolutional network (FCN), and an autoregressive model popular in econometrics and show that our model is much better able to learn important dependencies in between financial time series resulting in a more robust and accurate forecast.", "A suitable combination of linear and nonlinear models provides a more accurate prediction model than an individual linear or nonlinear model for forecasting time series data originating from various applications. 
The linear autoregressive integrated moving average (ARIMA) and nonlinear artificial neural network (ANN) models are explored in this paper to devise a new hybrid ARIMA-ANN model for the prediction of time series data. Many of the hybrid ARIMA-ANN models which exist in the literature apply an ARIMA model to given time series data, consider the error between the original and the ARIMA-predicted data as a nonlinear component, and model it using an ANN in different ways. Though these models give predictions with higher accuracy than the individual models, there is scope for further improvement in the accuracy if the nature of the given time series is taken into account before applying the models. In the work described in this paper, the nature of volatility was explored using a moving-average filter, and then an ARIMA and an ANN model were suitably applied. Using a simulated data set and experimental data sets such as sunspot data, electricity price data, and stock market data, the proposed hybrid ARIMA-ANN model was applied along with individual ARIMA and ANN models and some existing hybrid ARIMA-ANN models. The results obtained from all of these data sets show that for both one-step-ahead and multistep-ahead forecasts, the proposed hybrid model has higher prediction accuracy.", "Graphical abstractDisplay Omitted HighlightsWe predict maximum and minimum day stock prices of power companies.The methodology is based on attribute selection and time series prediction.The most relevant attributes are determined by correlation analysis.The actual time series prediction is carried out by neural networks.The proposed methodology provides very good results. Time series forecasting has been widely used to determine future prices of stocks, and the analysis and modeling of finance time series is an important task for guiding investors' decisions and trades. Nonetheless, the prediction of prices by means of a time series is not trivial and it requires a thorough analysis of indexes, variables and other data. In addition, in a dynamic environment such as the stock market, the non-linearity of the time series is a pronounced characteristic, and this immediately affects the efficacy of stock price forecasts. Thus, this paper aims at proposing a methodology that forecasts the maximum and minimum day stock prices of three Brazilian power distribution companies, which are traded in the Sao Paulo Stock Exchange BM&FBovespa. When compared to the other papers already published in the literature, one of the main contributions and novelty of this paper is the forecast of the range of closing prices of Brazilian power distribution companies' stocks. As a result of its application, investors may be able to define threshold values for their stock trades. Moreover, such a methodology may be of great interest to home brokers who do not possess ample knowledge to invest in such companies. The proposed methodology is based on the calculation of distinct features to be analysed by means of attribute selection, defining the most relevant attributes to predict the maximum and minimum day stock prices of each company. Then, the actual prediction was carried out by Artificial Neural Networks (ANNs), which had their performances evaluated by means of Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE) calculations. The proposed methodology for addressing the problem of prediction of maximum and minimum day stock prices for Brazilian distribution companies is effective. 
In addition, these results were only possible to be achieved due to the combined use of attribute selection by correlation analysis and ANNs." ] }
1902.00489
2911903816
Recent approaches to English-language sentence compression rely on parallel corpora consisting of sentence-compression pairs. However, a sentence may be shortened in many different ways, which each might be suited to the needs of a particular application. Therefore, in this work, we collect and model crowdsourced judgements of the acceptability of many possible sentence shortenings. We then show how a model of such judgements can be used to support a flexible approach to the compression task. We release our model and dataset for future work.
Researchers have been studying extractive sentence compression for nearly two decades @cite_24 @cite_37 @cite_7 . Recent approaches are often based on a large compression corpus (https://github.com/google-research-datasets/sentence-compression), which was automatically constructed by using news headlines to identify "gold standard" shortenings @cite_43 . State-of-the-art models trained on this dataset @cite_7 @cite_45 @cite_6 can reproduce gold compressions (i.e., a perfect token-for-token match) with accuracy higher than 30%.
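To illustrate the deletion-based (extractive) framing and the token-for-token match criterion, here is a self-contained toy sketch; the sentence, the gold compression, and the brute-force candidate enumeration are invented for illustration and are not how the cited models operate:

from itertools import combinations

def extractive_shortenings(tokens, max_deletions=2):
    """Yield candidate compressions obtained by deleting up to max_deletions tokens."""
    n = len(tokens)
    for k in range(max_deletions + 1):
        for drop in combinations(range(n), k):
            dropped = set(drop)
            yield [t for i, t in enumerate(tokens) if i not in dropped]

sentence = "the quick brown fox jumps over the lazy dog".split()
gold = "the fox jumps over the dog".split()        # hypothetical gold compression

candidates = list(extractive_shortenings(sentence, max_deletions=3))
exact_match = any(c == gold for c in candidates)   # token-for-token match criterion
print(f"{len(candidates)} candidates, exact match found: {exact_match}")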
{ "cite_N": [ "@cite_37", "@cite_7", "@cite_6", "@cite_24", "@cite_43", "@cite_45" ], "mid": [ "2125527113", "2251656952", "2163117351", "2511538013" ], "abstract": [ "A well-recognized limitation of research on supervised sentence compression is the dearth of available training data. We propose a new and bountiful resource for such training data, which we obtain by mining the revision history of Wikipedia for sentence compressions and expansions. Using only a fraction of the available Wikipedia data, we have collected a training corpus of over 380,000 sentence pairs, two orders of magnitude larger than the standardly used Ziff-Davis corpus. Using this new-found data, we propose a novel lexicalized noisy channel model for sentence compression, achieving improved results in grammaticality and compression rate criteria with a slight decrease in importance.", "A major challenge in supervised sentence compression is making use of rich feature representations because of very scarce parallel data. We address this problem and present a method to automatically build a compression corpus with hundreds of thousands of instances on which deletion-based algorithms can be trained. In our corpus, the syntactic trees of the compressions are subtrees of their uncompressed counterparts, and hence supervised systems which require a structural alignment between the input and output can be successfully trained. We also extend an existing unsupervised compression method with a learning module. The new system uses structured prediction to learn from lexical, syntactic and other features. An evaluation with human raters shows that the presented data harvesting method indeed produces a parallel corpus of high quality. Also, the supervised system trained on this corpus gets high scores both from human raters and in an automatic evaluation setting, significantly outperforming a strong baseline.", "We present a substitution-only approach to sentence compression which \"tightens\" a sentence by reducing its character length. Replacing phrases with shorter paraphrases yields paraphrastic compressions as short as 60 of the original length. In support of this task, we introduce a novel technique for re-ranking paraphrases extracted from bilingual corpora. At high compression rates paraphrastic compressions outperform a state-of-the-art deletion model in an oracle experiment. For further compression, deleting from oracle paraphrastic compressions preserves more meaning than deletion alone. In either setting, paraphrastic compression shows promise for surpassing deletion-only methods.", "We introduce a manually-created, multireference dataset for abstractive sentence and short paragraph compression. First, we examine the impact of singleand multi-sentence level editing operations on human compression quality as found in this corpus. We observe that substitution and rephrasing operations are more meaning preserving than other operations, and that compressing in context improves quality. Second, we systematically explore the correlations between automatic evaluation metrics and human judgments of meaning preservation and grammaticality in the compression task, and analyze the impact of the linguistic units used and precision versus recall measures on the quality of the metrics. Multi-reference evaluation metrics are shown to offer significant advantage over single reference-based metrics." ] }
1902.00489
2911903816
Recent approaches to English-language sentence compression rely on parallel corpora consisting of sentence-compression pairs. However, a sentence may be shortened in many different ways, which each might be suited to the needs of a particular application. Therefore, in this work, we collect and model crowdsourced judgements of the acceptability of many possible sentence shortenings. We then show how a model of such judgements can be used to support a flexible approach to the compression task. We release our model and dataset for future work.
However, because a sentence may be compressed in many ways (Table ), this work introduces human acceptability judgements as a new and more flexible form of supervision for the sentence compression task. Our approach is thus closely connected to research which seeks to model human judgements of the well-formedness of a sentence @cite_40 @cite_28 @cite_15 @cite_16 . Unlike such studies, our work is strictly concerned with human perceptions of shortened sentences. We compare our model to in . Our work also solicits human judgements of shortenings from naturally-occurring news text, instead of sentences drawn from syntax textbooks @cite_4 @cite_16 or created via automatic translation @cite_15 .
{ "cite_N": [ "@cite_4", "@cite_28", "@cite_40", "@cite_15", "@cite_16" ], "mid": [ "2511538013", "2251654079", "2335367650", "2251656952" ], "abstract": [ "We introduce a manually-created, multireference dataset for abstractive sentence and short paragraph compression. First, we examine the impact of singleand multi-sentence level editing operations on human compression quality as found in this corpus. We observe that substitution and rephrasing operations are more meaning preserving than other operations, and that compressing in context improves quality. Second, we systematically explore the correlations between automatic evaluation metrics and human judgments of meaning preservation and grammaticality in the compression task, and analyze the impact of the linguistic units used and precision versus recall measures on the quality of the metrics. Multi-reference evaluation metrics are shown to offer significant advantage over single reference-based metrics.", "We present an LSTM approach to deletion-based sentence compression where the task is to translate a sentence into a sequence of zeros and ones, corresponding to token deletion decisions. We demonstrate that even the most basic version of the system, which is given no syntactic information (no PoS or NE tags, or dependencies) or desired compression length, performs surprisingly well: around 30 of the compressions from a large test set could be regenerated. We compare the LSTM system with a competitive baseline which is trained on the same amount of data but is additionally provided with all kinds of linguistic features. In an experiment with human raters the LSTMbased model outperforms the baseline achieving 4.5 in readability and 3.8 in informativeness.", "We present a discriminative model for single-document summarization that integrally combines compression and anaphoricity constraints. Our model selects textual units to include in the summary based on a rich set of sparse features whose weights are learned on a large corpus. We allow for the deletion of content within a sentence when that deletion is licensed by compression rules; in our framework, these are implemented as dependencies between subsentential units of text. Anaphoricity constraints then improve cross-sentence coherence by guaranteeing that, for each pronoun included in the summary, the pronoun's antecedent is included as well or the pronoun is rewritten as a full mention. When trained end-to-end, our final system outperforms prior work on both ROUGE as well as on human judgments of linguistic quality.", "A major challenge in supervised sentence compression is making use of rich feature representations because of very scarce parallel data. We address this problem and present a method to automatically build a compression corpus with hundreds of thousands of instances on which deletion-based algorithms can be trained. In our corpus, the syntactic trees of the compressions are subtrees of their uncompressed counterparts, and hence supervised systems which require a structural alignment between the input and output can be successfully trained. We also extend an existing unsupervised compression method with a learning module. The new system uses structured prediction to learn from lexical, syntactic and other features. An evaluation with human raters shows that the presented data harvesting method indeed produces a parallel corpus of high quality. 
Also, the supervised system trained on this corpus gets high scores both from human raters and in an automatic evaluation setting, significantly outperforming a strong baseline." ] }
1902.00505
2913527169
This paper proposes a novel algorithm that learns a formal regular grammar from real-world continuous data, such as videos or other streaming data. Learning latent terminals, non-terminals, and production rules directly from streaming data allows the construction of a generative model capturing sequential structures with multiple possibilities. Our model is fully differentiable and provides easily interpretable results, which are important for understanding the learned structures. It outperforms the state-of-the-art on several challenging datasets and is more accurate for forecasting future activities in videos. We plan to open-source the code.
Chomsky grammars @cite_2 @cite_3 are designed to represent functional linguistic relationships. They have found wide application in defining programming languages, in natural language understanding, and in the understanding of images and videos.
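As a minimal operational picture of a regular (right-linear) grammar, the toy production rules below, which are invented for illustration and unrelated to the grammars the proposed algorithm learns, can be used to sample terminal sequences by repeatedly expanding the current non-terminal:

import random

# Toy right-linear grammar: each non-terminal maps to (terminal, next non-terminal or None) choices.
PRODUCTIONS = {
    "S": [("walk", "S"), ("sit", "A"), ("walk", None)],
    "A": [("stand", "S"), ("sit", None)],
}

def sample(grammar, start="S", max_steps=20, rng=random.Random(0)):
    """Sample one terminal sequence; the shared rng default keeps runs deterministic."""
    symbol, out = start, []
    for _ in range(max_steps):
        terminal, nxt = rng.choice(grammar[symbol])
        out.append(terminal)
        if nxt is None:
            break
        symbol = nxt
    return out

for _ in range(3):
    print(" ".join(sample(PRODUCTIONS)))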
{ "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "2115912164", "2279996368", "1568155371", "877909479" ], "abstract": [ "The first part of this article gives a brief overview of the four levels of the Chomsky hierarchy, with a special emphasis on context-free and regular languages. It then recapitulates the arguments why neither regular nor context-free grammar is sufficiently expressive to capture all phenomena in the natural language syntax. In the second part, two refinements of the Chomsky hierarchy are reviewed, which are both relevant to the extant research in cognitive science: the mildly context-sensitive languages (which are located between context-free and context-sensitive languages), and the sub-regular hierarchy (which distinguishes several levels of complexity within the class of regular languages).", "We study disambiguating of pronoun references in Winograd Schemas, which are part of the Winograd Schema Challenge, a proposed replacement for the Turing test. In particular we consider sentences where the pronoun can be resolved to both antecedents without semantic violations in world knowledge, that means for both readings of the sentence there is a possible consistent world. Nevertheless humans will strongly prefer one answer, which can be explained by pragmatic effects described in Relevance Theory. We state formal optimization criteria based on principles of Relevance Theory in a simplification of Roger Schank's graph framework for natural language understanding. We perform experiments using Answer Set Programming and report the usefulness of our criteria for disambiguation and their sensitivity to parameter variations.", "Combinatory Categorial Grammar (CCG) offers a new approach to the theory of natural language grammar. Coordination, relativization, and related prosodic phenomena have been analyzed in CCG in terms of a radically revised notion of surface structure. CCG surface structures do not exhibit traditional notions of syntactic dominance and command, and do not constitute an autonomous level of representation. Instead, they reflect the computations by which a sentence may be realized or analyzed, to synchronously define a predicate-argument structure, or logical form. Surface Structure and Interpretation shows that binding and control can be captured at this level, preserving the advantages of CCG as an account of coordination and unbounded dependency.The core of the book is a detailed treatment of extraction, a focus of syntactic research since the early work of Chomsky and Ross. The topics addressed include the sources of subject-object asymmetries and phenomena attributed to the Empty Category Principle (ECP), asymmetric islands, parasitic gaps, and the relation of coordination and extraction, including their interactions with binding theory. In his conclusion, the author relates CCG to other categorial and type-driven approaches and to proposals for minimalism in linguistic theory.Linguistic Inquiry Monograph No. 30", "Recently, joint video-language modeling has been attracting more and more attention. However, most existing approaches focus on exploring the language model upon on a fixed visual model. In this paper, we propose a unified framework that jointly models video and the corresponding text sentences. The framework consists of three parts: a compositional semantics language model, a deep video model and a joint embedding model. 
In our language model, we propose a dependency-tree structure model that embeds sentence into a continuous vector space, which preserves visually grounded meanings and word order. In the visual model, we leverage deep neural networks to capture essential semantic information from videos. In the joint embedding model, we minimize the distance of the outputs of the deep video model and compositional language model in the joint space, and update these two models jointly. Based on these three parts, our system is able to accomplish three tasks: 1) natural language generation, and 2) video retrieval and 3) language retrieval. In the experiments, the results show our approach outperforms SVM, CRF and CCA baselines in predicting Subject-Verb-Object triplet and natural sentence generation, and is better than CCA in video retrieval and language retrieval tasks." ] }
1902.00617
2949537635
In this paper, we address the problem of searching for semantically similar images from a large database. We present a compact coding approach, supervised quantization. Our approach simultaneously learns feature selection that linearly transforms the database points into a low-dimensional discriminative subspace, and quantizes the data points in the transformed space. The optimization criterion is that the quantized points not only approximate the transformed points accurately, but also are semantically separable: the points belonging to a class lie in a cluster that is not overlapped with other clusters corresponding to other classes, which is formulated as a classification problem. The experiments on several standard datasets show the superiority of our approach over the state-of-the art supervised hashing and unsupervised quantization algorithms.
Pairwise similarity preserving hashing aligns, in various manners, the similarity computed over the hash codes of each pair of items with their semantic similarity. Representative algorithms include LDA Hashing @cite_23 , minimal loss hashing @cite_44 , binary reconstructive embedding @cite_27 , supervised hashing with kernels @cite_51 , two-step hashing @cite_40 , FastHash @cite_36 , and so on. The recent work on supervised deep hashing @cite_11 designs deep neural networks as hash functions to seek multiple hierarchical non-linear feature transformations, and preserves the pairwise semantic similarity by maximizing the inter-class variations and minimizing the intra-class variations of the hash codes.
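One common way to express such a pairwise objective (the exact loss differs per method, so this is only an illustrative sketch in the spirit of kernel-based supervised hashing) is to ask the inner products of b-bit codes to match a semantic similarity matrix with entries in {-1, +1}:

import numpy as np

def pairwise_hash_loss(B, S):
    """Frobenius-norm gap between scaled code inner products and the similarity matrix.

    B: n x b matrix of codes in {-1, +1};  S: n x n semantic similarity in {-1, +1}.
    """
    b = B.shape[1]
    return np.linalg.norm(B @ B.T / b - S, ord="fro") ** 2

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=8)                      # toy class labels
S = np.where(labels[:, None] == labels[None, :], 1, -1)  # semantic similarity matrix
B = np.sign(rng.normal(size=(8, 16)))                    # random 16-bit codes (no learning here)
print(f"pairwise loss of random codes: {pairwise_hash_loss(B, S):.2f}")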
{ "cite_N": [ "@cite_36", "@cite_44", "@cite_40", "@cite_27", "@cite_23", "@cite_51", "@cite_11" ], "mid": [ "2791396492", "2531549126", "1939575207", "2086958058" ], "abstract": [ "Deep supervised hashing has emerged as an influential solution to large-scale semantic image retrieval problems in computer vision. In the light of recent progress, convolutional neural network based hashing methods typically seek pair-wise or triplet labels to conduct the similarity preserving learning. However, complex semantic concepts of visual contents are hard to capture by similar dissimilar labels, which limits the retrieval performance. Generally, pair-wise or triplet losses not only suffer from expensive training costs but also lack in extracting sufficient semantic information. In this regard, we propose a novel deep supervised hashing model to learn more compact class-level similarity preserving binary codes. Our deep learning based model is motivated by deep metric learning that directly takes semantic labels as supervised information in training and generates corresponding discriminant hashing code. Specifically, a novel cubic constraint loss function based on Gaussian distribution is proposed, which preserves semantic variations while penalizes the overlap part of different classes in the embedding space. To address the discrete optimization problem introduced by binary codes, a two-step optimization strategy is proposed to provide efficient training and avoid the problem of gradient vanishing. Extensive experiments on four large-scale benchmark databases show that our model can achieve the state-of-the-art retrieval performance. Moreover, when training samples are limited, our method surpasses other supervised deep hashing methods with non-negligible margins.", "Hashing techniques have been intensively investigated for large scale vision applications. Recent research has shown that leveraging supervised information can lead to high quality hashing. However, most existing supervised hashing methods only construct similarity-preserving hash codes. Observing that semantic structures carry complementary information, we propose the idea of cotraining for hashing, by jointly learning projections from image representations to hash codes and classification. Specifically, a novel deep semantic-preserving and ranking-based hashing (DSRH) architecture is presented, which consists of three components: a deep CNN for learning image representations, a hash stream of a binary mapping layer by evenly dividing the learnt representations into multiple bags and encoding each bag into one hash bit, and a classification stream. Mean-while, our model is learnt under two constraints at the top loss layer of hash stream: a triplet ranking loss and orthogonality constraint. The former aims to preserve the relative similarity ordering in the triplets, while the latter makes different hash bit as independent as possible. We have conducted experiments on CIFAR-10 and NUS-WIDE image benchmarks, demonstrating that our approach can provide superior image search accuracy than other state-of-the-art hashing techniques.", "Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. For most existing hashing methods, an image is first encoded as a vector of hand-engineering visual features, followed by another separate projection or quantization step that generates binary codes. 
However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hashing codes. In this paper, we propose a deep architecture for supervised hashing, in which images are mapped into binary codes via carefully designed deep neural networks. The pipeline of the proposed deep architecture consists of three building blocks: 1) a sub-network with a stack of convolution layers to produce the effective intermediate image features; 2) a divide-and-encode module to divide the intermediate image features into multiple branches, each encoded into one hash bit; and 3) a triplet ranking loss designed to characterize that one image is more similar to the second image than to the third one. Extensive evaluations on several benchmark image datasets show that the proposed simultaneous feature learning and hash coding pipeline brings substantial improvements over other state-of-the-art supervised or unsupervised hashing methods.", "Similarity search methods based on hashing for effective and efficient cross-modal retrieval on large-scale multimedia databases with massive text and images have attracted considerable attention. The core problem of cross-modal hashing is how to effectively construct correlation between multi-modal representations which are heterogeneous intrinsically in the process of hash function learning. Analogous to Canonical Correlation Analysis (CCA), most existing cross-modal hash methods embed the heterogeneous data into a joint abstraction space by linear projections. However, these methods fail to bridge the semantic gap more effectively, and capture high-level latent semantic information which has been proved that it can lead to better performance for image retrieval. To address these challenges, in this paper, we propose a novel Latent Semantic Sparse Hashing (LSSH) to perform cross-modal similarity search by employing Sparse Coding and Matrix Factorization. In particular, LSSH uses Sparse Coding to capture the salient structures of images, and Matrix Factorization to learn the latent concepts from text. Then the learned latent semantic features are mapped to a joint abstraction space. Moreover, an iterative strategy is applied to derive optimal solutions efficiently, and it helps LSSH to explore the correlation between multi-modal representations efficiently and automatically. Finally, the unified hashcodes are generated through the high level abstraction space by quantization. Extensive experiments on three different datasets highlight the advantage of our method under cross-modal scenarios and show that LSSH significantly outperforms several state-of-the-art methods." ] }
1902.00617
2949537635
In this paper, we address the problem of searching for semantically similar images from a large database. We present a compact coding approach, supervised quantization. Our approach simultaneously learns feature selection that linearly transforms the database points into a low-dimensional discriminative subspace, and quantizes the data points in the transformed space. The optimization criterion is that the quantized points not only approximate the transformed points accurately, but also are semantically separable: the points belonging to a class lie in a cluster that is not overlapped with other clusters corresponding to other classes, which is formulated as a classification problem. The experiments on several standard datasets show the superiority of our approach over the state-of-the art supervised hashing and unsupervised quantization algorithms.
Multiwise similarity preserving hashing formulates the problem as maximizing the agreement between the similarity orders, computed over more than two items, in the input space and in the coding space. Representative algorithms include order preserving hashing @cite_1 , which directly aligns the rank orders computed from the input space and the coding space, triplet loss hashing @cite_30 , listwise supervision hashing @cite_3 , and so on. Triplet loss hashing and listwise supervision hashing adopt different loss functions to align the similarity order in the coding space with the semantic similarity over triplets of items. The recently proposed deep semantic ranking-based method @cite_38 preserves multilevel semantic similarity between multilabel images by jointly learning feature representations and the mappings from them to hash codes.
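For illustration (margins, relaxations, and distance choices differ across the cited methods), a triplet ranking loss over relaxed hash codes can require an anchor to be closer to a semantically similar item than to a dissimilar one by a margin:

import numpy as np

def triplet_hash_loss(h_anchor, h_pos, h_neg, margin=2.0):
    """Hinge loss encouraging d(anchor, pos) + margin <= d(anchor, neg).

    Inputs are real-valued relaxations of binary codes; squared Euclidean distance
    on {-1, +1} codes is proportional to Hamming distance.
    """
    d_pos = np.sum((h_anchor - h_pos) ** 2, axis=-1)
    d_neg = np.sum((h_anchor - h_neg) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

rng = np.random.default_rng(1)
anchor = np.tanh(rng.normal(size=(4, 16)))                 # 4 toy triplets of 16-bit relaxed codes
pos = np.tanh(anchor + 0.1 * rng.normal(size=(4, 16)))     # similar items near the anchors
neg = np.tanh(rng.normal(size=(4, 16)))                    # dissimilar items
print(f"triplet loss: {triplet_hash_loss(anchor, pos, neg):.3f}")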
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_1", "@cite_3" ], "mid": [ "2086958058", "1939575207", "2791396492", "1923967535" ], "abstract": [ "Similarity search methods based on hashing for effective and efficient cross-modal retrieval on large-scale multimedia databases with massive text and images have attracted considerable attention. The core problem of cross-modal hashing is how to effectively construct correlation between multi-modal representations which are heterogeneous intrinsically in the process of hash function learning. Analogous to Canonical Correlation Analysis (CCA), most existing cross-modal hash methods embed the heterogeneous data into a joint abstraction space by linear projections. However, these methods fail to bridge the semantic gap more effectively, and capture high-level latent semantic information which has been proved that it can lead to better performance for image retrieval. To address these challenges, in this paper, we propose a novel Latent Semantic Sparse Hashing (LSSH) to perform cross-modal similarity search by employing Sparse Coding and Matrix Factorization. In particular, LSSH uses Sparse Coding to capture the salient structures of images, and Matrix Factorization to learn the latent concepts from text. Then the learned latent semantic features are mapped to a joint abstraction space. Moreover, an iterative strategy is applied to derive optimal solutions efficiently, and it helps LSSH to explore the correlation between multi-modal representations efficiently and automatically. Finally, the unified hashcodes are generated through the high level abstraction space by quantization. Extensive experiments on three different datasets highlight the advantage of our method under cross-modal scenarios and show that LSSH significantly outperforms several state-of-the-art methods.", "Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. For most existing hashing methods, an image is first encoded as a vector of hand-engineering visual features, followed by another separate projection or quantization step that generates binary codes. However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hashing codes. In this paper, we propose a deep architecture for supervised hashing, in which images are mapped into binary codes via carefully designed deep neural networks. The pipeline of the proposed deep architecture consists of three building blocks: 1) a sub-network with a stack of convolution layers to produce the effective intermediate image features; 2) a divide-and-encode module to divide the intermediate image features into multiple branches, each encoded into one hash bit; and 3) a triplet ranking loss designed to characterize that one image is more similar to the second image than to the third one. Extensive evaluations on several benchmark image datasets show that the proposed simultaneous feature learning and hash coding pipeline brings substantial improvements over other state-of-the-art supervised or unsupervised hashing methods.", "Deep supervised hashing has emerged as an influential solution to large-scale semantic image retrieval problems in computer vision. In the light of recent progress, convolutional neural network based hashing methods typically seek pair-wise or triplet labels to conduct the similarity preserving learning. 
However, complex semantic concepts of visual contents are hard to capture by similar dissimilar labels, which limits the retrieval performance. Generally, pair-wise or triplet losses not only suffer from expensive training costs but also lack in extracting sufficient semantic information. In this regard, we propose a novel deep supervised hashing model to learn more compact class-level similarity preserving binary codes. Our deep learning based model is motivated by deep metric learning that directly takes semantic labels as supervised information in training and generates corresponding discriminant hashing code. Specifically, a novel cubic constraint loss function based on Gaussian distribution is proposed, which preserves semantic variations while penalizes the overlap part of different classes in the embedding space. To address the discrete optimization problem introduced by binary codes, a two-step optimization strategy is proposed to provide efficient training and avoid the problem of gradient vanishing. Extensive experiments on four large-scale benchmark databases show that our model can achieve the state-of-the-art retrieval performance. Moreover, when training samples are limited, our method surpasses other supervised deep hashing methods with non-negligible margins.", "With the rapid growth of web images, hashing has received increasing interests in large scale image retrieval. Research efforts have been devoted to learning compact binary codes that preserve semantic similarity based on labels. However, most of these hashing methods are designed to handle simple binary similarity. The complex multi-level semantic structure of images associated with multiple labels have not yet been well explored. Here we propose a deep semantic ranking based method for learning hash functions that preserve multilevel semantic similarity between multi-label images. In our approach, deep convolutional neural network is incorporated into hash functions to jointly learn feature representations and mappings from them to hash codes, which avoids the limitation of semantic representation power of hand-crafted features. Meanwhile, a ranking list that encodes the multilevel similarity information is employed to guide the learning of such deep hash functions. An effective scheme based on surrogate loss is used to solve the intractable optimization problem of nonsmooth and multivariate ranking measures involved in the learning procedure. Experimental results show the superiority of our proposed approach over several state-of-the-art hashing methods in term of ranking evaluation metrics when tested on multi-label image datasets." ] }
1902.00617
2949537635
In this paper, we address the problem of searching for semantically similar images from a large database. We present a compact coding approach, supervised quantization. Our approach simultaneously learns feature selection that linearly transforms the database points into a low-dimensional discriminative subspace, and quantizes the data points in the transformed space. The optimization criterion is that the quantized points not only approximate the transformed points accurately, but also are semantically separable: the points belonging to a class lie in a cluster that is not overlapped with other clusters corresponding to other classes, which is formulated as a classification problem. The experiments on several standard datasets show the superiority of our approach over the state-of-the art supervised hashing and unsupervised quantization algorithms.
The recently developed supervised discrete hashing (SDH) algorithm @cite_25 formulates the problem with the criterion that the classification performance over the learned binary codes should be as good as possible. This criterion may seem weaker than pairwise and multiwise similarity preserving, yet it yields superior search performance, mainly thanks to its optimization algorithm (the binary codes are optimized directly) and its scalability (it does not require the sampling used in most pairwise and multiwise similarity preserving algorithms). Semantic separability in our approach, whose goal is that the points belonging to a class lie in a cluster that does not overlap with the clusters corresponding to other classes, is likewise formulated as a classification problem, which can also be optimized over all the data points.
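For reference, the classification-driven formulation used by SDH can be written, up to notation that may differ from the cited paper, roughly as the following objective (a sketch meant to make the "classification over binary codes" rule concrete):

\[
\min_{B,\,W,\,F}\; \|Y - W^{\top} B\|_F^2 \;+\; \lambda \|W\|_F^2 \;+\; \nu \sum_{i=1}^{n} \big\| b_i - F(x_i) \big\|_2^2
\quad \text{s.t.}\quad B = [b_1,\dots,b_n] \in \{-1,+1\}^{L \times n},
\]

where Y stacks the label vectors, W is a linear classifier acting on the L-bit codes B, and F is the hash function. The binary matrix B is optimized directly, bit by bit via cyclic coordinate descent, which is what the paragraph above refers to as directly optimizing the binary codes without sampling pairs or triplets.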
{ "cite_N": [ "@cite_25" ], "mid": [ "1956333070", "2473499128", "1910300841", "2791396492" ], "abstract": [ "In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for large scale visual search. Unlike most existing binary codes learning methods which seek a single linear projection to map each sample into a binary vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the nonlinear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the deep network: 1) the loss between the original real-valued feature descriptor and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) by including one discriminative term into the objective function of DH which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes. Experimental results show the superiority of the proposed approach over the state-of-the-arts.", "In this paper, we address the problem of searching for semantically similar images from a large database. We present a compact coding approach, supervised quantization. Our approach simultaneously learns feature selection that linearly transforms the database points into a low-dimensional discriminative subspace, and quantizes the data points in the transformed space. The optimization criterion is that the quantized points not only approximate the transformed points accurately, but also are semantically separable: the points belonging to a class lie in a cluster that is not overlapped with other clusters corresponding to other classes, which is formulated as a classification problem. The experiments on several standard datasets show the superiority of our approach over the state-of-the art supervised hashing and unsupervised quantization algorithms.", "Recently, learning based hashing techniques have attracted broad research interests because they can support efficient storage and retrieval for high-dimensional data such as images, videos, documents, etc. However, a major difficulty of learning to hash lies in handling the discrete constraints imposed on the pursued hash codes, which typically makes hash optimizations very challenging (NP-hard in general). In this work, we propose a new supervised hashing framework, where the learning objective is to generate the optimal binary hash codes for linear classification. By introducing an auxiliary variable, we reformulate the objective such that it can be solved substantially efficiently by employing a regularization algorithm. One of the key steps in this algorithm is to solve a regularization sub-problem associated with the NP-hard binary optimization. We show that the sub-problem admits an analytical solution via cyclic coordinate descent. As such, a high-quality discrete solution can eventually be obtained in an efficient computing manner, therefore enabling to tackle massive datasets. 
We evaluate the proposed approach, dubbed Supervised Discrete Hashing (SDH), on four large image datasets and demonstrate its superiority to the state-of-the-art hashing methods in large-scale image retrieval.", "Deep supervised hashing has emerged as an influential solution to large-scale semantic image retrieval problems in computer vision. In the light of recent progress, convolutional neural network based hashing methods typically seek pair-wise or triplet labels to conduct the similarity preserving learning. However, complex semantic concepts of visual contents are hard to capture by similar dissimilar labels, which limits the retrieval performance. Generally, pair-wise or triplet losses not only suffer from expensive training costs but also lack in extracting sufficient semantic information. In this regard, we propose a novel deep supervised hashing model to learn more compact class-level similarity preserving binary codes. Our deep learning based model is motivated by deep metric learning that directly takes semantic labels as supervised information in training and generates corresponding discriminant hashing code. Specifically, a novel cubic constraint loss function based on Gaussian distribution is proposed, which preserves semantic variations while penalizes the overlap part of different classes in the embedding space. To address the discrete optimization problem introduced by binary codes, a two-step optimization strategy is proposed to provide efficient training and avoid the problem of gradient vanishing. Extensive experiments on four large-scale benchmark databases show that our model can achieve the state-of-the-art retrieval performance. Moreover, when training samples are limited, our method surpasses other supervised deep hashing methods with non-negligible margins." ] }
1902.00617
2949537635
In this paper, we address the problem of searching for semantically similar images from a large database. We present a compact coding approach, supervised quantization. Our approach simultaneously learns feature selection that linearly transforms the database points into a low-dimensional discriminative subspace, and quantizes the data points in the transformed space. The optimization criterion is that the quantized points not only approximate the transformed points accurately, but also are semantically separable: the points belonging to a class lie in a cluster that is not overlapped with other clusters corresponding to other classes, which is formulated as a classification problem. The experiments on several standard datasets show the superiority of our approach over the state-of-the art supervised hashing and unsupervised quantization algorithms.
Our approach is a supervised version of quantization. The quantizer we adopt is composite quantization @cite_15 , which has been shown to generalize product quantization @cite_39 and Cartesian k-means @cite_34 and to achieve better performance. Rather than performing the quantization in the input space, our approach conducts the quantization in a discriminative space that is jointly learned with the composite quantizer.
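Since composite quantization generalizes product quantization, the simpler product-quantization special case gives a useful mental model: split the feature dimensions into blocks, learn one small codebook per block with k-means, and encode each vector by its nearest codeword per block. The sketch below is illustrative (function names and default sizes are our own choices) and does not implement the joint codebook learning of composite quantization or the discriminative subspace of our approach.

import numpy as np

def _kmeans(X, k, iters=20, rng=None):
    """Plain Lloyd's k-means, used here as the per-subspace quantizer."""
    rng = rng or np.random.default_rng(0)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        for j in range(k):
            members = X[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

def train_pq(X, n_subspaces=4, n_codewords=16):
    """Split the feature dimensions into equal blocks and learn one codebook
    per block -- the product-quantization special case of a composite quantizer."""
    n, d = X.shape
    assert d % n_subspaces == 0
    ds = d // n_subspaces
    return [_kmeans(X[:, m * ds:(m + 1) * ds], n_codewords)
            for m in range(n_subspaces)]

def encode_pq(X, codebooks):
    """Encode each vector as one codeword index per subspace."""
    ds = codebooks[0].shape[1]
    codes = []
    for m, cb in enumerate(codebooks):
        sub = X[:, m * ds:(m + 1) * ds]
        d2 = ((sub[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        codes.append(d2.argmin(axis=1))
    return np.stack(codes, axis=1)          # shape (n, n_subspaces)

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
codebooks = train_pq(X)
print(encode_pq(X[:5], codebooks))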
{ "cite_N": [ "@cite_15", "@cite_34", "@cite_39" ], "mid": [ "2418260908", "2077815765", "2111006384", "2109053700" ], "abstract": [ "Unlike safety properties which require the absence of a “bad” program trace, k-safety properties stipulate the absence of a “bad” interaction between k traces. Examples of k-safety properties include transitivity, associativity, anti-symmetry, and monotonicity. This paper presents a sound and relatively complete calculus, called Cartesian Hoare Logic (CHL), for verifying k-safety properties. We also present an automated verification algorithm based on CHL and implement it in a tool called DESCARTES. We use DESCARTES to analyze user-defined relational operators in Java and demonstrate that DESCARTES is effective at verifying (or finding violations of) multiple k-safety properties.", "Product quantization (PQ) is an effective vector quantization method. A product quantizer can generate an exponentially large codebook at very low memory time cost. The essence of PQ is to decompose the high-dimensional vector space into the Cartesian product of subspaces and then quantize these subspaces separately. The optimal space decomposition is important for the PQ performance, but still remains an unaddressed issue. In this paper, we optimize PQ by minimizing quantization distortions w.r.t the space decomposition and the quantization codebooks. We present two novel solutions to this challenging optimization problem. The first solution iteratively solves two simpler sub-problems. The second solution is based on a Gaussian assumption and provides theoretical analysis of the optimality. We evaluate our optimized product quantizers in three applications: (i) compact encoding for exhaustive ranking [1], (ii) building inverted multi-indexing for non-exhaustive search [2], and (iii) compacting image representations for image retrieval [3]. In all applications our optimized product quantizers outperform existing solutions.", "Product quantization is an effective vector quantization approach to compactly encode high-dimensional vectors for fast approximate nearest neighbor (ANN) search. The essence of product quantization is to decompose the original high-dimensional space into the Cartesian product of a finite number of low-dimensional subspaces that are then quantized separately. Optimal space decomposition is important for the performance of ANN search, but still remains unaddressed. In this paper, we optimize product quantization by minimizing quantization distortions w.r.t. the space decomposition and the quantization codebooks. We present two novel methods for optimization: a non-parametric method that alternatively solves two smaller sub-problems, and a parametric method that is guaranteed to achieve the optimal solution if the input data follows some Gaussian distribution. We show by experiments that our optimized approach substantially improves the accuracy of product quantization for ANN search.", "Consider a pair of correlated Gaussian sources (X 1,X 2). Two separate encoders observe the two components and communicate compressed versions of their observations to a common decoder. The decoder is interested in reconstructing a linear combination of X 1 and X 2 to within a mean-square distortion of D. We obtain an inner bound to the optimal rate-distortion region for this problem. A portion of this inner bound is achieved by a scheme that reconstructs the linear function directly rather than reconstructing the individual components X 1 and X 2 first. 
This results in a better rate region for certain parameter values. Our coding scheme relies on lattice coding techniques in contrast to more prevalent random coding arguments used to demonstrate achievable rate regions in information theory. We then consider the case of linear reconstruction of K sources and provide an inner bound to the optimal rate-distortion region. Some parts of the inner bound are achieved using the following coding structure: lattice vector quantization followed by “correlated” lattice-structured binning." ] }
1902.00491
2914691682
We present DANTE, a novel method for training neural networks using the alternating minimization principle. DANTE provides an alternate perspective to traditional gradient-based backpropagation techniques commonly used to train deep networks. It utilizes an adaptation of quasi-convexity to cast training a neural network as a bi-quasi-convex optimization problem. We show that for neural network configurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. ReLU) activation functions, we can perform the alternations very effectively. DANTE can also be extended to networks with multiple hidden layers. In experiments on standard datasets, neural networks trained using the proposed method were found to be very promising and competitive to traditional backpropagation techniques, both in terms of quality of the solution, as well as training speed.
In recent years, Taylor et al. proposed a method to train neural networks using the Alternating Direction Method of Multipliers (ADMM) and Bregman iterations @cite_2 . The focus of that method, however, was on scaling neural network training to a distributed setting across multiple cores of a computing cluster. Jaderberg et al. proposed the idea of 'synthetic gradients' in @cite_11 ; while interesting, that work focuses on a more efficient way to carry out gradient-based parameter updates in a neural network. More recently, Jagatap and Hegde @cite_25 proposed a method to train single-hidden-layer ReLU networks using an alternating minimization technique. Unlike our method, it alternates between updating the weights and updating state variables that indicate which ReLU activations are on, and is therefore specific to ReLU activations.
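To illustrate the flavor of alternating minimization for single-hidden-layer ReLU networks described above, the toy sketch below freezes the ReLU on/off pattern implied by the current weights and then solves two least-squares problems per iteration; it is our own simplified rendering, not the cited authors' exact algorithm, initialization, or guarantees.

import numpy as np

def altmin_relu(X, y, k=8, iters=15, rng=None):
    """Toy alternating minimization for a one-hidden-layer ReLU regressor
    y ~= relu(X @ W) @ v.  Each iteration freezes the ReLU on/off pattern
    implied by the current W, then solves two least-squares problems
    (one for v, one for W) under that frozen pattern."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    W = 0.1 * rng.normal(size=(d, k))
    v = 0.1 * rng.normal(size=k)
    for _ in range(iters):
        mask = (X @ W > 0).astype(float)           # which ReLUs fire per sample
        A = mask * (X @ W)                         # equals relu(X @ W) under the frozen pattern
        v, *_ = np.linalg.lstsq(A, y, rcond=None)  # update output weights
        # with the pattern and v frozen, the model is linear in W:
        # y_i ~= sum_j v_j * mask_ij * (x_i @ w_j)
        design = np.concatenate(
            [v[j] * mask[:, j:j + 1] * X for j in range(k)], axis=1)  # (n, d*k)
        w_flat, *_ = np.linalg.lstsq(design, y, rcond=None)
        W = w_flat.reshape(k, d).T
    return W, v

# toy usage on synthetic data generated by a random ReLU network
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 10))
y = np.maximum(X @ rng.normal(size=(10, 8)), 0) @ rng.normal(size=8)
W_hat, v_hat = altmin_relu(X, y)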
{ "cite_N": [ "@cite_11", "@cite_25", "@cite_2" ], "mid": [ "2809670082", "2346438296", "2894604724", "2796802522" ], "abstract": [ "We propose and analyze a new family of algorithms for training neural networks with ReLU activations. Our algorithms are based on the technique of alternating minimization: estimating the activation patterns of each ReLU for all given samples, interleaved with weight updates via a least-squares step. The main focus of our paper are 1-hidden layer networks with @math hidden neurons and ReLU activation. We show that under standard distributional assumptions on the @math dimensional input data, our algorithm provably recovers the true ground truth' parameters in a linearly convergent fashion. This holds as long as the weights are sufficiently well initialized; furthermore, our method requires only @math samples. We also analyze the special case of 1-hidden layer networks with skipped connections, commonly used in ResNet-type architectures, and propose a novel initialization strategy for the same. For ReLU based ResNet type networks, we provide the first linear convergence guarantee with an end-to-end algorithm. We also extend this framework to deeper networks and empirically demonstrate its convergence to a global minimum.", "With the growing importance of large network models and enormous training datasets, GPUs have become increasingly necessary to train neural networks. This is largely because conventional optimization algorithms rely on stochastic gradient methods that don't scale well to large numbers of cores in a cluster setting. Furthermore, the convergence of all gradient methods, including batch methods, suffers from common problems like saturation effects, poor conditioning, and saddle points. This paper explores an unconventional training method that uses alternating direction methods and Bregman iteration to train networks without gradient descent steps. The proposed method reduces the network training problem to a sequence of minimization substeps that can each be solved globally in closed form. The proposed method is advantageous because it avoids many of the caveats that make gradient methods slow on highly nonconvex problems. The method exhibits strong scaling in the distributed setting, yielding linear speedups even when split over thousands of cores.", "One of the mysteries in the success of neural networks is randomly initialized first order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth. This paper demystifies this surprising phenomenon for two-layer fully connected ReLU activated neural networks. For an @math hidden node shallow neural network with ReLU activation and @math training data, we show as long as @math is large enough and no two inputs are parallel, randomly initialized gradient descent converges to a globally optimal solution at a linear convergence rate for the quadratic loss function. Our analysis relies on the following observation: over-parameterization and random initialization jointly restrict every weight vector to be close to its initialization for all iterations, which allows us to exploit a strong convexity-like property to show that gradient descent converges at a global linear rate to the global optimum. 
We believe these insights are also useful in analyzing deep models and other first order methods.", "As an integral component of blind image deblurring, non-blind deconvolution removes image blur with a given blur kernel, which is essential but difficult due to the ill-posed nature of the inverse problem. The predominant approach is based on optimization subject to regularization functions that are either manually designed, or learned from examples. Existing learning based methods have shown superior restoration quality but are not practical enough due to their restricted model design. They solely focus on learning a prior and require to know the noise level for deconvolution. We address the gap between the optimization-based and learning-based approaches by learning an optimizer. We propose a Recurrent Gradient Descent Network (RGDN) by systematically incorporating deep neural networks into a fully parameterized gradient descent scheme. A parameter-free update unit is used to generate updates from the current estimates, based on a convolutional neural network. By training on diverse examples, the Recurrent Gradient Descent Network learns an implicit image prior and a universal update rule through recursive supervision. Extensive experiments on synthetic benchmarks and challenging real-world images demonstrate that the proposed method is effective and robust to produce favorable results as well as practical for real-world image deblurring applications." ] }
1902.00491
2914691682
We present DANTE, a novel method for training neural networks using the alternating minimization principle. DANTE provides an alternate perspective to traditional gradient-based backpropagation techniques commonly used to train deep networks. It utilizes an adaptation of quasi-convexity to cast training a neural network as a bi-quasi-convex optimization problem. We show that for neural network configurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. ReLU) activation functions, we can perform the alternations very effectively. DANTE can also be extended to networks with multiple hidden layers. In experiments on standard datasets, neural networks trained using the proposed method were found to be very promising and competitive to traditional backpropagation techniques, both in terms of quality of the solution, as well as training speed.
In our work, we focus on an entirely new approach to training neural networks using alternating optimization and quasi-convexity (different from the aforementioned methods), and show that it yields promising results on a range of datasets. Although alternating minimization has found much appeal in areas such as matrix factorization ( @cite_22 ), to the best of our knowledge, this is one of the early efforts to use alternating principles to train feedforward neural networks effectively.
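As a point of comparison, the classic setting in which alternating minimization is well understood, matrix factorization, alternates two closed-form ridge-regression sub-problems. The minimal sketch below (fully observed matrix, our own parameter choices) shows that structure, which our approach adapts to neural network training.

import numpy as np

def als_matrix_factorization(M, rank=5, iters=30, lam=0.1, rng=None):
    """Alternating least squares for M ~= U @ V.T.  Each sub-problem
    (U with V fixed, V with U fixed) is a ridge regression with a
    closed-form solution, so every alternation step is tractable."""
    rng = rng or np.random.default_rng(0)
    m, n = M.shape
    U = rng.normal(size=(m, rank))
    V = rng.normal(size=(n, rank))
    I = lam * np.eye(rank)
    for _ in range(iters):
        U = M @ V @ np.linalg.inv(V.T @ V + I)   # solve for U with V fixed
        V = M.T @ U @ np.linalg.inv(U.T @ U + I) # solve for V with U fixed
    return U, V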
{ "cite_N": [ "@cite_22" ], "mid": [ "2146989110", "813605148", "2952489973", "2952436057" ], "abstract": [ "A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, neural network theory, and empirical evidence, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. Motivated by these arguments, we propose a new approach to second-order optimization, the saddle-free Newton method, that can rapidly escape high dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep or recurrent neural network training, and provide numerical evidence for its superior optimization performance.", "Techniques involving factorization are found in a wide range of applications and have enjoyed significant empirical success in many fields. However, common to a vast majority of these problems is the significant disadvantage that the associated optimization problems are typically non-convex due to a multilinear form or other convexity destroying transformation. Here we build on ideas from convex relaxations of matrix factorizations and present a very general framework which allows for the analysis of a wide range of non-convex factorization problems - including matrix factorization, tensor factorization, and deep neural network training formulations. We derive sufficient conditions to guarantee that a local minimum of the non-convex optimization problem is a global minimum and show that if the size of the factorized variables is large enough then from any initialization it is possible to find a global minimizer using a purely local descent algorithm. Our framework also provides a partial theoretical justification for the increasingly common use of Rectified Linear Units (ReLUs) in deep neural networks and offers guidance on deep network architectures and regularization strategies to facilitate efficient optimization.", "Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge. In the alternating minimization approach, the low-rank target matrix is written in a bi-linear form, i.e. @math ; the algorithm then alternates between finding the best @math and the best @math . Typically, each alternating step in isolation is convex and tractable. However the overall problem becomes non-convex and there has been almost no theoretical understanding of when this approach yields a good result. 
In this paper we present first theoretical analysis of the performance of alternating minimization for matrix completion, and the related problem of matrix sensing. For both these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a simpler analysis.", "Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks." ] }
1902.00491
2914691682
We present DANTE, a novel method for training neural networks using the alternating minimization principle. DANTE provides an alternate perspective to traditional gradient-based backpropagation techniques commonly used to train deep networks. It utilizes an adaptation of quasi-convexity to cast training a neural network as a bi-quasi-convex optimization problem. We show that for neural network configurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. ReLU) activation functions, we can perform the alternations very effectively. DANTE can also be extended to networks with multiple hidden layers. In experiments on standard datasets, neural networks trained using the proposed method were found to be very promising and competitive to traditional backpropagation techniques, both in terms of quality of the solution, as well as training speed.
Some other efforts have explored target-propagation-based methods, such as @cite_20 , Difference Target Propagation @cite_19 , and target propagation in a Bayesian setting @cite_12 . There are also efforts that use random feedback weights, such as feedback alignment @cite_13 and direct/indirect feedback alignment @cite_14 , where the weights used to propagate the error backward need not be symmetric with the weights used for forward propagation. We, however, do not focus on credit assignment in this work; one could view the proposed method as carrying out 'implicit' credit assignment using partial derivatives, but there is no explicitly defined model for credit assignment in our work.
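The core mechanic of feedback alignment is easy to state in code: the hidden-layer error signal is propagated through a fixed random matrix B instead of the transpose of the forward weights. The sketch below is a minimal two-layer example with a squared loss and sigmoid hidden units (our own simplifications), not the exact setup of the cited works.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedback_alignment_step(X, y, W1, W2, B, lr=0.1):
    """One training step of a two-layer net, X:(n,d), W1:(d,k), W2:(k,c),
    y:(n,c), where the hidden-layer error is propagated through a *fixed
    random* matrix B of shape (c, k) instead of W2.T."""
    H = sigmoid(X @ W1)              # forward pass
    y_hat = H @ W2
    e = y_hat - y                    # output error
    dW2 = H.T @ e / len(X)
    # backprop would use e @ W2.T here; feedback alignment uses e @ B
    delta_hidden = (e @ B) * H * (1 - H)
    dW1 = X.T @ delta_hidden / len(X)
    return W1 - lr * dW1, W2 - lr * dW2

# B is drawn once and kept fixed for the whole of training, e.g.
# rng = np.random.default_rng(0); B = rng.normal(size=(n_outputs, n_hidden))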
{ "cite_N": [ "@cite_14", "@cite_19", "@cite_20", "@cite_13", "@cite_12" ], "mid": [ "1855112655", "2964115671", "2139919528", "2949405272" ], "abstract": [ "Back-propagation has been the workhorse of recent successes of deep learning but it relies on infinitesimal effects (partial derivatives) in order to perform credit assignment. This could become a serious issue as one considers deeper and more non-linear functions, e.g., consider the extreme case of non-linearity where the relation between parameters and cost is actually discrete. Inspired by the biological implausibility of back-propagation, a few approaches have been proposed in the past that could play a similar credit assignment role. In this spirit, we explore a novel approach to credit assignment in deep networks that we call target propagation. The main idea is to compute targets rather than gradients, at each layer. Like gradients, they are propagated backwards. In a way that is related but different from previously proposed proxies for back-propagation which rely on a backwards network with symmetric weights, target propagation relies on auto-encoders at each layer. Unlike back-propagation, it can be applied even when units exchange stochastic bits rather than real numbers. We show that a linear correction for the imperfectness of the auto-encoders, called difference target propagation, is very effective to make target propagation actually work, leading to results comparable to back-propagation for deep networks with discrete and continuous units and denoising auto-encoders and achieving state of the art for stochastic networks.", "Artificial neural networks are most commonly trained with the back-propagation algorithm, where the gradient for learning is provided by back-propagating the error, layer by layer, from the output layer to the hidden layers. A recently discovered method called feedback-alignment shows that the weights used for propagating the error backward don't have to be symmetric with the weights used for propagation the activation forward. In fact, random feedback weights work evenly well, because the network learns how to make the feedback useful. In this work, the feedback alignment principle is used for training hidden layers more independently from the rest of the network, and from a zero initial condition. The error is propagated through fixed random feedback connections directly from the output layer to each hidden layer. This simple method is able to achieve zero training error even in convolutional networks and very deep networks, completely without error back-propagation. The method is a step towards biologically plausible machine learning because the error signal is almost local, and no symmetric or reciprocal weights are required. Experiments show that the test performance on MNIST and CIFAR is almost as good as those obtained with back-propagation for fully connected networks. If combined with dropout, the method achieves 1.45 error on the permutation invariant MNIST task.", "Experimental results show that certain message passing algorithms, namely, survey propagation, are very effective in finding satisfying assignments in random satisfiable 3CNF formulas. In this paper we make a modest step towards providing rigorous analysis that proves the effectiveness of message passing algorithms for random 3SAT. We analyze the performance of Warning Propagation, a popular message passing algorithm that is simpler than survey propagation. 
We show that for 3CNF formulas generated under the planted assignment distribution, running warning propagation in the standard way works when the clause-to-variable ratio is a sufficiently large constant. We are not aware of previous rigorous analysis of message passing algorithms for satisfiability instances, though such analysis was performed for decoding of Low Density Parity Check (LDPC) Codes. We discuss some of the differences between results for the LDPC setting and our results.", "Gradient backpropagation (BP) requires symmetric feedforward and feedback connections -- the same weights must be used for forward and backward passes. This \"weight transport problem\" (Grossberg 1987) is thought to be one of the main reasons to doubt BP's biologically plausibility. Using 15 different classification datasets, we systematically investigate to what extent BP really depends on weight symmetry. In a study that turned out to be surprisingly similar in spirit to 's demonstration ( 2014) but orthogonal in its results, our experiments indicate that: (1) the magnitudes of feedback weights do not matter to performance (2) the signs of feedback weights do matter -- the more concordant signs between feedforward and their corresponding feedback connections, the better (3) with feedback weights having random magnitudes and 100 concordant signs, we were able to achieve the same or even better performance than SGD. (4) some normalizations stabilizations are indispensable for such asymmetric BP to work, namely Batch Normalization (BN) (Ioffe and Szegedy 2015) and or a \"Batch Manhattan\" (BM) update rule." ] }
1907.13615
2965843368
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed humans and thus do not capture the complexity of dressed humans in common images and videos. To address this, we learn a generative 3D mesh model of clothing from 3D scans of people with varying pose. Going beyond previous work, our generative model is conditioned on different clothing types, giving the ability to dress different body shapes in a variety of clothing. To do so, we train a conditional Mesh-VAE-GAN on clothing displacements from a 3D SMPL body model. This generative clothing model enables us to sample various types of clothing, in novel poses, on top of SMPL. With a focus on clothing geometry, the model captures both global shape and local structure, effectively extending the SMPL model to add clothing. To our knowledge, this is the first conditional VAE-GAN that works on 3D meshes. For clothing specifically, it is the first such model that directly dresses 3D human body meshes and generalizes to different poses.
Models for clothing and clothed humans. There is a large literature on clothing simulation @cite_12 , which is beyond our scope. Traditional physics simulation requires significant artist or designer involvement and is not practical in the inner loop of a deep learning framework. The most relevant work has focused on computing off-line physics-based simulations and learning efficient data-driven approximations from them @cite_9 @cite_13 @cite_38 @cite_44 @cite_52 @cite_14 . DRAPE @cite_9 learns a model of clothing that allows changing the pose and shape, and Wang et al. @cite_52 allow manipulation of clothing with sketches. Learning from physics simulation is convenient because many examples can be synthesized, they are noise free, and the registration and the factorization of shape into pose, body shape, and clothing are already given. However, models learned from synthetic data often look unrealistic @cite_9 . Learning from real scans of people in motion is difficult, but it opens the door to modeling more realism and diversity.
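A deliberately simple stand-in for the "learn a cheap data-driven approximation from off-line simulations" idea is a ridge regression from pose parameters to flattened per-vertex cloth offsets produced by a simulator. The cited models are far richer (nonlinear, dynamic, shape-dependent), so the sketch below only illustrates the input/output structure, with names and the regularizer chosen by us.

import numpy as np

def fit_pose_to_displacement(poses, displacements, lam=1e-3):
    """Fit a ridge-regularized linear map from pose parameters to per-vertex
    clothing offsets.  poses: (n, p) pose vectors; displacements: (n, 3V)
    flattened cloth offsets obtained from off-line simulation."""
    P = np.hstack([poses, np.ones((len(poses), 1))])   # add bias column
    A = P.T @ P + lam * np.eye(P.shape[1])
    W = np.linalg.solve(A, P.T @ displacements)        # (p+1, 3V)
    return W

def predict_displacement(W, pose):
    """Predict a flattened (3V,) offset vector for a new pose."""
    return np.append(pose, 1.0) @ W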
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_9", "@cite_52", "@cite_44", "@cite_13", "@cite_12" ], "mid": [ "1967494143", "1978216348", "2920928264", "2886416285" ], "abstract": [ "We present a technique for learning clothing models that enables the simultaneous animation of thousands of detailed garments in real-time. This surprisingly simple conditional model learns and preserves the key dynamic properties of a cloth motion along with folding details. Our approach requires no a priori physical model, but rather treats training data as a \"black box.\" We show that the models learned with our method are stable over large time-steps and can approximately resolve cloth-body collisions. We also show that within a class of methods, no simpler model covers the full range of cloth dynamics captured by ours. Our method bridges the current gap between skinning and physical simulation, combining benefits of speed from the former with dynamic effects from the latter. We demonstrate our approach on a variety of apparel worn by male and female human characters performing a varied set of motions typically used in video games (e.g., walking, running, jumping, etc.).", "We describe a complete system for animating realistic clothing on synthetic bodies of any shape and pose without manual intervention. The key component of the method is a model of clothing called DRAPE (DRessing Any PErson) that is learned from a physics-based simulation of clothing on bodies of different shapes and poses. The DRAPE model has the desirable property of \"factoring\" clothing deformations due to body shape from those due to pose variation. This factorization provides an approximation to the physical clothing deformation and greatly simplifies clothing synthesis. Given a parameterized model of the human body with known shape and pose parameters, we describe an algorithm that dresses the body with a garment that is customized to fit and possesses realistic wrinkles. DRAPE can be used to dress static bodies or animated sequences with a learned model of the cloth dynamics. Since the method is fully automated, it is appropriate for dressing large numbers of virtual characters of varying shape. The method is significantly more efficient than physical simulation.", "This paper presents a learning-based clothing animation method for highly efficient virtual try-on simulation. Given a garment, we preprocess a rich database of physically-based dressed character simulations, for multiple body shapes and animations. Then, using this database, we train a learning-based model of cloth drape and wrinkles, as a function of body shape and dynamics. We propose a model that separates global garment fit, due to body shape, from local garment wrinkles, due to both pose dynamics and body shape. We use a recurrent neural network to regress garment wrinkles, and we achieve highly plausible nonlinear effects, in contrast to the blending artifacts suffered by previous methods. At runtime, dynamic virtual try-on animations are produced in just a few milliseconds for garments with thousands of triangles. We show qualitative and quantitative analysis of results", "We present a novel method to generate accurate and realistic clothing deformation from real data capture. Previous methods for realistic cloth modeling mainly rely on intensive computation of physics-based simulation (with numerous heuristic parameters), while models reconstructed from visual observations typically suffer from lack of geometric details. 
Here, we propose an original framework consisting of two modules that work jointly to represent global shape deformation as well as surface details with high fidelity. Global shape deformations are recovered from a subspace model learned from 3D data of clothed people in motion, while high frequency details are added to normal maps created using a conditional Generative Adversarial Network whose architecture is designed to enforce realism and temporal consistency. This leads to unprecedented high-quality rendering of clothing deformation sequences, where fine wrinkles from (real) high resolution observations can be recovered. In addition, as the model is learned independently from body shape and pose, the framework is suitable for applications that require retargeting (e.g., body animation). Our experiments show original high quality results with a flexible model. We claim an entirely data-driven approach to realistic cloth wrinkle generation is possible." ] }
1907.13615
2965843368
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed humans and thus do not capture the complexity of dressed humans in common images and videos. To address this, we learn a generative 3D mesh model of clothing from 3D scans of people with varying pose. Going beyond previous work, our generative model is conditioned on different clothing types, giving the ability to dress different body shapes in a variety of clothing. To do so, we train a conditional Mesh-VAE-GAN on clothing displacements from a 3D SMPL body model. This generative clothing model enables us to sample various types of clothing, in novel poses, on top of SMPL. With a focus on clothing geometry, the model captures both global shape and local structure, effectively extending the SMPL model to add clothing. To our knowledge, this is the first conditional VAE-GAN that works on 3D meshes. For clothing specifically, it is the first such model that directly dresses 3D human body meshes and generalizes to different poses.
The first problem with real scans is to estimate the body shape and pose under clothing, which is required to model how garments deviate from the body. Typically, a single shape is optimized over multiple poses to fit inside multiple clothing silhouettes @cite_5 or a sequence of 3D scans @cite_65 @cite_31 . Here, we use a similar approach @cite_65 to factor a scan into a minimally clothed shape and a clothing layer. Pons-Moll et al. @cite_40 proposed ClothCap, a method to capture a sequence of dynamic scans, encode clothing as displacement from the SMPL body, and retarget it to new body shapes. Similar to ClothCap, Alldieck et al. @cite_73 @cite_41 represent clothing as an offset from the SMPL body to reconstruct people in clothing from images. Combining dynamic-fusion ideas @cite_48 with SMPL, cloth capture and retargeting have been demonstrated from a single depth camera @cite_55 @cite_63 . Still other work captures garments in isolation from multi-view input @cite_42 or from single images using a CNN @cite_21 . These are essentially reconstruction methods and cannot generate novel clothing. While a model of clothing is implicitly learned from scans in @cite_26 , it requires images as input.
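The "clothing as a displacement layer on the body" representation mentioned above can be sketched as adding per-vertex offsets to the body template and then posing the dressed template with linear blend skinning; the actual body model additionally applies shape- and pose-dependent correctives that are omitted here, and all names below are illustrative.

import numpy as np

def pose_dressed_mesh(body_verts, cloth_offsets, skin_weights, joint_transforms):
    """Dress a template body with a per-vertex clothing offset layer and pose
    the result with linear blend skinning.

    body_verts       : (V, 3) template body vertices
    cloth_offsets    : (V, 3) clothing displacements on the same topology
    skin_weights     : (V, J) per-vertex skinning weights (rows sum to 1)
    joint_transforms : (J, 4, 4) rigid transform per joint
    """
    v = body_verts + cloth_offsets                                 # add clothing layer
    v_h = np.hstack([v, np.ones((len(v), 1))])                     # homogeneous coords
    per_joint = np.einsum('jab,vb->vja', joint_transforms, v_h)    # (V, J, 4)
    blended = np.einsum('vj,vja->va', skin_weights, per_joint)     # (V, 4)
    return blended[:, :3]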
{ "cite_N": [ "@cite_26", "@cite_41", "@cite_48", "@cite_55", "@cite_42", "@cite_65", "@cite_21", "@cite_40", "@cite_63", "@cite_5", "@cite_31", "@cite_73" ], "mid": [ "2737762407", "2883309205", "2109683418", "2607760177" ], "abstract": [ "Designing and simulating realistic clothing is challenging. Previous methods addressing the capture of clothing from 3D scans have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear multiple pieces of clothing at a time. To estimate the shape of such clothing, track it over time, and render it believably, each garment must be segmented from the others and the body. Our ClothCap approach uses a new multi-part 3D model of clothed bodies, automatically segments each piece of clothing, estimates the minimally clothed body shape and pose under the clothing, and tracks the 3D deformations of the clothing over time. We estimate the garments and their motion from 4D scans; that is, high-resolution 3D scans of the subject in motion at 60 fps. ClothCap is able to capture a clothed person in motion, extract their clothing, and retarget the clothing to new body shapes; this provides a step towards virtual try-on.", "Image-based virtual try-on systems for fitting a new in-shop clothes into a person image have attracted increasing research attention, yet is still challenging. A desirable pipeline should not only transform the target clothes into the most fitting shape seamlessly but also preserve well the clothes identity in the generated image, that is, the key characteristics (e.g. texture, logo, embroidery) that depict the original clothes. However, previous image-conditioned generation works fail to meet these critical requirements towards the plausible virtual try-on performance since they fail to handle large spatial misalignment between the input image and target clothes. Prior work explicitly tackled spatial deformation using shape context matching, but failed to preserve clothing details due to its coarse-to-fine strategy. In this work, we propose a new fully-learnable Characteristic-Preserving Virtual Try-On Network (CP-VTON) for addressing all real-world challenges in this task. First, CP-VTON learns a thin-plate spline transformation for transforming the in-shop clothes into fitting the body shape of the target person via a new Geometric Matching Module (GMM) rather than computing correspondences of interest points as prior works did. Second, to alleviate boundary artifacts of warped clothes and make the results more realistic, we employ a Try-On Module that learns a composition mask to integrate the warped clothes and the rendered image to ensure smoothness. Extensive experiments on a fashion dataset demonstrate our CP-VTON achieves the state-of-the-art virtual try-on performance both qualitatively and quantitatively.", "We present a new performance capture approach that incorporates a physically-based cloth model to reconstruct a rigged fully-animatable virtual double of a real person in loose apparel from multi-view video recordings. Our algorithm only requires a minimum of manual interaction. Without the use of optical markers in the scene, our algorithm first reconstructs skeleton motion and detailed time-varying surface geometry of a real person from a reference video sequence. 
These captured reference performance data are then analyzed to automatically identify non-rigidly deforming pieces of apparel on the animated geometry. For each piece of apparel, parameters of a physically-based real-time cloth simulation model are estimated, and surface geometry of occluded body regions is approximated. The reconstructed character model comprises a skeleton-based representation for the actual body parts and a physically-based simulation model for the apparel. In contrast to previous performance capture methods, we can now also create new real-time animations of actors captured in general apparel.", "3D garment capture is an important component for various applications such as free-view point video, virtual avatars, online shopping, and virtual cloth fitting. Due to the complexity of the deformations, capturing 3D garment shapes requires controlled and specialized setups. A viable alternative is image-based garment capture. Capturing 3D garment shapes from a single image, however, is a challenging problem and the current solutions come with assumptions on the lighting, camera calibration, complexity of human or mannequin poses considered, and more importantly a stable physical state for the garment and the underlying human body. In addition, most of the works require manual interaction and exhibit high run-times. We propose a new technique that overcomes these limitations, making garment shape estimation from an image a practical approach for dynamic garment capture. Starting from synthetic garment shape data generated through physically based simulations from various human bodies in complex poses obtained through Mocap sequences, and rendered under varying camera positions and lighting conditions, our novel method learns a mapping from rendered garment images to the underlying 3D garment model. This is achieved by training Convolutional Neural Networks CNN-s to estimate 3D vertex displacements from a template mesh with a specialized loss function. We illustrate that this technique is able to recover the global shape of dynamic 3D garments from a single image under varying factors such as challenging human poses, self occlusions, various camera poses and lighting conditions, at interactive rates. Improvement is shown if more than one view is integrated. Additionally, we show applications of our method to videos." ] }
1907.13615
2965843368
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed humans and thus do not capture the complexity of dressed humans in common images and videos. To address this, we learn a generative 3D mesh model of clothing from 3D scans of people with varying pose. Going beyond previous work, our generative model is conditioned on different clothing types, giving the ability to dress different body shapes in a variety of clothing. To do so, we train a conditional Mesh-VAE-GAN on clothing displacements from a 3D SMPL body model. This generative clothing model enables us to sample various types of clothing, in novel poses, on top of SMPL. With a focus on clothing geometry, the model captures both global shape and local structure, effectively extending the SMPL model to add clothing. To our knowledge, this is the first conditional VAE-GAN that works on 3D meshes. For clothing specifically, it is the first such model that directly dresses 3D human body meshes and generalizes to different poses.
Few clothing models learned from real data have been shown to generalize to new poses. Neophytou and Hilton @cite_8 learn a layered garment model on top of SCAPE @cite_53 from dynamic sequences, but generalization to novel poses is not demonstrated. Yang et al. @cite_34 train a neural network to regress a PCA-based representation of clothing, but generalization is only shown on the same sequence or on the same subject. Lähner et al. @cite_0 learn a garment-specific pose-deformation model using low-frequency PCA components and high-frequency normal maps. While the achieved quality is good, the model is garment specific and does not provide a solution for full-body clothing. Most importantly, the aforementioned models are deterministic and produce single point estimates. In contrast, our model is probabilistic, which allows us to sample clothing. Our motivation for learning a generative model is that clothing shape is intrinsically probabilistic; conditioned on a single pose, multiple clothing deformations are possible. A conceptually different approach to ours is to infer the parameters of a physical model from 3D scan sequences @cite_33 and show generalization to novel poses. However, the inference problem is difficult, and unlike our model, the resulting physics simulator is not differentiable.
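What a probabilistic clothing model buys over a deterministic regressor is the ability to draw several plausible clothing layers for a single condition (pose and clothing type). The sketch below assumes a hypothetical, already-trained conditional decoder decode(z, condition) that maps a latent code and a condition to per-vertex offsets; everything here is illustrative rather than the actual model.

import numpy as np

def sample_clothing(decode, condition, n_samples=5, latent_dim=16, rng=None):
    """Draw several clothing layers for one condition from a (hypothetical,
    already-trained) conditional decoder.  `decode(z, condition)` is assumed
    to return per-vertex offsets; a deterministic regressor would instead
    return a single point estimate per condition."""
    rng = rng or np.random.default_rng(0)
    zs = rng.normal(size=(n_samples, latent_dim))
    return [decode(z, condition) for z in zs]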
{ "cite_N": [ "@cite_33", "@cite_8", "@cite_53", "@cite_0", "@cite_34" ], "mid": [ "2757508077", "2964318046", "2886416285", "2136393262" ], "abstract": [ "We present a novel and effective approach for generating new clothing on a wearer through generative adversarial learning. Given an input image of a person and a sentence describing a different outfit, our model \"redresses\" the person as desired, while at the same time keeping the wearer and her his pose unchanged. Generating new outfits with precise regions conforming to a language description while retaining wearer's body structure is a new challenging task. Existing generative adversarial networks are not ideal in ensuring global coherence of structure given both the input photograph and language description as conditions. We address this challenge by decomposing the complex generative process into two conditional stages. In the first stage, we generate a plausible semantic segmentation map that obeys the wearer's pose as a latent spatial arrangement. An effective spatial constraint is formulated to guide the generation of this semantic segmentation map. In the second stage, a generative model with a newly proposed compositional mapping layer is used to render the final image with precise regions and textures conditioned on this map. We extended the DeepFashion dataset [8] by collecting sentence descriptions for 79K images. We demonstrate the effectiveness of our approach through both quantitative and qualitative evaluations. A user study is also conducted. The codes and the data are available at this http URL edu.hk projects FashionGAN .", "We present a novel and effective approach for generating new clothing on a wearer through generative adversarial learning. Given an input image of a person and a sentence describing a different outfit, our model “redresses” the person as desired, while at the same time keeping the wearer and her his pose unchanged. Generating new outfits with precise regions conforming to a language description while retaining wearer’s body structure is a new challenging task. Existing generative adversarial networks are not ideal in ensuring global coherence of structure given both the input photograph and language description as conditions. We address this challenge by decomposing the complex generative process into two conditional stages. In the first stage, we generate a plausible semantic segmentation map that obeys the wearer’s pose as a latent spatial arrangement. An effective spatial constraint is formulated to guide the generation of this semantic segmentation map. In the second stage, a generative model with a newly proposed compositional mapping layer is used to render the final image with precise regions and textures conditioned on this map. We extended the DeepFashion dataset [8] by collecting sentence descriptions for 79K images. We demonstrate the effectiveness of our approach through both quantitative and qualitative evaluations. A user study is also conducted.", "We present a novel method to generate accurate and realistic clothing deformation from real data capture. Previous methods for realistic cloth modeling mainly rely on intensive computation of physics-based simulation (with numerous heuristic parameters), while models reconstructed from visual observations typically suffer from lack of geometric details. Here, we propose an original framework consisting of two modules that work jointly to represent global shape deformation as well as surface details with high fidelity. 
Global shape deformations are recovered from a subspace model learned from 3D data of clothed people in motion, while high frequency details are added to normal maps created using a conditional Generative Adversarial Network whose architecture is designed to enforce realism and temporal consistency. This leads to unprecedented high-quality rendering of clothing deformation sequences, where fine wrinkles from (real) high resolution observations can be recovered. In addition, as the model is learned independently from body shape and pose, the framework is suitable for applications that require retargeting (e.g., body animation). Our experiments show original high quality results with a flexible model. We claim an entirely data-driven approach to realistic cloth wrinkle generation is possible.", "In this paper, we present a two-level generative model for representing the images and surface depth maps of drapery and clothes. The upper level consists of a number of folds which will generate the high contrast (ridge) areas with a dictionary of shading primitives (for 2D images) and fold primitives (for 3D depth maps). These primitives are represented in parametric forms and are learned in a supervised learning phase using 3D surfaces of clothes acquired through photometric stereo. The lower level consists of the remaining flat areas which fill between the folds with a smoothness prior (Markov random field). We show that the classical ill-posed problem-shape from shading (SFS) can be much improved by this two-level model for its reduced dimensionality and incorporation of middle-level visual knowledge, i.e., the dictionary of primitives. Given an input image, we first infer the folds and compute a sketch graph using a sketch pursuit algorithm as in the primal sketch (, 2003). The 3D folds are estimated by parameter fitting using the fold dictionary and they form the \"skeleton\" of the drapery cloth surfaces. Then, the lower level is computed by conventional SFS method using the fold areas as boundary conditions. The two levels interact at the final stage by optimizing a joint Bayesian posterior probability on the depth map. We show a number of experiments which demonstrate more robust results in comparison with state-of-the-art work. In a broader scope, our representation can be viewed as a two-level inhomogeneous MRF model which is applicable to general shape-from-X problems. Our study is an attempt to revisit Marr's idea (Marr and Freeman, 1982) of computing the 2frac12D sketch from primal sketch. In a companion paper (Barbu and Zhu, 2005), we study shape from stereo based on a similar two-level generative sketch representation." ] }
1907.13615
2965843368
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed humans and thus do not capture the complexity of dressed humans in common images and videos. To address this, we learn a generative 3D mesh model of clothing from 3D scans of people with varying pose. Going beyond previous work, our generative model is conditioned on different clothing types, giving the ability to dress different body shapes in a variety of clothing. To do so, we train a conditional Mesh-VAE-GAN on clothing displacements from a 3D SMPL body model. This generative clothing model enables us to sample various types of clothing, in novel poses, on top of SMPL. With a focus on clothing geometry, the model captures both global shape and local structure, effectively extending the SMPL model to add clothing. To our knowledge, this is the first conditional VAE-GAN that works on 3D meshes. For clothing specifically, it is the first such model that directly dresses 3D human body meshes and generalizes to different poses.
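As a rough illustration of the idea in the abstract above, the sketch below shows a conditional VAE over per-vertex clothing displacements added on top of a body template. It is a minimal sketch under assumed dimensions and a plain MLP encoder/decoder, not the graph-convolutional Mesh-VAE-GAN the paper describes; NUM_VERTS, COND_DIM and LATENT are illustrative assumptions.

# Hedged sketch: a conditional VAE over per-vertex clothing displacements.
# The MLP encoder/decoder and all sizes are illustrative assumptions, not
# the architecture of the cited paper.
import torch
import torch.nn as nn

NUM_VERTS = 6890   # e.g. SMPL vertex count (assumed)
COND_DIM  = 16     # clothing-type + pose encoding (assumed)
LATENT    = 64

class CondDisplacementVAE(nn.Module):
    def __init__(self):
        super().__init__()
        d_in = NUM_VERTS * 3
        self.enc = nn.Sequential(nn.Linear(d_in + COND_DIM, 512), nn.ReLU(),
                                 nn.Linear(512, 2 * LATENT))
        self.dec = nn.Sequential(nn.Linear(LATENT + COND_DIM, 512), nn.ReLU(),
                                 nn.Linear(512, d_in))

    def forward(self, disp, cond):
        # disp: (B, NUM_VERTS, 3) offsets from the minimally-clothed body,
        # cond: (B, COND_DIM) conditioning code.
        x = torch.cat([disp.flatten(1), cond], dim=1)
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        recon = self.dec(torch.cat([z, cond], dim=1)).view_as(disp)
        return recon, mu, logvar

def vae_loss(recon, disp, mu, logvar, kl_weight=1e-3):
    rec = ((recon - disp) ** 2).mean()
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl_weight * kl

# Usage idea: dressed vertices = template body vertices + sampled displacements.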
Generative models for 3D meshes. Generative models for 3D shapes are usually based on PCA @cite_35 or its robust versions @cite_43 . Alternatively, deep learning methods such as Variational Autoencoders (VAE) @cite_54 and Generative Adversarial Networks (GAN) @cite_66 have shown state-of-the-art results in generating 2D images @cite_32 and voxels @cite_72 . However, a voxel representation is not well suited to modeling clothing surfaces. Compared to voxels @cite_45 @cite_56 @cite_67 @cite_39 and point clouds @cite_7 @cite_60 @cite_22 @cite_24 @cite_49 , meshes are more suitable for 3D clothing data because of their computational efficiency and flexibility in modeling both global and local information. Generalizing GANs to irregular structures, such as graphs and meshes, however, is not trivial.
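For contrast with the deep generative models discussed in the paragraph above, a PCA-based shape model of the kind cited at its start can be sketched in a few lines. The (N, V, 3) data layout and component count are assumptions, and all meshes are assumed to be registered to a common topology.

# Hedged sketch of a linear (PCA) generative model over registered meshes:
# every training mesh must share the same vertex count and ordering.
# Shapes and the number of components are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_shape_model(meshes, n_components=10):
    """meshes: (N, V, 3) array of registered vertex positions."""
    flat = meshes.reshape(len(meshes), -1)   # (N, 3V)
    return PCA(n_components=n_components).fit(flat)

def sample_shape(model, rng=None):
    # Draw shape coefficients from a Gaussian scaled by each component's
    # learned standard deviation, then decode back to vertex positions.
    if rng is None:
        rng = np.random.default_rng()
    coeffs = rng.standard_normal(model.n_components_) * np.sqrt(model.explained_variance_)
    verts = model.mean_ + coeffs @ model.components_
    return verts.reshape(-1, 3)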
{ "cite_N": [ "@cite_35", "@cite_67", "@cite_22", "@cite_7", "@cite_60", "@cite_54", "@cite_32", "@cite_56", "@cite_39", "@cite_43", "@cite_24", "@cite_72", "@cite_45", "@cite_49", "@cite_66" ], "mid": [ "2787366651", "2753652460", "2963735494", "648143168" ], "abstract": [ "In this paper, we propose a novel approach, 3D-RecGAN++, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike existing work which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN++ only takes the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid with a high resolution of 256^3 by recovering the occluded missing regions. The key idea is to combine the generative capabilities of autoencoders and the conditional Generative Adversarial Networks (GAN) framework, to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets and real-world Kinect datasets show that the proposed 3D-RecGAN++ significantly outperforms the state of the art in single view 3D object reconstruction, and is able to reconstruct unseen types of objects.", "In this paper, we propose a novel 3D-RecGAN approach, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike the existing work which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN only takes the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid by filling in the occluded missing regions. The key idea is to combine the generative capabilities of autoencoders and the conditional Generative Adversarial Networks (GAN) framework, to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets show that the proposed 3D-RecGAN significantly outperforms the state of the art in single view 3D object reconstruction, and is able to reconstruct unseen types of objects. Our code and data are available at: this https URL", "In this paper, we propose a novel 3D-RecGAN approach, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike the existing work which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN only takes the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid by filling in the occluded missing regions. The key idea is to combine the generative capabilities of autoencoders and the conditional Generative Adversarial Networks (GAN) framework, to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets show that the proposed 3D-RecGAN significantly outperforms the state of the art in single view 3D object reconstruction, and is able to reconstruct unseen types of objects. Our code and data are available at: https: github.com Yang7879 3D-RecGAN.", "In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. 
Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach [11]. Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset." ] }
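The adversarial training loop shared by the GAN-based completion and image generation methods cited above can be sketched as follows. The generator G and discriminator D (assumed to output one logit per sample) are placeholders, not the architectures of the cited works.

# Hedged sketch of a standard GAN training step; G and D are assumed to be
# defined elsewhere, with D producing a (B, 1) logit tensor.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def gan_step(G, D, real, noise, opt_g, opt_d):
    fake = G(noise)   # noise batch assumed to match the real batch size

    # Discriminator update: real samples -> 1, generated samples -> 0.
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(fake.size(0), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()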
1907.13615
2965843368
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed humans and thus do not capture the complexity of dressed humans in common images and videos. To address this, we learn a generative 3D mesh model of clothing from 3D scans of people with varying pose. Going beyond previous work, our generative model is conditioned on different clothing types, giving the ability to dress different body shapes in a variety of clothing. To do so, we train a conditional Mesh-VAE-GAN on clothing displacements from a 3D SMPL body model. This generative clothing model enables us to sample various types of clothing, in novel poses, on top of SMPL. With a focus on clothing geometry, the model captures both global shape and local structure, effectively extending the SMPL model to add clothing. To our knowledge, this is the first conditional VAE-GAN that works on 3D meshes. For clothing specifically, it is the first such model that directly dresses 3D human body meshes and generalizes to different poses.
To deal with irregular structures like graphs, Bruna et al. @cite_61 introduce graph convolutions. Follow-up work @cite_50 @cite_28 extends these graph convolutions, which have been successfully used @cite_18 @cite_19 to learn representations defined on meshes. Verma et al. @cite_19 use feature-steered graph convolutions for 3D shape analysis. Based on this, Litany et al. @cite_68 use mesh VAEs for mesh completion. Ranjan et al. @cite_18 learn a convolutional mesh-VAE using graph convolutions @cite_50 with mesh down- and up-sampling layers @cite_36 . Although it works well for faces, the mesh sampling layer makes it difficult to capture the high-frequency wrinkles, which are key in clothing. In our work, we capture high-frequency wrinkles by extending the PatchGAN @cite_70 architecture to handle 3D meshes.
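A minimal sketch of the kind of vertex-domain graph convolution referred to in the paragraph above, using a simple first-order propagation over the normalized mesh adjacency. This is a generic operator under assumed layer sizes, not the Chebyshev or feature-steered convolutions of the cited works.

# Hedged sketch: first-order graph convolution x' = D^-1/2 (A + I) D^-1/2 x W
# on mesh vertex features. The dense (V, V) adjacency is only for illustration.
import torch
import torch.nn as nn

def normalized_adjacency(edges, num_verts):
    """edges: (E, 2) long tensor of undirected mesh edges."""
    a = torch.zeros(num_verts, num_verts)
    a[edges[:, 0], edges[:, 1]] = 1.0
    a[edges[:, 1], edges[:, 0]] = 1.0
    a += torch.eye(num_verts)                 # add self-loops
    d_inv_sqrt = a.sum(dim=1).rsqrt()         # degrees >= 1 thanks to self-loops
    return d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]

class MeshGraphConv(nn.Module):
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.lin = nn.Linear(in_feats, out_feats)

    def forward(self, x, a_norm):
        # x: (V, in_feats) vertex features, a_norm: (V, V) normalized adjacency.
        return torch.relu(self.lin(a_norm @ x))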
{ "cite_N": [ "@cite_61", "@cite_18", "@cite_28", "@cite_36", "@cite_70", "@cite_19", "@cite_50", "@cite_68" ], "mid": [ "2900731076", "2770717124", "2601789736", "2785325870" ], "abstract": [ "Unlike images which are represented in regular dense grids, 3D point clouds are irregular and unordered, hence applying convolution on them can be difficult. In this paper, we extend the dynamic filter to a new convolution operation, named PointConv. PointConv can be applied on point clouds to build deep convolutional networks. We treat convolution kernels as nonlinear functions of the local coordinates of 3D points comprised of weight and density functions. With respect to a given point, the weight functions are learned with multi-layer perceptron networks and density functions through kernel density estimation. The most important contribution of this work is a novel reformulation proposed for efficiently computing the weight functions, which allowed us to dramatically scale up the network and significantly improve its performance. The learned convolution kernel can be used to compute translation-invariant and permutation-invariant convolution on any point set in the 3D space. Besides, PointConv can also be used as deconvolution operators to propagate features from a subsampled point cloud back to its original resolution. Experiments on ModelNet40, ShapeNet, and ScanNet show that deep convolutional neural networks built on PointConv are able to achieve state-of-the-art on challenging semantic segmentation benchmarks on 3D point clouds. Besides, our experiments converting CIFAR-10 into a point cloud showed that networks built on PointConv can match the performance of convolutional networks in 2D images of a similar structure.", "This paper introduces a generalization of Convolutional Neural Networks (CNNs) to graphs with irregular linkage structures, especially heterogeneous graphs with typed nodes and schemas. We propose a novel spatial convolution operation to model the key properties of local connectivity and translation invariance, using high-order connection patterns or motifs. We develop a novel deep architecture Motif-CNN that employs an attention model to combine the features extracted from multiple patterns, thus effectively capturing high-order structural and feature information. Our experiments on semi-supervised node classification on real-world social networks and multiple representative heterogeneous graph datasets indicate significant gains of 6-21 over existing graph CNNs and other state-of-the-art techniques.", "By stacking layers of convolution and nonlinearity, convolutional networks (ConvNets) effectively learn from lowlevel to high-level features and discriminative representations. Since the end goal of large-scale recognition is to delineate complex boundaries of thousands of classes, adequate exploration of feature distributions is important for realizing full potentials of ConvNets. However, state-of-theart works concentrate only on deeper or wider architecture design, while rarely exploring feature statistics higher than first-order. We take a step towards addressing this problem. Our method consists in covariance pooling, instead of the most commonly used first-order pooling, of highlevel convolutional features. The main challenges involved are robust covariance estimation given a small sample of large-dimensional features and usage of the manifold structure of covariance matrices. To address these challenges, we present a Matrix Power Normalized Covariance (MPNCOV) method. 
We develop forward and backward propagation formulas regarding the nonlinear matrix functions such that MPN-COV can be trained end-to-end. In addition, we analyze both qualitatively and quantitatively its advantage over the well-known Log-Euclidean metric. On the ImageNet 2012 validation set, by combining MPN-COV we achieve over 4%, 3% and 2.5% gains for AlexNet, VGG-M and VGG-16, respectively; integration of MPN-COV into 50-layer ResNet outperforms ResNet-101 and is comparable to ResNet-152. The source code will be available on the project page: http://www.peihuali.org/MPN-COV.", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4% that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL ." ] }
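The rotation-prediction pretext task summarized in the last cited abstract above can be sketched as follows; the resnet18 backbone and optimizer settings are illustrative assumptions, not the setup used in that work.

# Hedged sketch of rotation-prediction self-supervision: each image is
# rotated by 0/90/180/270 degrees and a classifier predicts which rotation
# was applied. Backbone and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torchvision

def rotate_batch(images):
    """images: (B, C, H, W). Returns rotated copies and rotation labels."""
    rots, labels = [], []
    for k in range(4):   # 0, 90, 180, 270 degrees
        rots.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rots), torch.cat(labels)

backbone = torchvision.models.resnet18(num_classes=4)   # 4-way rotation classifier
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(backbone.parameters(), lr=0.1, momentum=0.9)

def train_step(images):
    x, y = rotate_batch(images)
    loss = criterion(backbone(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()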