Dataset fields: aid (string, 9 to 15 characters), mid (string, 7 to 10 characters), abstract (string, 78 to 2.56k characters), related_work (string, 92 to 1.77k characters), ref_abstract (dict).
1907.13329
2925427079
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
Multiple analyses have been performed for the CSMA CD protocol (CSMA with collision detection), a predecessor of CSMA CA that uses a constant backoff, i.e. the backoff time is not increased exponentially; see @cite_12 @cite_3 @cite_24 @cite_5 @cite_8. In all these approaches, frame collisions have to be modelled explicitly, as part of the protocol description. In contrast, our approach handles collisions in the semantics, thereby achieving a clear separation between protocol specifications and link layer behaviour.
{ "cite_N": [ "@cite_8", "@cite_3", "@cite_24", "@cite_5", "@cite_12" ], "mid": [ "91133127", "2546429222", "1554767603", "2345573725", "1988873236" ], "abstract": [ "", "Probabilistic model checking is a formal verification technique for the analysis of systems that exhibit stochastic behaviour. It has been successfully employed in an extremely wide array of application domains including, for example, communication and multimedia protocols, security and power management. In this chapter we focus on the applicability of these techniques to the analysis of communication protocols. An analysis of the performance of such systems must successfully incorporate several crucial aspects, including concurrency between multiple components, real-time constraints and randomisation. Probabilistic model checking, in particular using probabilistic timed automata, is well suited to such an analysis. We provide an overview of this area, with emphasis on an industrially relevant case study: the IEEE 802.3 (CSMA CD) protocol. We also discuss two contrasting approaches to the implementation of probabilistic model checking, namely those based on numerical computation and those based on discrete-event simulation. Using results from the two tools PRISM and APMC, we summarise the advantages, disadvantages and trade-offs associated with these techniques.", "Reachability analysis for timed automata can be done by enumeration of time zones, which are conjunctions of atomic formulas of the form x-y≤(<)n. This paper shows that some of the atomic formulas in a generated time zone can be removed while the reachability analysis algorithm generates the same set of reachable locations. We call such formulas irrelevant ones. By removing the irrelevant formulas, the number of symbolic states associated with each location is reduced. We present two methods to detect irrelevant formulas. Case studies show that, for some kind of timed automata, these methods may significantly reduce the space requirement for reachability analysis.", "This paper compares the tools SPIN and UPPAAL by modelling and verifying a Collision Avoidance Protocol for an Ethernet-like medium. We find that SPIN is well suited for modelling the untimed aspects of the protocol processes and for expressing the relevant (untimed) properties. However, the modelling of the media becomes awkward due to the lack of broadcast communication in the PROMELA language. On the other hand we find it easy to model the timed aspects using the UPPAAL tool. Especially, the notion of committed locations supports the modelling of broadcast communication. However, the property language of UPPAAL lacks some expressivity for verification of bounded liveness properties, and we indicate how timed testing automata may be constructed for such properties, inspired by the (untimed) checking automata of SPIN.", "Carrier Sense Multiple Access Collision Detection (CSMA CD) is the protocol for carrier transmission access in Ethernet networks (international standard IEEE 802.3). On Ethernet, any Network Interface Card (NIC) can try to send a packet in a channel at any time. If another NIC tries to send a packet at the same time, a collision is said to occur and the packets are discarded. The CSMA CD protocol was designed to avoid this problem, more precisely to allow a NIC to send its packet without collision. This is done by way of a randomized exponential backoff process. 
In this paper, we analyse the correctness of the CSMA CD protocol, using techniques from probabilistic model checking and approximate probabilistic model checking. The tools that we use are PRISM and APMC. Moreover, we provide a quantitative analysis of some CSMA CD properties." ] }
1907.13329
2925427079
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
@cite_12 @cite_3 use probabilistic timed automata (PTAs) to model the protocol, and use probabilistic model checking (PRISM) and approximate probabilistic model checking (APMC) for their analysis. The model described in @cite_24 is based on PTAs as well, but uses a different model checker as verification tool. These approaches, although formal, have very little in common with ours. On the one hand, it is not easy to change the model from CSMA CD to CSMA CA, as the latter requires unbounded data structures (or the like) to model the exponential backoff. On the other hand, as usual, model checking suffers from state-space explosion, so only small networks (usually fewer than ten nodes) can be analysed. This is sufficient and convenient when it comes to finding counterexamples, but these approaches cannot provide guarantees for arbitrary network topologies, whereas ours can.
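To make the modelling issue concrete, the following minimal Python sketch (not part of the cited works or of our process algebra; the window sizes and retry limit are assumed, 802.11-style illustrative values) shows why CSMA CA's binary exponential backoff needs an attempt-indexed, in principle unbounded, quantity, in contrast to the constant backoff of the CSMA CD models discussed above.

```python
import random

# Illustrative 802.11-style parameters (assumed for this sketch, not taken from the paper).
CW_MIN = 15        # initial contention window
CW_MAX = 1023      # cap on the contention window
RETRY_LIMIT = 7    # attempts before a frame is dropped

def backoff_slots(attempt: int) -> int:
    """Number of idle slots waited before retransmission attempt `attempt`.

    The window doubles with every failed attempt (binary exponential backoff),
    which is why a faithful model needs an attempt-indexed, in principle
    unbounded, quantity rather than a single constant."""
    cw = min(CW_MAX, (CW_MIN + 1) * 2 ** attempt - 1)
    return random.randint(0, cw)

def send_frame(collides) -> bool:
    """Attempt to deliver one frame; `collides(slots)` abstracts the shared medium."""
    for attempt in range(RETRY_LIMIT + 1):
        slots = backoff_slots(attempt)
        if not collides(slots):
            return True          # frame delivered
    return False                 # retry limit exceeded, frame dropped

if __name__ == "__main__":
    # Toy medium: a collision occurs with fixed probability 0.3, independent of the wait.
    delivered = sum(send_frame(lambda s: random.random() < 0.3) for _ in range(10_000))
    print(f"delivered {delivered}/10000 frames")
```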
{ "cite_N": [ "@cite_24", "@cite_3", "@cite_12" ], "mid": [ "1554767603", "2546429222", "1988873236" ], "abstract": [ "Reachability analysis for timed automata can be done by enumeration of time zones, which are conjunctions of atomic formulas of the form x-y≤(<)n. This paper shows that some of the atomic formulas in a generated time zone can be removed while the reachability analysis algorithm generates the same set of reachable locations. We call such formulas irrelevant ones. By removing the irrelevant formulas, the number of symbolic states associated with each location is reduced. We present two methods to detect irrelevant formulas. Case studies show that, for some kind of timed automata, these methods may significantly reduce the space requirement for reachability analysis.", "Probabilistic model checking is a formal verification technique for the analysis of systems that exhibit stochastic behaviour. It has been successfully employed in an extremely wide array of application domains including, for example, communication and multimedia protocols, security and power management. In this chapter we focus on the applicability of these techniques to the analysis of communication protocols. An analysis of the performance of such systems must successfully incorporate several crucial aspects, including concurrency between multiple components, real-time constraints and randomisation. Probabilistic model checking, in particular using probabilistic timed automata, is well suited to such an analysis. We provide an overview of this area, with emphasis on an industrially relevant case study: the IEEE 802.3 (CSMA CD) protocol. We also discuss two contrasting approaches to the implementation of probabilistic model checking, namely those based on numerical computation and those based on discrete-event simulation. Using results from the two tools PRISM and APMC, we summarise the advantages, disadvantages and trade-offs associated with these techniques.", "Carrier Sense Multiple Access Collision Detection (CSMA CD) is the protocol for carrier transmission access in Ethernet networks (international standard IEEE 802.3). On Ethernet, any Network Interface Card (NIC) can try to send a packet in a channel at any time. If another NIC tries to send a packet at the same time, a collision is said to occur and the packets are discarded. The CSMA CD protocol was designed to avoid this problem, more precisely to allow a NIC to send its packet without collision. This is done by way of a randomized exponential backoff process. In this paper, we analyse the correctness of the CSMA CD protocol, using techniques from probabilistic model checking and approximate probabilistic model checking. The tools that we use are PRISM and APMC. Moreover, we provide a quantitative analysis of some CSMA CD properties." ] }
1907.13329
2925427079
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
@cite_5 use models of CSMA CD to compare the tools SPIN and UPPAAL. Their models are much more abstract than ours. They prove that no collisions will ever occur, but without stating the exact conditions under which this statement holds.
{ "cite_N": [ "@cite_5" ], "mid": [ "2345573725" ], "abstract": [ "This paper compares the tools SPIN and UPPAAL by modelling and verifying a Collision Avoidance Protocol for an Ethernet-like medium. We find that SPIN is well suited for modelling the untimed aspects of the protocol processes and for expressing the relevant (untimed) properties. However, the modelling of the media becomes awkward due to the lack of broadcast communication in the PROMELA language. On the other hand we find it easy to model the timed aspects using the UPPAAL tool. Especially, the notion of committed locations supports the modelling of broadcast communication. However, the property language of UPPAAL lacks some expressivity for verification of bounded liveness properties, and we indicate how timed testing automata may be constructed for such properties, inspired by the (untimed) checking automata of SPIN." ] }
1907.13329
2925427079
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
There are far fewer formal analysis techniques available when it comes to CSMA CA (with and without virtual carrier sensing). Traditional approaches to the analysis of network protocols are simulation and test-bed experiments; this is also the case for CSMA CA (e.g. @cite_11). While these are important and valid methods for protocol evaluation, in particular for quantitative performance evaluation, they have limitations with regard to evaluating basic protocol correctness properties.
{ "cite_N": [ "@cite_11" ], "mid": [ "1673008520" ], "abstract": [ "To satisfy the needs of wireless data networking, study group 802.11 was formed under IEEE project 802 to recommend an international standard for Wireless Local Area Networks (WLANs). A key part of standard are the Medium Access Control (MAC) protocol needed to support asynchronous and time bounded delivery of data frames. It has been proposed that unslotted Carrier Sense Multiple Access with Collision Avoidance (CSMA CA) be the basis for the IEEE 802.11 WLAN MAC protocols. We conduct performance evaluation of the asynchronous data transfer protocols that are a part of the proposed IEEE 802.11 standard taking into account the decentralized nature of communication between stations, the possibility of “capture”, and presence of “hidden” stations. We compute system throughput and evaluate fairness properties of the proposed MAC protocols. Further, the impact of spatial characteristics on the performance of the system and that observed by individual stations is determined. A comprehensive comparison of the access methods provided by the 802.11 MAC protocol is done and observations are made as to when each should be employed. Extensive numerical and simulation results are presented to help understand the issues involved." ] }
1907.13329
2925427079
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
Following the spirit of the above-mentioned research on model checking CSMA, Fruth @cite_18 analyses CSMA CA using PTAs and PRISM. He considers properties such as the minimum probability of two nodes successfully completing their transmissions, and the maximum expected number of collisions until two nodes have successfully completed their transmissions. As before, this analysis technique does not scale; in @cite_18 the experiments are limited to two contending nodes only.
{ "cite_N": [ "@cite_18" ], "mid": [ "2137013587" ], "abstract": [ "The international standard IEEE 802.15.4 defines low-rate wireless personal area networks, a central communication infrastructure of pervasive computing. In order to avoid conflicts caused by multiple devices transmitting at the same time, it uses a contention resolution algorithm based on randomised exponential backoff that is similar to the ones used in IEEE 802.3 for Ethernet and IEEE 802.11 for Wireless LAN. We model the protocol using probabilistic timed automata, a formalism in which both nondeterministic and probabilistic choice can be represented. The probabilistic timed automaton is transformed into a finite-state Markov decision process via a property-preserving integral-time semantics. Using the probabilistic model checker PRISM, we verify correctness properties, compare different operation modes of the protocol, and analyse performance and accuracy of different model abstractions." ] }
1907.13329
2925427079
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
Beyond model checking, simulation and test-bed experiments, we are only aware of two other formal approaches. In @cite_4, Markov chains are used to derive an accurate analytical model for computing the throughput of CSMA CA. Calculating throughput is orthogonal to our vision of proving (functional) correctness.
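For illustration only, the sketch below solves the two fixed-point equations commonly quoted for the analytical throughput model of @cite_4: the per-slot transmission probability and the conditional collision probability of a saturated station. The equations are reproduced from memory, the parameter values are assumptions, and the full throughput expression is omitted.

```python
def collision_prob(tau: float, n: int) -> float:
    """Probability that a transmitting station collides, i.e. at least one of the
    other n-1 stations transmits in the same slot."""
    return 1.0 - (1.0 - tau) ** (n - 1)

def tau_given_p(p: float, W: int, m: int) -> float:
    """Per-slot transmission probability for a given collision probability p
    (fixed-point equation quoted from memory; W = minimum window, m = max backoff stage)."""
    return (2.0 * (1.0 - 2.0 * p)
            / ((1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)))

def solve_saturation_model(n: int, W: int = 32, m: int = 5):
    """Solve the coupled equations by bisection on tau; returns (tau, p)."""
    lo, hi = 1e-9, 0.999
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tau_given_p(collision_prob(mid, n), W, m) > mid:
            lo = mid          # the fixed point lies above mid
        else:
            hi = mid
    tau = 0.5 * (lo + hi)
    return tau, collision_prob(tau, n)

if __name__ == "__main__":
    for n in (2, 5, 10, 20):
        tau, p = solve_saturation_model(n)   # W and m are illustrative values
        print(f"n={n:2d}  transmission prob tau={tau:.4f}  collision prob p={p:.4f}")
```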
{ "cite_N": [ "@cite_4" ], "mid": [ "2162598825" ], "abstract": [ "The IEEE has standardized the 802.11 protocol for wireless local area networks. The primary medium access control (MAC) technique of 802.11 is called the distributed coordination function (DCF). The DCF is a carrier sense multiple access with collision avoidance (CSMA CA) scheme with binary slotted exponential backoff. This paper provides a simple, but nevertheless extremely accurate, analytical model to compute the 802.11 DCF throughput, in the assumption of finite number of terminals and ideal channel conditions. The proposed analysis applies to both the packet transmission schemes employed by DCF, namely, the basic access and the RTS CTS access mechanisms. In addition, it also applies to a combination of the two schemes, in which packets longer than a given threshold are transmitted according to the RTS CTS mechanism. By means of the proposed model, we provide an extensive throughput performance evaluation of both access mechanisms of the 802.11 protocol." ] }
1907.13329
2925427079
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
An approach aiming at proving the correctness of CSMA CA with virtual carrier sensing, and hence related to ours, is presented in @cite_7. Based on stochastic bigraphs with sharing, it uses rewrite rules to analyse quantitative properties. Although the approach is capable of analysing arbitrary topologies, applying the rewrite rules requires the particular topology to be modelled as a directed acyclic graph structure, which is part of the bigraph.
{ "cite_N": [ "@cite_7" ], "mid": [ "2049318064" ], "abstract": [ "Stochastic bigraphical reactive systems (SBRS) is a recent formalism for modelling systems that evolve in time and space. However, the underlying spatial model is based on sets of trees and thus cannot represent spatial locations that are shared among several entities in a simple or intuitive way. We adopt an extension of the formalism, SBRS with sharing, in which the topology is modelled by a directed acyclic graph structure. We give an overview of SBRS with sharing, we extend it with rule priorities, and then use it to develop a model of the 802.11 CSMA CA RTS CTS protocol with exponential backoff, for an arbitrary network topology with possibly overlapping signals. The model uses sharing to model overlapping connectedness areas, instantaneous prioritised rules for deterministic computations, and stochastic rules with exponential reaction rates to model constant and uniformly distributed timeouts and constant transmission times. Equivalence classes of model states modulo instantaneous reactions yield states in a CTMC that can be analysed using the model checker PRISM. We illustrate the model on a simple example wireless network with three overlapping signals and we present some example quantitative properties." ] }
1907.13359
2966761695
Deep learning algorithms have lately achieved excellent performance in a wide range of fields (e.g., computer vision). However, a severe challenge faced by deep learning is the high dependency on hyper-parameters. The algorithm results may fluctuate dramatically under different configurations of hyper-parameters. Addressing the above issue, this paper presents an efficient Orthogonal Array Tuning Method (OATM) for deep learning hyper-parameter tuning. We describe the OATM approach in five detailed steps and elaborate on it using two widely used deep neural network structures (Recurrent Neural Networks and Convolutional Neural Networks). The proposed method is compared to state-of-the-art hyper-parameter tuning methods, both manual (e.g., grid search and random search) and automatic (e.g., Bayesian Optimization). The experimental results show that OATM can significantly reduce tuning time compared to the state-of-the-art methods while preserving satisfactory performance.
Apart from the aforementioned methods, orthogonal-array-based hyper-parameter tuning has already been used in a range of research areas such as mechanical and electrical engineering. J.A. @cite_6 applied an orthogonal-array-based approach to optimize the cutting parameters in end milling. S.S. @cite_10 optimized wire electrical discharge machining (WEDM) process parameters using the orthogonal array method. These traditional methods are not suited to deep learning algorithms, whereas the effectiveness of OATM has been demonstrated across many research topics; this motivates us to adopt OATM for deep learning hyper-parameter tuning. To the best of our knowledge, our work is among the first studies in this area.
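As an illustration of the idea (not the paper's exact five-step procedure), the sketch below uses the standard L9(3^4) orthogonal array to cover four hyper-parameters with three candidate levels each in nine trials instead of 81, followed by a Taguchi-style main-effect analysis. The hyper-parameter names, candidate levels, and the objective function are hypothetical placeholders.

```python
# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each (0-indexed).
# Every pair of columns contains each combination of levels exactly once.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

# Hypothetical hyper-parameters and candidate levels (placeholders, not from the paper).
FACTORS = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size":    [32, 64, 128],
    "hidden_units":  [64, 128, 256],
    "dropout":       [0.0, 0.25, 0.5],
}

def evaluate(config):
    """Placeholder objective: replace with actual training + validation accuracy."""
    return (1.0
            - abs(config["learning_rate"] - 1e-3) * 100
            - abs(config["dropout"] - 0.25)
            + config["hidden_units"] / 1000.0
            - config["batch_size"] / 1000.0)

names = list(FACTORS)
scores = []
for row in L9:                       # 9 trials instead of 3**4 = 81 grid-search trials
    config = {name: FACTORS[name][level] for name, level in zip(names, row)}
    scores.append(evaluate(config))

# Taguchi-style main-effect analysis: mean score of each level of each factor,
# then pick the best level per factor independently.
best = {}
for j, name in enumerate(names):
    level_means = [sum(s for row, s in zip(L9, scores) if row[j] == lvl) / 3
                   for lvl in range(3)]
    best[name] = FACTORS[name][max(range(3), key=level_means.__getitem__)]

print("best level per factor:", best)
```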
{ "cite_N": [ "@cite_10", "@cite_6" ], "mid": [ "2075314456", "2084841915" ], "abstract": [ "Wire electrical discharge machining (WEDM) is extensively used in machining of conductive materials when precision is of prime importance. Rough cutting operation in WEDM is treated as a challenging one because improvement of more than one machining performance measures viz. met al removal rate (MRR), surface finish (SF) and cutting width (kerf) are sought to obtain a precision work. Using Taguchi’s parameter design, significant machining parameters affecting the performance measures are identified as discharge current, pulse duration, pulse frequency, wire speed, wire tension, and dielectric flow. It has been observed that a combination of factors for optimization of each performance measure is different. In this study, the relationship between control factors and responses like MRR, SF and kerf are established by means of nonlinear regression analysis, resulting in a valid mathematical model. Finally, genetic algorithm, a popular evolutionary approach, is employed to optimize the wire electrical discharge machining process with multiple objectives. The study demonstrates that the WEDM process parameters can be adjusted to achieve better met al removal rate, surface finish and cutting width simultaneously.", "Abstract In this study, the Taguchi method is used to find the optimal cutting parameters for surface roughness in turning. The orthogonal array, the signal-to-noise ratio, and analysis of variance are employed to study the performance characteristics in turning operations of AISI 1030 steel bars using TiN coated tools. Three cutting parameters namely, insert radius, feed rate, and depth of cut, are optimized with considerations of surface roughness. Experimental results are provided to illustrate the effectiveness of this approach." ] }
1907.13216
2965944882
Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit). However, the majority of studies focus only on DNN inference. In this work, we propose DNN training using posits and compare it with floating-point training. We evaluate on both the MNIST and Fashion MNIST corpora, where 16-bit posits outperform 16-bit floating point for end-to-end DNN training.
As early as the 1980s, low-precision arithmetic was explored in shallow neural networks to decrease both compute and memory complexity for training and inference without degrading performance @cite_21 @cite_3 @cite_14 @cite_12. In some scenarios, this bit-precision constraint even improves performance, as the quantization noise acts as a regularizer @cite_17 @cite_22. The outcome of these studies indicates that 16- and 8-bit precision DNN parameters are capable of satisfactorily maintaining performance for both training and inference in shallow networks @cite_3 @cite_14 @cite_17. The capability of low-precision arithmetic has been reevaluated in the deep learning era to reduce memory footprint and energy consumption during training and inference @cite_25 @cite_26 @cite_11 @cite_30 @cite_7 @cite_4 @cite_15 @cite_0 @cite_29 @cite_6 @cite_31 @cite_35 @cite_9 @cite_19.
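The NumPy sketch below illustrates the kind of bit-precision constraint these studies consider: values are quantized to a signed fixed-point grid using either round-to-nearest or the stochastic rounding reported to be important for low-precision training. The word and fraction widths are illustrative assumptions, not taken from any particular cited work.

```python
import numpy as np

def quantize_fixed_point(x, word_bits=16, frac_bits=8, stochastic=False, rng=None):
    """Quantize an array to signed fixed point with `frac_bits` fractional bits.

    With stochastic=True, values are rounded up or down with probability
    proportional to their distance from the two neighbouring grid points --
    the rounding scheme reported to matter for low-precision training."""
    rng = rng or np.random.default_rng(0)
    scale = 2.0 ** frac_bits
    scaled = x * scale
    if stochastic:
        floor = np.floor(scaled)
        scaled = floor + (rng.random(x.shape) < (scaled - floor))
    else:
        scaled = np.round(scaled)
    qmin, qmax = -(2 ** (word_bits - 1)), 2 ** (word_bits - 1) - 1
    return np.clip(scaled, qmin, qmax) / scale

if __name__ == "__main__":
    w = np.random.default_rng(1).normal(0.0, 0.05, size=(4, 4))
    print("max error, nearest   :", np.abs(quantize_fixed_point(w) - w).max())
    print("max error, stochastic:", np.abs(quantize_fixed_point(w, stochastic=True) - w).max())
```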
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_22", "@cite_29", "@cite_3", "@cite_15", "@cite_4", "@cite_21", "@cite_17", "@cite_26", "@cite_7", "@cite_6", "@cite_19", "@cite_25", "@cite_12", "@cite_14", "@cite_9", "@cite_0", "@cite_31", "@cite_11" ], "mid": [ "2962786581", "2924943819", "2111406701", "2563860341", "2075140461", "2947629474", "2946955515", "2096447758", "", "2963374099", "2889797931", "2798956872", "2899063892", "2114508814", "", "1981071268", "2963974650", "2793950911", "2963711383", "2963112338" ], "abstract": [ "Deep neural networks are commonly developed and trained in 32-bit floating point format. Significant gains in performance and energy efficiency could be realized by training and inference in numerical formats optimized for deep learning. Despite advances in limited precision inference in recent years, training of neural networks in low bit-width remains a challenging problem. Here we present the Flexpoint data format, aiming at a complete replacement of 32-bit floating point format training and inference, designed to support modern deep network topologies without modifications. Flexpoint tensors have a shared exponent that is dynamically adjusted to minimize overflows and maximize available dynamic range. We validate Flexpoint by training AlexNet, a deep residual network and a generative adversarial network, using a simulator implemented with the deep learning framework. We demonstrate that 16-bit Flexpoint closely matches 32-bit floating point in training all three models, without any need for tuning of model hyperparameters. Our results suggest Flexpoint as a promising numerical format for future hardware for training and inference.", "Deep neural networks (DNNs) have been demonstrated as effective prognostic models across various domains, e.g. natural language processing, computer vision, and genomics. However, modern-day DNNs demand high compute and memory storage for executing any reasonably complex task. To optimize the inference time and alleviate the power consumption of these networks, DNN accelerators with low-precision representations of data and DNN parameters are being actively studied. An interesting research question is in how low-precision networks can be ported to edge-devices with similar performance as high-precision networks. In this work, we employ the fixed-point, floating point, and posit numerical formats at ≤8-bit precision within a DNN accelerator, Deep Positron, with exact multiply-and-accumulate (EMAC) units for inference. A unified analysis quantifies the trade-offs between overall network efficiency and performance across five classification tasks. Our results indicate that posits are a natural fit for DNN inference, outperforming at ≤8-bit precision, and can be realized with competitive resource requirements relative to those of floating point.", "It is well known that the addition of noise to the input data of a neural network during training can, in some circumstances, lead to significant improvements in generalization performance. Previous work has shown that such training with noise is equivalent to a form of regularization in which an extra term is added to the error function. However, the regularization term, which involves second derivatives of the error function, is not bounded below, and so can lead to difficulties if used directly in a learning algorithm based on error minimization. 
In this paper we show that for the purposes of network training, the regularization term can be reduced to a positive semi-definite form that involves only first derivatives of the network mapping. For a sum-of-squares error function, the regularization term belongs to the class of generalized Tikhonov regularizers. Direct minimization of the regularized error function provides a practical alternative to training with noise.", "Deep neural networks are gaining in popularity as they are used to generate state-of-the-art results for a variety of computer vision and machine learning applications. At the same time, these networks have grown in depth and complexity in order to solve harder problems. Given the limitations in power budgets dedicated to these networks, the importance of low-power, low-memory solutions has been stressed in recent years. While a large number of dedicated hardware using different precisions has recently been proposed, there exists no comprehensive study of different bit precisions and arithmetic in both inputs and network parameters. In this work, we address this issue and perform a study of different bit-precisions in neural networks (from floating-point to fixed-point, powers of two, and binary). In our evaluation, we consider and analyze the effect of precision scaling on both network accuracy and hardware metrics including memory footprint, power and energy consumption, and design area. We also investigate training-time methodologies to compensate for the reduction in accuracy due to limited bit precision and demonstrate that in most cases, precision scaling can deliver significant benefits in design metrics at the cost of very modest decreases in network accuracy. In addition, we propose that a small portion of the benefits achieved when using lower precisions can be forfeited to increase the network size and therefore the accuracy. We evaluate our experiments, using three well-recognized networks and datasets to show its generality. We investigate the trade-offs and highlight the benefits of using lower precisions in terms of energy and memory footprint.", "An artificial neural network (ANN) accelerator named Neuro Turbo was implemented using four recently developed general-purpose 24-b floating-point digital signal processors (DSP) MB86220. The Neuro Turbo is a MIMD (multiple-instruction, multiple-data) parallel processor having four ring-coupled DSPs and four dual-port memories (DPM). It is designed compactly to plug into the extender slots of the NEC personal computer PC98 series. The performance was evaluated by constructing a neural network to recognize the 26 type fonts of the alphabet set. Processing speeds of 2 MCPS (million connections per second) for the learning procedure and 11 MCPS for the forward pass were achieved. >", "This paper presents the first comprehensive empirical study demonstrating the efficacy of the Brain Floating Point (BFLOAT16) half-precision format for Deep Learning training across image classification, speech recognition, language modeling, generative networks and industrial recommendation systems. BFLOAT16 is attractive for Deep Learning training for two reasons: the range of values it can represent is the same as that of IEEE 754 floating-point format (FP32) and conversion to from FP32 is simple. Maintaining the same range as FP32 is important to ensure that no hyper-parameter tuning is required for convergence; e.g., IEEE 754 compliant half-precision floating point (FP16) requires hyper-parameter tuning. 
In this paper, we discuss the flow of tensors and various key operations in mixed precision training, and delve into details of operations, such as the rounding modes for converting FP32 tensors to BFLOAT16. We have implemented a method to emulate BFLOAT16 operations in Tensorflow, Caffe2, IntelCaffe, and Neon for our experiments. Our results show that deep learning training using BFLOAT16 tensors achieves the same state-of-the-art (SOTA) results across domains as FP32 tensors in the same number of iterations and with no changes to hyper-parameters.", "Reduced precision computation for deep neural networks is one of the key areas addressing the widening compute gap driven by an exponential growth in model size. In recent years, deep learning training has largely migrated to 16-bit precision, with significant gains in performance and energy efficiency. However, attempts to train DNNs at 8-bit precision have met with significant challenges because of the higher precision and dynamic range requirements of back-propagation. In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. In addition to reducing compute precision, we also reduced the precision requirements for the master copy of weights from 32-bit to 16-bit. We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16) and a broader set of workloads (Resnet-18 34 50, GNMT, Transformer) than previously reported. We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point for improved error propagation. We also examine the impact of quantization noise on generalization and propose a stochastic rounding technique to address gradient noise. As a result of applying all these techniques, we report slightly higher validation accuracy compared to full precision baseline.", "The authors describe a complementary met al-oxide-semiconductor (CMOS) very-large-scale integrated (VLSI) circuit implementing a connectionist neural-network model. It consists of an array of 54 simple processors fully interconnected with a programmable connection matrix. This experimental design tests the behavior of a large network of processors integrated on a chip. The circuit can be operated in several different configurations by programming the interconnections between the processors. Tests made with the circuit working as an associative memory and as a pattern classifier were so encouraging that the chip has been interfaced to a minicomputer and is being used as a coprocessor in pattern-recognition experiments. This mode of operation is making it possible to test the chip's behavior in a real application and study how pattern-recognition algorithms can be mapped in such a network. >", "", "Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of lowprecision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. 
We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.", "The state-of-the-art hardware platforms for training deep neural networks are moving from traditional single precision (32-bit) computations towards 16 bits of precision - in large part due to the high energy efficiency and smaller bit storage associated with using reduced-precision representations. However, unlike inference, training with numbers represented with less than 16 bits has been challenging due to the need to maintain fidelity of the gradient computations during back-propagation. Here we demonstrate, for the first time, the successful training of deep neural networks using 8-bit floating point numbers while fully maintaining the accuracy on a spectrum of deep learning models and datasets. In addition to reducing the data and computation precision to 8 bits, we also successfully reduce the arithmetic precision for additions (used in partial product accumulation and weight updates) from 32 bits to 16 bits through the introduction of a number of key ideas including chunk-based accumulation and floating point stochastic rounding. The use of these novel techniques lays the foundation for a new generation of hardware training platforms with the potential for 2-4 times improved throughput over today's systems.", "To meet the computational demands required of deep learning, cloud operators are turning toward specialized hardware for improved efficiency and performance. Project Brainwave, Microsofts principal infrastructure for AI serving in real time, accelerates deep neural network (DNN) inferencing in major services such as Bings intelligent search features and Azure. Exploiting distributed model parallelism and pinning over low-latency hardware microservices, Project Brainwave serves state-of-the-art, pre-trained DNN models with high efficiencies at low batch sizes. A high-performance, precision-adaptable FPGA soft processor is at the heart of the system, achieving up to 39.5 teraflops (Tflops) of effective performance at Batch 1 on a state-of-the-art Intel Stratix 10 FPGA.", "Reducing hardware overhead of neural networks for faster or lower power inference and training is an active area of research. Uniform quantization using integer multiply-add has been thoroughly investigated, which requires learning many quantization parameters, fine-tuning training or other prerequisites. Little effort is made to improve floating point relative to this baseline; it remains energy inefficient, and word size reduction yields drastic loss in needed dynamic range. We improve floating point to be more energy efficient than equivalent bit width integer hardware on a 28 nm ASIC process while retaining accuracy in 8 bits with a novel hybrid log multiply linear add, Kulisch accumulation and tapered encodings from Gustafson's posit format. With no network retraining, and drop-in replacement of all math and float32 parameters via round-to-nearest-even only, this open-sourced 8-bit log float is within 0.9 top-1 and 0.2 top-5 accuracy of the original float32 ResNet-50 CNN model on ImageNet. Unlike int8 quantization, it is still a general purpose floating point arithmetic, interpretable out-of-the-box. Our 8 38-bit log float multiply-add is synthesized and power profiled at 28 nm at 0.96x the power and 1.12x the area of 8 32-bit integer multiply-add. 
In 16 bits, our log float multiply-add is 0.59x the power and 0.68x the area of IEEE 754 float16 fused multiply-add, maintaining the same signficand precision and dynamic range, proving useful for training ASICs as well.", "", "", "The motivation for the X1 architecture described was to develop inexpensive commercial hardware suitable for solving large, real-world problems. Such an architecture must be systems oriented and flexible enough to execute any neural network algorithm and work cooperatively with existing hardware and software. The early application of neural networks must proceed in conjunction with existing technologies, both hardware and software. Using state-of-the-art technology and innovative architectural techniques, the author's architecture approaches the speed and cost of analog systems while retaining much of the flexibility of large, general-purpose parallel machines. The author has aimed at a particular set of applications and has made cost-performance tradeoffs accordingly. The goal is an architecture that could be considered a general-purpose microprocessor for neurocomputing", "Performing the inference step of deep learning in resource constrained environments, such as embedded devices, is challenging. Success requires optimization at both software and hardware levels. Low precision arithmetic and specifically low precision fixed-point number systems have become the standard for performing deep learning inference. However, representing non-uniform data and distributed parameters (e.g. weights) by using uniformly distributed fixed-point values is still a major drawback when using this number system. Recently, the posit number system was proposed, which represents numbers in a non-uniform manner. Therefore, in this paper we are motivated to explore using the posit number system to represent the weights of Deep Convolutional Neural Networks. However, we do not apply any quantization techniques and hence the network weights do not require re-training. The results of this exploration show that using the posit number system outperformed the fixed point number system in terms of accuracy and memory utilization.", "Convolutional neural networks (CNNs) have led to remarkable progress in a number of key pattern recognition tasks, such as visual scene understanding and speech recognition, that potentially enable numerous applications. Consequently, there is a significant need to deploy trained CNNs to resource-constrained embedded systems. Inference using pretrained modern deep CNNs, however, requires significant system resources, including computation, energy, and memory space. To enable efficient implementation of trained CNNs, a viable approach is to approximate the network with an implementation-friendly model with only negligible degradation in classification accuracy. We present Ristretto, a CNN approximation framework that enables empirical investigation of the tradeoff between various number representation and word width choices and the classification accuracy of the model. Specifically, Ristretto analyzes a given CNN with respect to numerical range required to represent weights, activations, and intermediate results of convolutional and fully connected layers, and subsequently, it simulates the impact of reduced word width or lower precision arithmetic operators on the model accuracy. Moreover, Ristretto can fine-tune a quantized network to further improve its classification accuracy under a given number representation and word width configuration. 
Given a maximum classification accuracy degradation tolerance of 1 , we use Ristretto to demonstrate that three ImageNet networks can be condensed to use 8-bit dynamic fixed point for network weights and activations. Ristretto is available as a popular open-source software project 1 and has already been viewed over 1 000 times on Github as of the submission of this brief. 1 https: github.com pmgysel caffe", "", "Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce methodology for training deep neural networks using half-precision floating point numbers, without losing model accuracy or having to modify hyper-parameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmetic. Weights, activations, and gradients are stored in IEEE half-precision format. Since this format has a narrower range than single-precision we propose three techniques for preventing the loss of critical information. Firstly, we recommend maintaining a single-precision copy of weights that accumulates the gradients after each optimizer step (this copy is rounded to half-precision for the forward- and back-propagation). Secondly, we propose loss-scaling to preserve gradient values with small magnitudes. Thirdly, we use half-precision arithmetic that accumulates into single-precision outputs, which are converted to half-precision before storing to memory. We demonstrate that the proposed methodology works across a wide variety of tasks and modern large scale (exceeding 100 million parameters) model architectures, trained on large datasets." ] }
1907.13216
2965944882
Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit). However, the majority of studies focus only on DNN inference. In this work, we propose DNN training using posits and compare it with floating-point training. We evaluate on both the MNIST and Fashion MNIST corpora, where 16-bit posits outperform 16-bit floating point for end-to-end DNN training.
Aside from the BFP numerical format, Narang @cite_11 explored mixed-precision floating point, using 16-bit floating point weights, activations, and gradients during both the forward and backward passes. To prevent accuracy loss caused by underflow in 16-bit floating point, the weights are updated in a 32-bit floating point master copy. Additionally, to prevent gradients with very small magnitudes from becoming zero when represented in 16-bit floating point, a new loss scaling approach is proposed.
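The sketch below illustrates the structure of such a mixed-precision step on a toy linear model: float16 forward and backward passes, a float32 master copy of the weights, and a static loss scale. It is a schematic NumPy approximation of the idea (actual implementations rely on framework-level mixed-precision support), and the model, learning rate, and loss-scale value are assumptions.

```python
import numpy as np

LOSS_SCALE = 1024.0   # static scale; real systems may adapt it dynamically

def train_step(master_w, x, y, lr=0.1):
    """One mixed-precision step on a toy linear model y ~ x @ w.

    Forward/backward run in float16; the master copy of the weights and the
    update itself stay in float32, and the loss is scaled up before the
    backward pass and the gradient unscaled before the update."""
    w16 = master_w.astype(np.float16)           # half-precision working copy
    x16, y16 = x.astype(np.float16), y.astype(np.float16)

    pred = x16 @ w16                            # forward pass in float16
    err = pred - y16
    scaled_loss = LOSS_SCALE * np.mean(err.astype(np.float32) ** 2)

    # Backward pass: analytic gradient of the scaled MSE loss, still in float16.
    grad16 = (2.0 * LOSS_SCALE / len(y16)) * (x16.T @ err)

    # Unscale in float32 and update the float32 master weights.
    grad32 = grad16.astype(np.float32) / LOSS_SCALE
    return master_w - lr * grad32, float(scaled_loss) / LOSS_SCALE

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(256, 8)).astype(np.float32)
    true_w = rng.normal(size=8).astype(np.float32)
    y = x @ true_w
    w = np.zeros(8, dtype=np.float32)
    for step in range(200):
        w, loss = train_step(w, x, y)
    print("final loss:", loss)
```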
{ "cite_N": [ "@cite_11" ], "mid": [ "2963112338" ], "abstract": [ "Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce methodology for training deep neural networks using half-precision floating point numbers, without losing model accuracy or having to modify hyper-parameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmetic. Weights, activations, and gradients are stored in IEEE half-precision format. Since this format has a narrower range than single-precision we propose three techniques for preventing the loss of critical information. Firstly, we recommend maintaining a single-precision copy of weights that accumulates the gradients after each optimizer step (this copy is rounded to half-precision for the forward- and back-propagation). Secondly, we propose loss-scaling to preserve gradient values with small magnitudes. Thirdly, we use half-precision arithmetic that accumulates into single-precision outputs, which are converted to half-precision before storing to memory. We demonstrate that the proposed methodology works across a wide variety of tasks and modern large scale (exceeding 100 million parameters) model architectures, trained on large datasets." ] }
1907.13216
2965944882
Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit). However, the majority of studies focus only on DNN inference. In this work, we propose DNN training using posits and compare it with floating-point training. We evaluate on both the MNIST and Fashion MNIST corpora, where 16-bit posits outperform 16-bit floating point for end-to-end DNN training.
Recently, Wang and Mellempudi proposed methods to reduce the bit-precision of weights, activations, and gradients to 8 bits by exhaustively analyzing DNN parameters during training @cite_7 @cite_4. In @cite_7, a new chunk-based accumulation scheme is presented to solve the truncation issue caused by adding numbers of large and small magnitude, successfully reducing the number of bits for the accumulator and weight updates to 16. To avoid the loss scaling required in mixed-precision floating point training, Kalamkar @cite_15 studied the brain floating point (BFLOAT-16) half-precision format, which has reduced fractional precision (a 7-bit stored mantissa) but the same dynamic range as 32-bit floating point (an 8-bit exponent). A side effect of this representation is that conversion between BFLOAT-16 and 32-bit IEEE floating point is simple, reducing conversion overhead during training. When training a ResNet model on the ImageNet dataset, BFLOAT-16 achieves the same performance as 32-bit floating point.
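To see why the BFLOAT-16/float32 conversion is simple, the minimal sketch below rounds a float32 bit pattern to bfloat16 by discarding the low 16 bits with round-to-nearest-even, and widens it back by appending zeros. It is an illustration only and does not treat NaN payloads specially.

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Round a float32 value to the nearest bfloat16 bit pattern (round-to-nearest-even).

    bfloat16 keeps float32's sign and 8-bit exponent and truncates the mantissa
    to 7 stored bits, so the conversion is just an operation on the top 16 bits.
    (Minimal sketch: NaN payloads are not treated specially.)"""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    lower = bits & 0xFFFF
    rounding = 0x7FFF + ((bits >> 16) & 1)   # bias that implements round half to even
    return ((bits + rounding) >> 16) & 0xFFFF if lower else (bits >> 16)

def bfloat16_bits_to_float32(b: int) -> float:
    """Widen a bfloat16 bit pattern back to float32 by appending 16 zero bits."""
    return struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))[0]

if __name__ == "__main__":
    for v in (1.0, 3.14159265, 1e-3, 65504.0, 3.0e38):
        b = float32_to_bfloat16_bits(v)
        print(f"{v:>12g} -> 0x{b:04x} -> {bfloat16_bits_to_float32(b):g}")
```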
{ "cite_N": [ "@cite_15", "@cite_4", "@cite_7" ], "mid": [ "2947629474", "2946955515", "2889797931" ], "abstract": [ "This paper presents the first comprehensive empirical study demonstrating the efficacy of the Brain Floating Point (BFLOAT16) half-precision format for Deep Learning training across image classification, speech recognition, language modeling, generative networks and industrial recommendation systems. BFLOAT16 is attractive for Deep Learning training for two reasons: the range of values it can represent is the same as that of IEEE 754 floating-point format (FP32) and conversion to from FP32 is simple. Maintaining the same range as FP32 is important to ensure that no hyper-parameter tuning is required for convergence; e.g., IEEE 754 compliant half-precision floating point (FP16) requires hyper-parameter tuning. In this paper, we discuss the flow of tensors and various key operations in mixed precision training, and delve into details of operations, such as the rounding modes for converting FP32 tensors to BFLOAT16. We have implemented a method to emulate BFLOAT16 operations in Tensorflow, Caffe2, IntelCaffe, and Neon for our experiments. Our results show that deep learning training using BFLOAT16 tensors achieves the same state-of-the-art (SOTA) results across domains as FP32 tensors in the same number of iterations and with no changes to hyper-parameters.", "Reduced precision computation for deep neural networks is one of the key areas addressing the widening compute gap driven by an exponential growth in model size. In recent years, deep learning training has largely migrated to 16-bit precision, with significant gains in performance and energy efficiency. However, attempts to train DNNs at 8-bit precision have met with significant challenges because of the higher precision and dynamic range requirements of back-propagation. In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. In addition to reducing compute precision, we also reduced the precision requirements for the master copy of weights from 32-bit to 16-bit. We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16) and a broader set of workloads (Resnet-18 34 50, GNMT, Transformer) than previously reported. We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point for improved error propagation. We also examine the impact of quantization noise on generalization and propose a stochastic rounding technique to address gradient noise. As a result of applying all these techniques, we report slightly higher validation accuracy compared to full precision baseline.", "The state-of-the-art hardware platforms for training deep neural networks are moving from traditional single precision (32-bit) computations towards 16 bits of precision - in large part due to the high energy efficiency and smaller bit storage associated with using reduced-precision representations. However, unlike inference, training with numbers represented with less than 16 bits has been challenging due to the need to maintain fidelity of the gradient computations during back-propagation. Here we demonstrate, for the first time, the successful training of deep neural networks using 8-bit floating point numbers while fully maintaining the accuracy on a spectrum of deep learning models and datasets. 
In addition to reducing the data and computation precision to 8 bits, we also successfully reduce the arithmetic precision for additions (used in partial product accumulation and weight updates) from 32 bits to 16 bits through the introduction of a number of key ideas including chunk-based accumulation and floating point stochastic rounding. The use of these novel techniques lays the foundation for a new generation of hardware training platforms with the potential for 2-4 times improved throughput over today's systems." ] }
1907.13216
2965944882
Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit). However, the majority of studies focus only on DNN inference. In this work, we propose DNN training using posits and compare it with floating-point training. We evaluate on both the MNIST and Fashion MNIST corpora, where 16-bit posits outperform 16-bit floating point for end-to-end DNN training.
This research builds on earlier studies @cite_31 @cite_35 @cite_9 @cite_19 @cite_33 and, for the first time, studies feedforward neural network training with posits on the MNIST and Fashion MNIST datasets.
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_9", "@cite_19", "@cite_31" ], "mid": [ "2924943819", "2958208994", "2963974650", "2899063892", "2963711383" ], "abstract": [ "Deep neural networks (DNNs) have been demonstrated as effective prognostic models across various domains, e.g. natural language processing, computer vision, and genomics. However, modern-day DNNs demand high compute and memory storage for executing any reasonably complex task. To optimize the inference time and alleviate the power consumption of these networks, DNN accelerators with low-precision representations of data and DNN parameters are being actively studied. An interesting research question is in how low-precision networks can be ported to edge-devices with similar performance as high-precision networks. In this work, we employ the fixed-point, floating point, and posit numerical formats at ≤8-bit precision within a DNN accelerator, Deep Positron, with exact multiply-and-accumulate (EMAC) units for inference. A unified analysis quantifies the trade-offs between overall network efficiency and performance across five classification tasks. Our results indicate that posits are a natural fit for DNN inference, outperforming at ≤8-bit precision, and can be realized with competitive resource requirements relative to those of floating point.", "The posit number system is arguably the most promising and discussed topic in Arithmetic nowadays. The recent breakthroughs claimed by the format proposed by John L. Gustafson have put posits in the spotlight. In this work, we first describe an algorithm for multiplying two posit numbers, even when the number of exponent bits is zero. This configuration, scarcely tackled in literature, is particularly interesting because it allows the deployment of a fast sigmoid function. The proposed multiplication algorithm is then integrated as a template into the well-known FloPoCo framework. Synthesis results are shown to compare with the floating point multiplication offered by FloPoCo as well. Second, the performance of posits is studied in the scenario of Neural Networks in both training and inference stages. To the best of our knowledge, this is the first time that training is done with posit format, achieving promising results for a binary classification problem even with reduced posit configurations. In the inference stage, 8-bit posits are as good as floating point when dealing with the MNIST dataset, but lose some accuracy with CIFAR-10.", "Performing the inference step of deep learning in resource constrained environments, such as embedded devices, is challenging. Success requires optimization at both software and hardware levels. Low precision arithmetic and specifically low precision fixed-point number systems have become the standard for performing deep learning inference. However, representing non-uniform data and distributed parameters (e.g. weights) by using uniformly distributed fixed-point values is still a major drawback when using this number system. Recently, the posit number system was proposed, which represents numbers in a non-uniform manner. Therefore, in this paper we are motivated to explore using the posit number system to represent the weights of Deep Convolutional Neural Networks. However, we do not apply any quantization techniques and hence the network weights do not require re-training. 
The results of this exploration show that using the posit number system outperformed the fixed point number system in terms of accuracy and memory utilization.", "Reducing hardware overhead of neural networks for faster or lower power inference and training is an active area of research. Uniform quantization using integer multiply-add has been thoroughly investigated, which requires learning many quantization parameters, fine-tuning training or other prerequisites. Little effort is made to improve floating point relative to this baseline; it remains energy inefficient, and word size reduction yields drastic loss in needed dynamic range. We improve floating point to be more energy efficient than equivalent bit width integer hardware on a 28 nm ASIC process while retaining accuracy in 8 bits with a novel hybrid log multiply linear add, Kulisch accumulation and tapered encodings from Gustafson's posit format. With no network retraining, and drop-in replacement of all math and float32 parameters via round-to-nearest-even only, this open-sourced 8-bit log float is within 0.9 top-1 and 0.2 top-5 accuracy of the original float32 ResNet-50 CNN model on ImageNet. Unlike int8 quantization, it is still a general purpose floating point arithmetic, interpretable out-of-the-box. Our 8 38-bit log float multiply-add is synthesized and power profiled at 28 nm at 0.96x the power and 1.12x the area of 8 32-bit integer multiply-add. In 16 bits, our log float multiply-add is 0.59x the power and 0.68x the area of IEEE 754 float16 fused multiply-add, maintaining the same signficand precision and dynamic range, proving useful for training ASICs as well.", "" ] }
1907.13314
2965544063
Many evaluation methods have been used to assess the usefulness of Visual Analytics (VA) solutions. These methods stem from a variety of origins with different assumptions and goals, which cause confusion about their proofing capabilities. Moreover, the lack of discussion about the evaluation processes may limit our potential to develop new evaluation methods specialized for VA. In this paper, we present an analysis of evaluation methods that have been used to summatively evaluate VA solutions. We provide a survey and taxonomy of the evaluation methods that have appeared in the VAST literature in the past two years. We then analyze these methods in terms of validity and generalizability of their findings, as well as the feasibility of using them. We propose a new metric called summative quality to compare evaluation methods according to their ability to prove usefulness, and make recommendations for selecting evaluation methods based on their summative quality in the VA domain.
Multiple studies have surveyed existing evaluation practices. Lam et al. @cite_59 suggest that it is reasonable to generate a taxonomy of evaluation studies by defining scenarios of evaluation practices that are common in the literature. Their extensive survey is unique and provides many insights for researchers. Specifically, seven scenarios of evaluation practices are discussed along with the goals of each, with exemplar studies and methods used in each scenario. Isenberg et al. @cite_48 continue this effort by extending the number of surveyed studies and introducing an eighth scenario of evaluation practices. These studies helped us build the backbone of our taxonomy as explained in Section . The initial code to group evaluation methods in our survey was derived from Lam et al. and Isenberg et al. We then gradually modified the coding of evaluation methods according to the studies we surveyed. In contrast to the previous surveys' approach of grouping according to common evaluation practices, we focus on grouping evaluation methods based on the similarities in each method's (sub)activities, with the ultimate goal of analyzing the potential risks associated with them, rather than simply describing the existing evaluation practices.
{ "cite_N": [ "@cite_48", "@cite_59" ], "mid": [ "1992743299", "2058203255" ], "abstract": [ "We present an assessment of the state and historic development of evaluation practices as reported in papers published at the IEEE Visualization conference. Our goal is to reflect on a meta-level about evaluation in our community through a systematic understanding of the characteristics and goals of presented evaluations. For this purpose we conducted a systematic review of ten years of evaluations in the published papers using and extending a coding scheme previously established by [2012]. The results of our review include an overview of the most common evaluation goals in the community, how they evolved over time, and how they contrast or align to those of the IEEE Information Visualization conference. In particular, we found that evaluations specific to assessing resulting images and algorithm performance are the most prevalent (with consistently 80-90 of all papers since 1997). However, especially over the last six years there is a steady increase in evaluation methods that include participants, either by evaluating their performances and subjective feedback or by evaluating their work practices and their improved analysis and reasoning capabilities using visual tools. Up to 2010, this trend in the IEEE Visualization conference was much more pronounced than in the IEEE Information Visualization conference which only showed an increasing percentage of evaluation through user performance and experience testing. Since 2011, however, also papers in IEEE Information Visualization show such an increase of evaluations of work practices and analysis as well as reasoning using visual tools. Further, we found that generally the studies reporting requirements analyses and domain-specific work practices are too informally reported which hinders cross-comparison and lowers external validity.", "We take a new, scenario-based look at evaluation in information visualization. Our seven scenarios, evaluating visual data analysis and reasoning, evaluating user performance, evaluating user experience, evaluating environments and work practices, evaluating communication through visualization, evaluating visualization algorithms, and evaluating collaborative data analysis were derived through an extensive literature review of over 800 visualization publications. These scenarios distinguish different study goals and types of research questions and are illustrated through example studies. Through this broad survey and the distillation of these scenarios, we make two contributions. One, we encapsulate the current practices in the information visualization research community and, two, we provide a different approach to reaching decisions about what might be the most effective evaluation of a given information visualization. Scenarios can be used to choose appropriate research questions and goals and the provided examples can be consulted for guidance on how to design one's own study." ] }
1907.13314
2965544063
Many evaluation methods have been used to assess the usefulness of Visual Analytics (VA) solutions. These methods stem from a variety of origins with different assumptions and goals, which cause confusion about their proofing capabilities. Moreover, the lack of discussion about the evaluation processes may limit our potential to develop new evaluation methods specialized for VA. In this paper, we present an analysis of evaluation methods that have been used to summatively evaluate VA solutions. We provide a survey and taxonomy of the evaluation methods that have appeared in the VAST literature in the past two years. We then analyze these methods in terms of validity and generalizability of their findings, as well as the feasibility of using them. We propose a new metric called summative quality to compare evaluation methods according to their ability to prove usefulness, and make recommendations for selecting evaluation methods based on their summative quality in the VA domain.
An early study that introduces McGrath's work to the information visualization evaluation context was conducted by Carpendale @cite_67 , who provides a summary of different quantitative, qualitative and mixed methodologies along with a discussion of their limitations and challenges. A more recent work by Crisan and Elliott @cite_2 revisits quantitative, qualitative and mixed methodologies and provides guidance on when and how to correctly apply them. Instead of taking a general view of behavioral methodologies, we use a unified lens to identify limitations in evaluation methods used to prove usefulness, which may follow different methodologies but are indeed used with summative intentions. Similar to Crisan and Elliott, we use validity and generalizability as our analysis criteria and add the feasibility criterion to the analysis to determine the level of applicability of the methods.
{ "cite_N": [ "@cite_67", "@cite_2" ], "mid": [ "52809394", "2912962050" ], "abstract": [ "Information visualization research is becoming more established, and as a result, it is becoming increasingly important that research in this field is validated. With the general increase in information visualization research there has also been an increase, albeit disproportionately small, in the amount of empirical work directly focused on information visualization. The purpose of this chapter is to increase awareness of empirical research in general, of its relationship to information visualization in particular; to emphasize its importance; and to encourage thoughtful application of a greater variety of evaluative research methodologies in information visualization.", "Evaluative practices within vis research are not routinely compared to those of psychology, sociology, or other areas of empirical study, leaving vis vulnerable to the replicability crisis that has embroiled scientific research more generally. In this position paper, we compare contemporary vis evaluative practices against those in those other disciplines, and make concrete recommendations as to how vis evaluative practice can be improved through the use of quantitative, qualitative, and mixed research methods. We summarize our discussion and recommendations as a checklist, that we intend to be used a resource for vis researchers conducting evaluative studies, and for reviewers evaluating the merits of such studies." ] }
1907.13314
2965544063
Many evaluation methods have been used to assess the usefulness of Visual Analytics (VA) solutions. These methods stem from a variety of origins with different assumptions and goals, which cause confusion about their proofing capabilities. Moreover, the lack of discussion about the evaluation processes may limit our potential to develop new evaluation methods specialized for VA. In this paper, we present an analysis of evaluation methods that have been used to summatively evaluate VA solutions. We provide a survey and taxonomy of the evaluation methods that have appeared in the VAST literature in the past two years. We then analyze these methods in terms of validity and generalizability of their findings, as well as the feasibility of using them. We propose a new metric called summative quality to compare evaluation methods according to their ability to prove usefulness, and make recommendations for selecting evaluation methods based on their summative quality in the VA domain.
One argument made by Munzner @cite_15 was the necessity of summative evaluation during each stage of design studies to evaluate the outcome of that individual stage. Sedlmair et al. @cite_49 and McKenna et al. @cite_27 made similar arguments while describing the process of design studies. They make the case for considering non-quantitative methods, such as heuristic evaluation, for summative purposes. While Munzner's nested model @cite_15 essentially prescribes evaluation methods based on the development stage, we focus our analysis and prescription on the activities performed during evaluation, and judge the quality of evaluation findings (evidence of usefulness) based on the amount of risk introduced by the involved activities. Further, our approach adapts to different evaluation instances and prescribes a relatively smaller number of potential evaluation methods, compared to @cite_15 .
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_49" ], "mid": [ "2084776154", "2142493242", "1970569592" ], "abstract": [ "An important aspect in visualization design is the connection between what a designer does and the decisions the designer makes. Existing design process models, however, do not explicitly link back to models for visualization design decisions. We bridge this gap by introducing the design activity framework, a process model that explicitly connects to the nested model, a well-known visualization design decision model. The framework includes four overlapping activities that characterize the design process, with each activity explicating outcomes related to the nested model. Additionally, we describe and characterize a list of exemplar methods and how they overlap among these activities. The design activity framework is the result of reflective discussions from a collaboration on a visualization redesign project, the details of which we describe to ground the framework in a real-world design process. Lastly, from this redesign project we provide several research outcomes in the domain of cybersecurity, including an extended data abstraction and rich opportunities for future visualization research.", "We present a nested model for the visualization design and validation with four layers: characterize the task and data in the vocabulary of the problem domain, abstract into operations and data types, design visual encoding and interaction techniques, and create algorithms to execute techniques efficiently. The output from a level above is input to the level below, bringing attention to the design challenge that an upstream error inevitably cascades to all downstream levels. This model provides prescriptive guidance for determining appropriate evaluation approaches by identifying threats to validity unique to each level. We also provide three recommendations motivated by this model: authors should distinguish between these levels when claiming contributions at more than one of them, authors should explicitly state upstream assumptions at levels above the focus of a paper, and visualization venues should accept more papers on domain characterization.", "Design studies are an increasingly popular form of problem-driven visualization research, yet there is little guidance available about how to do them effectively. In this paper we reflect on our combined experience of conducting twenty-one design studies, as well as reading and reviewing many more, and on an extensive literature review of other field work methods and methodologies. Based on this foundation we provide definitions, propose a methodological framework, and provide practical guidance for conducting design studies. We define a design study as a project in which visualization researchers analyze a specific real-world problem faced by domain experts, design a visualization system that supports solving this problem, validate the design, and reflect about lessons learned in order to refine visualization design guidelines. We characterize two axes - a task clarity axis from fuzzy to crisp and an information location axis from the domain expert's head to the computer - and use these axes to reason about design study contributions, their suitability, and uniqueness from other approaches. The proposed methodological framework consists of 9 stages: learn, winnow, cast, discover, design, implement, deploy, reflect, and write. For each stage we provide practical guidance and outline potential pitfalls. 
We also conducted an extensive literature survey of related methodological approaches that involve a significant amount of qualitative field work, and compare design study methodology to that of ethnography, grounded theory, and action research." ] }
1907.13495
2965311640
We develop a novel hierarchy for zero-dimensional persistence pairs, i.e., connected components, which is capable of capturing more fine-grained spatial relations between persistence pairs. Our work is motivated by a lack of spatial relationships between features in persistence diagrams, leading to a limited expressive power. We build upon a recently-introduced hierarchy of pairs in persistence diagrams that augments the pairing stored in persistence diagrams with information about which components merge. Our proposed hierarchy captures differences in branching structure. Moreover, we show how to use our hierarchy to measure the spatial stability of a pairing and we define a rank function for persistence pairs and demonstrate different applications.
We refer the reader to Edelsbrunner and Harer @cite_8 for a detailed overview of persistence and related concepts. There are several related approaches for creating a hierarchy of persistence information. The authors of @cite_2 calculate a topological saliency of critical points in a scalar field based on their spatial arrangement. Critical points with low persistence that are isolated from other critical points have a higher saliency in this concept. These calculations yield saliency curves for different smoothing radii. While these curves permit a ranking of persistence pairs, they do not afford a description of their nesting behavior. Consequently, in contrast to our approach, the saliency approach is incapable of distinguishing some spatial rearrangements that leave persistence values and relative distances largely intact, such as moving all peaks towards each other. Bauer @cite_24 developed what we refer to in this paper as the regular persistence hierarchy. It is fully combinatorial and merely requires small changes to the pairing calculation of related critical points. This hierarchy was successfully used in determining cancellation sequences of critical points of surfaces. However, as shown in this paper, this hierarchy cannot distinguish between certain nesting relations.
{ "cite_N": [ "@cite_24", "@cite_2", "@cite_8" ], "mid": [ "2138017022", "", "2784708638" ], "abstract": [ "The goal of this thesis is to bring together two different theories about critical points of a scalar function and their relation to topology: Discrete Morse theory and Persistent homology. While the goals and fundamental techniques are different, there are certain themes appearing in both theories that closely resemble each other. In certain cases, the two threads can be joined, leading to new insights beyond the classical realm of one particular theory.Discrete Morse theory provides combinatorial equivalents of several core concepts of classical Morse theory, such as discrete Morse functions, discrete gradient vector fields, critical points, and a cancelation theorem for the elimination of critical points of a vector field. Because of its simplicity, it not only maintains the intuition of the classical theory but allows to surpass it in a certain sense by providing explicit and canonical constructions that would become quite complicated in the smooth setting.Persistent homology quantifies topological features of a function. It defines the birth and death of homology classes at critical points, identifies pairs of these (persistence pairs), and provides a quantitative notion of their stability (persistence).Whereas (discrete) Morse theory makes statements about the homotopy type of the sublevel sets of a function, persistence is concerned with their homology. While homology is an invariant of homotopy equivalences, the converse is not true: not every map inducing an isomorphism in homology is a homotopy equivalence. In this thesis we establish a connection between both theories and use this combination to solve problems that are not easily accessibly by any single theory alone. In particular, we solve the problem of minimizing the number of critical points of a function on a surface within a certain tolerance from a given input function.", "", "Combining concepts from topology and algorithms, this book delivers what its title promises: an introduction to the field of computational topology. Starting with motivating problems in both mathematics and computer science and building up from classic topics in geometric and algebraic topology, the third part of the text advances to persistent homology. This point of view is critically important in turning a mostly theoretical field of mathematics into one that is relevant to a multitude of disciplines in the sciences and engineering. The main approach is the discovery of topology through algorithms. The book is ideal for teaching a graduate or advanced undergraduate course in computational topology, as it develops all the background of both the mathematical and algorithmic aspects of the subject from first principles. Thus the text could serve equally well in a course taught in a mathematics department or computer science department." ] }
1907.13368
2965737826
The digital retina in smart cities is to select what the City Eye tells the City Brain, and convert the acquired visual data from front-end visual sensors to features in an intelligent sensing manner. By deploying deep learning and/or handcrafted models in front-end devices, the compact features can be extracted and subsequently delivered to back-end cloud for search and advanced analytics. In this context, we propose a model generation, utilization, and communication paradigm, aiming to address a set of unique challenges for better artificial intelligence services in smart cities. In particular, we present an integrated multiple deep learning models reuse and prediction strategy, which greatly increases the feasibility of the digital retina in processing and analyzing the large-scale visual data in smart cities. The promise of the proposed paradigm is demonstrated through a set of experiments.
Deep neural network transmission aims to utilize and deliver the knowledge concentrated in the network model to facilitate different intelligent applications. In @cite_5 , model compression is formulated from the perspective of transmission. As such, the redundancy among different models can be further exploited to facilitate many applications in front-end visual sensors. It is also shown that such a scheme can be elegantly combined with the existing compression methods to form an integrated compression and communication framework.
{ "cite_N": [ "@cite_5" ], "mid": [ "2896413341" ], "abstract": [ "With the advances of artificial intelligence, recent years have witnessed a gradual transition from the big data to the big knowledge. Based on the knowledge-powered deep learning models, the big data such as the vast text, images and videos can be efficiently analyzed. As such, in addition to data, the communication of knowledge implied in the deep learning models is also strongly desired. As a specific example regarding the concept of knowledge creation and communication in the context of Knowledge Centric Networking (KCN), we investigate the deep learning model compression and demonstrate its promise use through a set of experiments. In particular, towards future KCN, we introduce efficient transmission of deep learning models in terms of both single model compression and multiple model prediction. The necessity, importance and open problems regarding the standardization of deep learning models, which enables the interoperability with the standardized compact model representation bitstream syntax, are also discussed." ] }
1907.13196
2966684444
Reinforcement learning algorithms, though successful, tend to over-fit to training environments hampering their application to the real-world. This paper proposes WR @math L; a robust reinforcement learning algorithm with significant robust performance on low and high-dimensional control tasks. Our method formalises robust reinforcement learning as a novel min-max game with a Wasserstein constraint for a correct and convergent solver. Apart from the formulation, we also propose an efficient and scalable solver following a novel zero-order optimisation method that we believe can be useful to numerical optimisation in general. We contribute both theoretically and empirically. On the theory side, we prove that WR @math L converges to a stationary point in the general setting of continuous state and action spaces. Empirically, we demonstrate significant gains compared to standard and robust state-of-the-art algorithms on high-dimensional MuJuCo environments.
There is a long-standing thread of research on robustness in the classical control community, and the literature in this area is vast, with the @math method being a standard approach (see, e.g., doyle2013feedback). This approach was introduced into reinforcement learning by @cite_7 . In that paper, a continuous-time reinforcement learning setting was studied for which a max-min problem was formulated involving a modified value function, the optimal solutions of which can be determined by solving the Hamilton-Jacobi-Isaacs (HJI) equation.
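To make the shape of this formulation concrete, the following is a schematic max-min objective written in our own notation; it is an illustrative sketch rather than the exact value function of @cite_7 , and the disturbance penalty weight $\eta$ is an assumed symbol:
\[
V(x_0) \;=\; \max_{u(\cdot)}\;\min_{w(\cdot)} \int_0^{\infty} \Big( r\big(x(t),u(t)\big) + \eta\,\lVert w(t)\rVert^2 \Big)\,dt,
\qquad \dot{x} = f(x,u,w),
\]
where $u$ is the control, $w$ is the adversarial disturbance, and the penalty $\eta\,\lVert w\rVert^2$ limits the power the adversary may exert. The corresponding stationary HJI equation characterises the optimal pair:
\[
0 \;=\; \max_{u}\;\min_{w}\Big[\, r(x,u) + \eta\,\lVert w\rVert^2 + \nabla V(x)^{\top} f(x,u,w) \,\Big].
\]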
{ "cite_N": [ "@cite_7" ], "mid": [ "2105078254" ], "abstract": [ "This letter proposes a new reinforcement learning (RL) paradigm that explicitly takes into account input disturbance as well as modeling errors. The use of environmental models in RL is quite popular for both offline learning using simulations and for online action planning. However, the difference between the model and the real environment can lead to unpredictable, and often unwanted, results. Based on the theory of H∞ control, we consider a differential game in which a \"disturbing\" agent tries to make the worst possible disturbance while a \"control\" agent tries to make the best control input. The problem is formulated as finding a min-max solution of a value function that takes into account the amount of the reward and the norm of the disturbance. We derive online learning algorithms for estimating the value function and for calculating the worst disturbance and the best control in reference to the value function. We tested the paradigm, which we call robust reinforcement learning (RRL), on the control task of an inverted pendulum. In the linear domain, the policy and the value function learned by online algorithms coincided with those derived analytically by the linear H∞ control theory. For a fully nonlinear swing-up task, RRL achieved robust performance with changes in the pendulum weight and friction, while a standard reinforcement learning algorithm could not deal with these changes. We also applied RRL to the cart-pole swing-up task, and a robust swing-up policy was acquired." ] }
1907.13196
2966684444
Reinforcement learning algorithms, though successful, tend to over-fit to training environments hampering their application to the real-world. This paper proposes WR @math L; a robust reinforcement learning algorithm with significant robust performance on low and high-dimensional control tasks. Our method formalises robust reinforcement learning as a novel min-max game with a Wasserstein constraint for a correct and convergent solver. Apart from the formulation, we also propose an efficient and scalable solver following a novel zero-order optimisation method that we believe can be useful to numerical optimisation in general. We contribute both theoretically and empirically. On the theory side, we prove that WR @math L converges to a stationary point in the general setting of continuous state and action spaces. Empirically, we demonstrate significant gains compared to standard and robust state-of-the-art algorithms on high-dimensional MuJuCo environments.
The CVaR criterion is also adopted in @cite_13 , in which, rather than sampling trajectories and finding a quantile in terms of performance, two policies are trained simultaneously: a "protagonist", which aims to optimise performance, and an adversary, which aims to disrupt the protagonist. The protagonist and adversary train alternately, with one being fixed whilst the other adapts. The action space of the adversary, in the tests documented in the paper, consists of forces on the entities (InvertedPendulum, HalfCheetah, Swimmer, Hopper, Walker2D) that aim to destabilise them. We made comparisons against this algorithm in our experiments.
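As a rough illustration of this alternating scheme, the sketch below is written in Python under our own assumptions (a gym-like reset()/step() interface, act()/update() methods on both policies, and an additive combination of the two actions); it is not the implementation of @cite_13 .

# Minimal sketch of alternating protagonist/adversary training in a zero-sum game.
def rollout(env, protagonist, adversary, horizon=200):
    """Collect one trajectory in which both policies act at every step."""
    obs = env.reset()
    trajectory = []
    for _ in range(horizon):
        a_pro = protagonist.act(obs)
        a_adv = adversary.act(obs)              # destabilising disturbance
        obs, reward, done, _ = env.step(a_pro + a_adv)
        trajectory.append((obs, a_pro, a_adv, reward))
        if done:
            break
    return trajectory

def train_adversarial(env, protagonist, adversary, n_iters=100, n_traj=10):
    for _ in range(n_iters):
        # Phase 1: freeze the adversary; the protagonist maximises the return.
        batch = [rollout(env, protagonist, adversary) for _ in range(n_traj)]
        protagonist.update(batch, maximise=True)
        # Phase 2: freeze the protagonist; the adversary minimises the same return.
        batch = [rollout(env, protagonist, adversary) for _ in range(n_traj)]
        adversary.update(batch, maximise=False)
    return protagonist, adversary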
{ "cite_N": [ "@cite_13" ], "mid": [ "2602963933" ], "abstract": [ "Deep neural networks coupled with fast simulation and improved computation have led to recent successes in the field of reinforcement learning (RL). However, most current RL-based approaches fail to generalize since: (a) the gap between simulation and real world is so large that policy-learning approaches fail to transfer; (b) even if policy learning is done in real world, the data scarcity leads to failed generalization from training to test scenarios (e.g., due to different friction or object masses). Inspired from H∞ control methods, we note that both modeling errors and differences in training and test scenarios can be viewed as extra forces disturbances in the system. This paper proposes the idea of robust adversarial reinforcement learning (RARL), where we train an agent to operate in the presence of a destabilizing adversary that applies disturbance forces to the system. The jointly trained adversary is reinforced - that is, it learns an optimal destabilization policy. We formulate the policy learning as a zero-sum, minimax objective function. Extensive experiments in multiple environments (InvertedPendulum, HalfCheetah, Swimmer, Hopper, Walker2d and Ant) conclusively demonstrate that our method (a) improves training stability; (b) is robust to differences in training test conditions; and c) outperform the baseline even in the absence of the adversary." ] }
1907.13196
2966684444
Reinforcement learning algorithms, though successful, tend to over-fit to training environments hampering their application to the real-world. This paper proposes WR @math L; a robust reinforcement learning algorithm with significant robust performance on low and high-dimensional control tasks. Our method formalises robust reinforcement learning as a novel min-max game with a Wasserstein constraint for a correct and convergent solver. Apart from the formulation, we also propose an efficient and scalable solver following a novel zero-order optimisation method that we believe can be useful to numerical optimisation in general. We contribute both theoretically and empirically. On the theory side, we prove that WR @math L converges to a stationary point in the general setting of continuous state and action spaces. Empirically, we demonstrate significant gains compared to standard and robust state-of-the-art algorithms on high-dimensional MuJuCo environments.
More recently, @cite_2 studies robustness with respect to action perturbations. Two forms of perturbation are addressed: (i) the Probabilistic Action Robust MDP (PR-MDP), and (ii) the Noisy Action Robust MDP (NR-MDP). In PR-MDP, when an action is taken by an agent, with probability @math , a different, possibly adversarial, action is taken instead. In NR-MDP, when an action is taken, a perturbation is added to the action itself. Like @cite_11 and @cite_13 , the algorithm is suitable for use with deep neural networks, and the paper reports experiments on InvertedPendulum, Hopper, Walker2d and Humanoid. We tested against PR-MDP in some of our experiments, and found it to be lacking in robustness (see Section , Figure and Figure ).
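For illustration, the two perturbation models can be sketched as follows; this is a minimal Python sketch under our own assumptions (the function names and the way the perturbation is scaled are ours, not those of @cite_2 ).

import numpy as np

rng = np.random.default_rng(0)

def pr_mdp_execute(agent_action, adversary_action, alpha):
    """PR-MDP: with probability alpha the (possibly adversarial) alternative
    action is executed instead of the agent's action."""
    return adversary_action if rng.random() < alpha else agent_action

def nr_mdp_execute(agent_action, perturbation, alpha):
    """NR-MDP: a perturbation is added to the agent's action; here it is
    simply scaled by alpha before being added."""
    return agent_action + alpha * perturbation

# Example with a two-dimensional continuous action.
a_agent = np.array([0.5, -0.2])
a_adv = np.array([-1.0, 1.0])
executed_pr = pr_mdp_execute(a_agent, a_adv, alpha=0.1)
executed_nr = nr_mdp_execute(a_agent, a_adv, alpha=0.1)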
{ "cite_N": [ "@cite_11", "@cite_13", "@cite_2" ], "mid": [ "2964173023", "2602963933", "2952981100" ], "abstract": [ "Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods where the real-world target domain is approximated using a simulated source domain provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including to unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from the target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning.", "Deep neural networks coupled with fast simulation and improved computation have led to recent successes in the field of reinforcement learning (RL). However, most current RL-based approaches fail to generalize since: (a) the gap between simulation and real world is so large that policy-learning approaches fail to transfer; (b) even if policy learning is done in real world, the data scarcity leads to failed generalization from training to test scenarios (e.g., due to different friction or object masses). Inspired from H∞ control methods, we note that both modeling errors and differences in training and test scenarios can be viewed as extra forces disturbances in the system. This paper proposes the idea of robust adversarial reinforcement learning (RARL), where we train an agent to operate in the presence of a destabilizing adversary that applies disturbance forces to the system. The jointly trained adversary is reinforced - that is, it learns an optimal destabilization policy. We formulate the policy learning as a zero-sum, minimax objective function. Extensive experiments in multiple environments (InvertedPendulum, HalfCheetah, Swimmer, Hopper, Walker2d and Ant) conclusively demonstrate that our method (a) improves training stability; (b) is robust to differences in training test conditions; and c) outperform the baseline even in the absence of the adversary.", "" ] }
1907.13196
2966684444
Reinforcement learning algorithms, though successful, tend to over-fit to training environments hampering their application to the real-world. This paper proposes WR @math L; a robust reinforcement learning algorithm with significant robust performance on low and high-dimensional control tasks. Our method formalises robust reinforcement learning as a novel min-max game with a Wasserstein constraint for a correct and convergent solver. Apart from the formulation, we also propose an efficient and scalable solver following a novel zero-order optimisation method that we believe can be useful to numerical optimisation in general. We contribute both theoretically and empirically. On the theory side, we prove that WR @math L converges to a stationary point in the general setting of continuous state and action spaces. Empirically, we demonstrate significant gains compared to standard and robust state-of-the-art algorithms on high-dimensional MuJuCo environments.
In @cite_4 a non-stationary Markov Decision Process model is considered, where the dynamics can change from one time step to the next. The constraint is based on the Wasserstein distance: specifically, the Wasserstein distance between the dynamics at times @math and @math is bounded by @math , i.e., the dynamics is @math -Lipschitz with respect to time, for some constant @math . They approach the problem by treating nature as an adversary and implement a minimax algorithm. The basis of their algorithm is that, because the dynamics changes slowly (due to the Lipschitz constraint), a planning algorithm can project into the future the scope of possible dynamics and plan for the worst case. The resulting algorithm, known as RATS (Risk-Averse Tree-Search), is, as the name implies, a tree-search algorithm. It operates on a sequence of "snapshots" of the evolving MDP, which are instances of the MDP at points in time. The algorithm is tested on a small grid world, and does not appear to be readily extensible to the continuous state and action scenarios our algorithm addresses.
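As a sketch of this constraint in our own notation (the symbols below are assumptions rather than those of @cite_4 ), write $P_t(\cdot \mid s,a)$ for the transition kernel at decision epoch $t$; the regularity assumption then reads
\[
W_1\big(P_t(\cdot \mid s,a),\, P_{t+1}(\cdot \mid s,a)\big) \;\le\; L \qquad \text{for all } s,\, a,\, t,
\]
where $W_1$ is the 1-Wasserstein distance and $L$ is the Lipschitz constant. By the triangle inequality, a planner holding a snapshot $P_t$ can confine the dynamics $k$ steps ahead to a Wasserstein ball of radius $kL$ around $P_t$ and plan against the worst case within that ball.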
{ "cite_N": [ "@cite_4" ], "mid": [ "2942369252" ], "abstract": [ "This work tackles the problem of robust zero-shot planning in non-stationary stochastic environments. We study Markov Decision Processes (MDPs) evolving over time and consider Model-Based Reinforcement Learning algorithms in this setting. We make two hypotheses: 1) the environment evolves continuously and its evolution rate is bounded, 2) a current model is known at each decision epoch but not its evolution. Our contribution can be presented in four points. First, we define this specific class of MDPs that we call Non-Stationary MDPs (NSMDPs). We introduce the notion of regular evolution by making an hypothesis of Lipschitz-Continuity on the transition and reward functions w.r.t. time. Secondly, we consider a planning agent using the current model of the environment, but unaware of its future evolution. This leads us to consider a worst-case method where the environment is seen as an adversarial agent. Third, following this approach, we propose the Risk-Averse Tree-Search (RATS) algorithm. This is a zero-shot Model-Based method similar to Minimax search. Finally, we illustrate the benefits brought by RATS empirically and compare its performance with reference Model-Based algorithms." ] }
1907.13196
2966684444
Reinforcement learning algorithms, though successful, tend to over-fit to training environments hampering their application to the real-world. This paper proposes WR @math L; a robust reinforcement learning algorithm with significant robust performance on low and high-dimensional control tasks. Our method formalises robust reinforcement learning as a novel min-max game with a Wasserstein constraint for a correct and convergent solver. Apart from the formulation, we also propose an efficient and scalable solver following a novel zero-order optimisation method that we believe can be useful to numerical optimisation in general. We contribute both theoretically and empirically. On the theory side, we prove that WR @math L converges to a stationary point in the general setting of continuous state and action spaces. Empirically, we demonstrate significant gains compared to standard and robust state-of-the-art algorithms on high-dimensional MuJuCo environments.
To summarise, our paper uses the Wasserstein distance for addressing robustness, in common with @cite_4 , but is suited to applying deep neural networks to continuous state and action spaces. Our paper does not require the full dynamics to be available to it, merely a parameterisable dynamics. It competes well with the above papers, and operates well for high-dimensional problems, as evidenced by the experiments.
{ "cite_N": [ "@cite_4" ], "mid": [ "2942369252" ], "abstract": [ "This work tackles the problem of robust zero-shot planning in non-stationary stochastic environments. We study Markov Decision Processes (MDPs) evolving over time and consider Model-Based Reinforcement Learning algorithms in this setting. We make two hypotheses: 1) the environment evolves continuously and its evolution rate is bounded, 2) a current model is known at each decision epoch but not its evolution. Our contribution can be presented in four points. First, we define this specific class of MDPs that we call Non-Stationary MDPs (NSMDPs). We introduce the notion of regular evolution by making an hypothesis of Lipschitz-Continuity on the transition and reward functions w.r.t. time. Secondly, we consider a planning agent using the current model of the environment, but unaware of its future evolution. This leads us to consider a worst-case method where the environment is seen as an adversarial agent. Third, following this approach, we propose the Risk-Averse Tree-Search (RATS) algorithm. This is a zero-shot Model-Based method similar to Minimax search. Finally, we illustrate the benefits brought by RATS empirically and compare its performance with reference Model-Based algorithms." ] }
1907.13285
2964987353
Text-entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the recent focus of text-entry research has moved from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to lack of tactile feedback and degrade the usability of mobile devices due to their large portion on screens. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices and DND empowered by a deep neural architecture allows users to start typing from any position on the touch screens at any angle. To the best of our knowledge, the eyes-free ten-finger typing scenario of I-Keyboard which does not necessitate both a calibration step and a predefined region for typing is first explored in this work. For the purpose of training DND, we collected the largest user data in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND by conducting a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95 and 4.06 increases in typing speed (45.57 WPM) and accuracy (95.84 ), respectively over the baseline.
First of all, gesture-based text-entry allows drawing-like typing @cite_17 @cite_26 . Drawing-like typing removes the need for localizing each key position, and users can start drawing from any place on the screen in an eyes-free manner. Though gesture-based text-entry offers concise eyes-free typing interfaces, it requires gesture recognition algorithms, which struggle to achieve high accuracy @cite_4 . Gesture variability among users and similarities between gestures for each key cause ambiguity that increases the inherent difficulty of sequence classification @cite_21 . In addition, gesture-based text-entry takes longer than other text-entry methods since each key involves a gesture rather than a touch or a key press. The proposed I-Keyboard targets a more tractable decoding problem and utilizes deep learning techniques to successfully deal with the variability among users.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_21", "@cite_17" ], "mid": [ "", "1252865438", "2149645715", "1964873502" ], "abstract": [ "", "In this paper, we propose a complete gesture recognition framework based on maximum cosine similarity and fast nearest neighbor (NN) techniques, which offers high-recognition accuracy and great computational advantages for three fundamental problems of gesture recognition: 1) isolated recognition; 2) gesture verification; and 3) gesture spotting on continuous data streams. To support our arguments, we provide a thorough evaluation on three large publicly available databases, examining various scenarios, such as noisy environments, limited number of training examples, and time delay in system’s response. Our experimental results suggest that this simple NN-based approach is quite accurate for trajectory classification of digits and letters and could become a promising approach for implementations on low-power embedded systems.", "Recognizing human actions in 3-D video sequences is an important open problem that is currently at the heart of many research domains including surveillance, natural interfaces and rehabilitation. However, the design and development of models for action recognition that are both accurate and efficient is a challenging task due to the variability of the human pose, clothing and appearance. In this paper, we propose a new framework to extract a compact representation of a human action captured through a depth sensor, and enable accurate action recognition. The proposed solution develops on fitting a human skeleton model to acquired data so as to represent the 3-D coordinates of the joints and their change over time as a trajectory in a suitable action space. Thanks to such a 3-D joint-based framework, the proposed solution is capable to capture both the shape and the dynamics of the human body, simultaneously. The action recognition problem is then formulated as the problem of computing the similarity between the shape of trajectories in a Riemannian manifold. Classification using k-nearest neighbors is finally performed on this manifold taking advantage of Riemannian geometry in the open curve shape space. Experiments are carried out on four representative benchmarks to demonstrate the potential of the proposed solution in terms of accuracy latency for a low-latency action recognition. Comparative results with state-of-the-art methods are reported.", "KeyScretch is a text entry method for devices equipped with touch-screens, based on a menu-augmented soft keyboard. In these keyboards, a menu containing a small number of frequent characters is shown, while a key is pressed, allowing further character entry by menu selection. KeyScretch improves the previously studied menu-based methods by enabling the interpretation of compound strokes, which allow the input of text chunks longer than two characters. The performance of the method is analyzed on different kinds of touch-screens: First, we present a 25-session user study on a stylus-based device, showing that an instance of the method optimized for Italian can be learned in a reasonable time by the users and significantly outperforms the traditional method based on the tapping interaction. Then, we define and validate a model for predicting expert text entry rates on finger-based devices. 
The predicted rates for instances of KeyScretch optimized for different Western languages vary from about 44-50 words min on the Qwerty layout, enabling improvements in the range of 30-49 as compared with the traditional method." ] }
1907.13285
2964987353
Text-entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the recent focus of text-entry research has moved from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to lack of tactile feedback and degrade the usability of mobile devices due to their large portion on screens. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices and DND empowered by a deep neural architecture allows users to start typing from any position on the touch screens at any angle. To the best of our knowledge, the eyes-free ten-finger typing scenario of I-Keyboard which does not necessitate both a calibration step and a predefined region for typing is first explored in this work. For the purpose of training DND, we collected the largest user data in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND by conducting a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95 and 4.06 increases in typing speed (45.57 WPM) and accuracy (95.84 ), respectively over the baseline.
For the second point, optimized text-entry supplies accessible and comfortable typing interfaces by optimizing the size, shape, and position of keys @cite_8 @cite_5 . Current optimized text-entry methods require users to learn new typing interfaces @cite_1 because knowledge transfer seldom occurs for novel typing interfaces. Furthermore, the optimization process frequently demands a calibration step, which complicates the usage of optimized text-entry methods. The I-Keyboard proposed in this paper involves neither learning nor calibration, since it operates with ten fingers and its decoding algorithm does not need any prior knowledge.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_8" ], "mid": [ "", "2084486632", "2003785947" ], "abstract": [ "", "In this paper the authors propose IPPITSU, an eyes-free, Braille-based text entry method for touch panels. In IPPITSU the user inputs a Braille cell by selecting the raised dot in it. In order to select the dots the user touches the regions corresponding to the raised dots by sliding the finger on the panel with one continuous stroke. IPPITSU is expected to be used by visually-impaired people. IPPITSU is implemented as an IME for Android devices and is available on Google Play.", "Current soft QWERTY keyboards often consume a large portion of the screen space on portable touchscreens. This space consumption can diminish the overall user experi-ence on these devices. In this paper, we present the 1Line keyboard, a soft QWERTY keyboard that is 140 pixels tall (in landscape mode) and 40 of the height of the native iPad QWERTY keyboard. Our keyboard condenses the three rows of keys in the normal QWERTY layout into a single line with eight keys. The sizing of the eight keys is based on users' mental layout of a QWERTY keyboard on an iPad. The system disambiguates the word the user types based on the sequence of keys pressed. The user can use flick gestures to perform backspace and enter, and tap on the bezel below the keyboard to input a space. Through an evaluation, we show that participants are able to quickly learn how to use the 1Line keyboard and type at a rate of over 30 WPM after just five 20-minute typing sessions. Using a keystroke level model, we predict the peak expert text entry rate with the 1Line keyboard to be 66--68 WPM." ] }
1907.13285
2964987353
Text-entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the recent focus of text-entry research has moved from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to lack of tactile feedback and degrade the usability of mobile devices due to their large portion on screens. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices and DND empowered by a deep neural architecture allows users to start typing from any position on the touch screens at any angle. To the best of our knowledge, the eyes-free ten-finger typing scenario of I-Keyboard which does not necessitate both a calibration step and a predefined region for typing is first explored in this work. For the purpose of training DND, we collected the largest user data in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND by conducting a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95 and 4.06 increases in typing speed (45.57 WPM) and accuracy (95.84 ), respectively over the baseline.
Last but not least, imaginary keyboards, which are invisible to users, save invaluable screen resources and enable multi-tasking in the context of mobile computing @cite_27 . Imaginary keyboards reduce constraints during interaction, and users can freely and comfortably deliver their messages. In addition, imaginary keyboards align with the potent vision for user interfaces (UI) that has evolved from mechanical, graphical and gestural UI to imaginary UI, achieving tighter embodiment and more directness @cite_18 . Conventional works on imaginary UI, however, have only shown feasibility, without reaching a practical deployment level. Our I-Keyboard proposes a new concrete concept for imaginary UI and demonstrates a practical implementation of the concept deployed in a real-world environment.
{ "cite_N": [ "@cite_27", "@cite_18" ], "mid": [ "2536453717", "1913376911" ], "abstract": [ "The lack of dedicated multitasking interface features in smartphones has resulted in users attempting a sequential form of multitasking via frequent app switching. In addition to the obvious temporal cost, it requires physical and cognitive effort which increases multifold as the back and forth switching becomes more frequent. We propose porous interfaces, a paradigm that combines the concept of translucent windows with finger identification to support efficient multitasking on small screens. Porous interfaces enable partially transparent app windows overlaid on top of each other, each of them being accessible simultaneously using a different finger as input. We design porous interfaces to include a broad range of multitasking interactions with and between windows, while ensuring fidelity with the existing smartphone interactions. We develop an end-to-end smartphone interface that demonstrates porous interfaces. In a qualitative study, participants found porous interfaces intuitive, easy, and useful for frequent multitasking scenarios.", "There have been several recent examples of user interface techniques in which the user uses a computational device by physically manipulating the device. This paper proposes that these form an interesting new paradigm for user interface design, Embodied User Interfaces. This paper presents and defines this paradigm, and places it in the evolution of user interface paradigms leading towards the ideal of an invisible user interface. This paper outlines the space of design possibilities in this paradigm, presents a design framework for embodied user interface design, and articulates a set of design principles to guide design." ] }
1907.13285
2964987353
Text-entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the recent focus of text-entry research has moved from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to lack of tactile feedback and degrade the usability of mobile devices due to their large portion on screens. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices and DND empowered by a deep neural architecture allows users to start typing from any position on the touch screens at any angle. To the best of our knowledge, the eyes-free ten-finger typing scenario of I-Keyboard which does not necessitate both a calibration step and a predefined region for typing is first explored in this work. For the purpose of training DND, we collected the largest user data in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND by conducting a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95 and 4.06 increases in typing speed (45.57 WPM) and accuracy (95.84 ), respectively over the baseline.
Ten-finger typing is one of the most natural and common text-entry methods @cite_28 . Users can achieve typing speeds of 60-100 words per minute (WPM) with ten-finger typing on physical keyboards @cite_19 . Ten-finger typing experience stored in muscle memory and tactile feedback from mechanical keys enable eyes-free typing @cite_13 . However, transferring ten-finger typing knowledge from mechanical keyboards to soft keyboards generally does not occur successfully due to the lack of tactile feedback, though a few works have shown its viability in special use cases @cite_22 @cite_32 .
{ "cite_N": [ "@cite_22", "@cite_28", "@cite_32", "@cite_19", "@cite_13" ], "mid": [ "2171287854", "2164552991", "", "2795166434", "2149219788" ], "abstract": [ "This paper introduces a new text input device called the chording glove. The keys of a chord keyboard are mounted on the fingers of a glove. A chord can be made by pressing the fingers against any surface. Shift buttons placed on the index finger enable the glove to enter the full ASCII character set. The chording glove is designed as a text input device for wearable computers and virtual environments. An experiment was conducted to assess the performance of the glove. After an average of 80 min of a tutorial, ten subjects reached a continuous text input speed of 8.9 spl plusmn 1.4 words min, and after 10 1-hr sessions, they achieved 16.8 spl plusmn 2.5 words min.", "Previously we demonstrated that after 400 minutes of practice, ten novices averaged over 26 words per minute (wpm) for text entry on the Twiddler one-handed chording keyboard, outperforming the multitap mobile text entry standard. We present a study that examines expert chording performance. Our five participants achieved an average rate of 47 wpm after approximately 25 hours of practice in varying conditions. One subject achieved a rate of 67 wpm, equivalent to the typing rate of the last author who has been a Twiddler user for ten years. We analyze the effects of learning on various aspects of chording, provide evidence that lack of visual feedback does not hinder expert typing speed and examine the potential use of multicharacter chords (MCCs) to increase text entry speed.", "", "Touch typing on flat surfaces (e.g. interactive tabletop) is challenging due to lack of tactile feedback and hand drifting. In this paper, we present TOAST, an eyes-free keyboard technique for enabling efficient touch typing on touch-sensitive surfaces. We first formalized the problem of keyboard parameter (e.g. location and size) estimation based on users' typing data. Through a user study, we then examined users' eyes-free touch typing behavior on an interactive tabletop with only asterisk feedback. We fitted the keyboard model to the typing data, results suggested that the model parameters (keyboard location and size) changed not only between different users, but also within the same user along with time. Based on the results, we proposed a Markov-Bayesian algorithm for input prediction, which considers the relative location between successive touch points within each hand respectively. Simulation results showed that based on the pooled data from all users, this model improved the top-1 accuracy of the classical statistical decoding algorithm from 86.2 to 92.1 . In a second user study, we further improved TOAST with dynamical model parameter adaptation, and evaluated users' text entry performance with TOAST using realistic text entry tasks. Participants reached a pick-up speed of 41.4 WPM with a character-level error rate of 0.6 . And with less than 10 minutes of practice, they reached 44.6 WPM without sacrificing accuracy. Participants' subjective feedback also indicated that TOAST offered a natural and efficient typing experience.", "In a mobile environment, the amount of visual attention a person can devote to a computer is often limited. In addition to typing rapidly and accurately, it is important to be able to enter text with limited visual feedback. Previously we found that users can effectively type in such \"blind\" conditions with the Twiddler one-handed keyboard. 
In this paper we examine blind typing on mini-QWERTY keyboards and introduce a taxonomy for blind mobile text Input. We present a study in which eight expert mini-QWERTY typists participated in 5 typing sessions. Each session consists of three twenty minute typing conditions. In the first condition, the control or \"normal\" condition, the participant had full visual access to both the keyboard and the display. In the second condition, \"single blind\" we obstructed view of the keyboard. The final \"double blind\" condition also reduced visual feedback from the display. In contrast to our Twiddler work, we found that in the visually impaired conditions, typing rate and accuracy suffer, never reaching the non-blind rates. Across the blind mini-QWERTY conditions our participants averaged 45.8 wpm at 85.6 accuracy, while blind typing on the Twiddler averaged 47.3 wpm at 93.9 accuracy. We discuss these results in the context of our previous blind typing work and examine the trade-offs between the different keyboards for mobile and wearable computer use." ] }
1907.13285
2964987353
Text-entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the recent focus of text-entry research has moved from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to the lack of tactile feedback and degrade the usability of mobile devices because they occupy a large portion of the screen. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices, and the DND, empowered by a deep neural architecture, allows users to start typing from any position on the touch screen at any angle. To the best of our knowledge, the eyes-free ten-finger typing scenario of I-Keyboard, which requires neither a calibration step nor a predefined typing region, is explored for the first time in this work. For the purpose of training the DND, we collected the largest user dataset in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND by conducting a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95% and 4.06% increases in typing speed (45.57 WPM) and accuracy (95.84%), respectively, over the baseline.
A number of works have attempted to understand user behavior in ten-finger typing on soft keyboards. The major findings are as follows: (1) the typing speed with soft keyboards drops dramatically compared to the speed with physical keyboards @cite_12 ; (2) the distribution of touch points resembles the mechanical keyboard layout @cite_7 , though the distribution varies over time in shape and size @cite_19 ; (3) hand drift occurs over time and becomes stronger for invisible keyboards @cite_25 ; and (4) various factors, including finger volume, hand posture and mobility, cause tap variability among users @cite_20 .
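To make the notion of touch-point distributions concrete, the following minimal Python sketch fits a per-key touch model (mean and covariance) from labeled touch samples, the kind of keyboard-parameter estimation the studies above perform; the key names, coordinates and sample counts are synthetic assumptions, not data from the cited works.

import numpy as np

# Synthetic labeled touches for two keys; in the cited studies such samples come
# from users typing known text on a flat surface (coordinates here are arbitrary).
rng = np.random.default_rng(1)
touches = {
    'f': rng.normal(loc=(3.0, 1.0), scale=(0.25, 0.30), size=(200, 2)),
    'j': rng.normal(loc=(6.0, 1.0), scale=(0.25, 0.30), size=(200, 2)),
}

def fit_key_model(samples):
    # Per-key touch model: empirical mean (key centre) and covariance (spread and shape).
    samples = np.asarray(samples)
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

for key, pts in touches.items():
    mean, cov = fit_key_model(pts)
    print(key, np.round(mean, 2), np.round(np.sqrt(np.diag(cov)), 2))

Tracking how such fitted means and covariances move across sessions is one simple way to observe the hand drift and tap variability described above.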
{ "cite_N": [ "@cite_7", "@cite_19", "@cite_12", "@cite_25", "@cite_20" ], "mid": [ "1972648436", "2795166434", "2169710492", "2164067526", "2731419753" ], "abstract": [ "Touch screen surfaces large enough for ten-finger input have become increasingly popular, yet typing on touch screens pales in comparison to physical keyboards. We examine typing patterns that emerge when expert users of physical keyboards touch-type on a flat surface. Our aim is to inform future designs of touch screen keyboards, with the ultimate goal of supporting touch-typing with limited tactile feedback. To study the issues inherent to flat-glass typing, we asked 20 expert typists to enter text under three conditions: (1) with no visual keyboard and no feedback on input errors, then (2) with and (3) without a visual keyboard, but with some feedback. We analyzed touch contact points and hand contours, looking at attributes such as natural finger positioning, the spread of hits among individual keys, and the pattern of non-finger touches. We also show that expert typists exhibit spatially consistent key press distributions within an individual, which provides evidence that eyes-free touch-typing may be possible on touch surfaces and points to the role of personalization in such a solution. We conclude with implications for design.", "Touch typing on flat surfaces (e.g. interactive tabletop) is challenging due to lack of tactile feedback and hand drifting. In this paper, we present TOAST, an eyes-free keyboard technique for enabling efficient touch typing on touch-sensitive surfaces. We first formalized the problem of keyboard parameter (e.g. location and size) estimation based on users' typing data. Through a user study, we then examined users' eyes-free touch typing behavior on an interactive tabletop with only asterisk feedback. We fitted the keyboard model to the typing data, results suggested that the model parameters (keyboard location and size) changed not only between different users, but also within the same user along with time. Based on the results, we proposed a Markov-Bayesian algorithm for input prediction, which considers the relative location between successive touch points within each hand respectively. Simulation results showed that based on the pooled data from all users, this model improved the top-1 accuracy of the classical statistical decoding algorithm from 86.2 to 92.1 . In a second user study, we further improved TOAST with dynamical model parameter adaptation, and evaluated users' text entry performance with TOAST using realistic text entry tasks. Participants reached a pick-up speed of 41.4 WPM with a character-level error rate of 0.6 . And with less than 10 minutes of practice, they reached 44.6 WPM without sacrificing accuracy. Participants' subjective feedback also indicated that TOAST offered a natural and efficient typing experience.", "Text entry rates are explored for several variations of soft keyboards. We present a model to predict novice and expert entry rates and present an empirical test with 24 subjects. Six keyboards were examined: the Qwerty, ABC, Dvorak, Fitaly, JustType, and telephone. At 8-10 wpm, novice predictions are low for all layouts because the dominant factor is the visual scan time, rather than the movement time. Expert predictions are in the range of 22-56 wpm, although these were not tested empirically. 
In a quick, novice test with a representative phrase of text, subjects achieved rates of 20.2 wpm (Qwerty), 10.7 wpm (ABC), 8.5 wpm (Dvorak), 8.0 wpm (Fitaly), 7.0 wpm (JustType), and 8.0 wpm (telephone). The Qwerty rate of 20.2 wpm is consistent with observations in other studies. The relatively high rate for Qwerty suggests that there is skill transfer from users' familiarity with desktop computers to the stylus tapping task.", "On a touchscreen keyboard, it can be difficult to continuously type without frequently looking at the keys. One factor contributing to this difficulty is called hand drift, where a user's hands gradually misalign with the touchscreen keyboard due to limited tactile feedback. Although intuitive, there remains a lack of empirical data to describe the effect of hand drift. A formal understanding of it can provide insights for improving soft keyboards. To formally quantify the degree (magnitude and direction) of hand drift, we conducted a 3-session study with 13 participants. We measured hand drift with two typing interfaces: a visible conventional keyboard and an invisible adaptive keyboard. To expose drift patterns, both keyboards used relaxed letter disambiguation to allow for unconstrained movement. Findings show that hand drift occurred in both interfaces, at an average rate of 0.25mm min on the conventional keyboard and 1.32mm min on the adaptive keyboard. Participants were also more likely to drift up and or left instead of down or right.", "Eyes-free input is desirable for ubiquitous computing, since interacting with mobile and wearable devices often competes for visual attention with other devices and tasks. In this paper, we explore eyes-free typing on a touchpad using one thumb, wherein a user taps on an imaginary QWERTY keyboard while receiving text feedback on a separate screen. Our hypothesis is that users can transfer their typing ability obtained from visible keyboards to eyes-free use. We propose two statistical decoding algorithms to infer users’ eyes-free input: the absolute algorithm and the relative algorithm. The absolute algorithm infers user input based on the absolute position of touch endpoints, while the relative algorithm infers based on the vectors between successive touch endpoints. Evaluation results showed users could achieve satisfying performance with both algorithms. Text entry rate was 17-23 WPM (words per minute) depending on the algorithm used. In comparison, a baseline cursor-based text entry method yielded only 7.66 WPM. In conclusion, our research demonstrates for the first time the feasibility of thumb-based eyes-free typing, which provides a new possibility for text entry on ubiquitous computing platforms such as smart TVs and HMDs." ] }
1907.13285
2964987353
Text-entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the recent focus of text-entry research has moved from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to the lack of tactile feedback and degrade the usability of mobile devices because they occupy a large portion of the screen. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices, and the DND, empowered by a deep neural architecture, allows users to start typing from any position on the touch screen at any angle. To the best of our knowledge, the eyes-free ten-finger typing scenario of I-Keyboard, which requires neither a calibration step nor a predefined typing region, is explored for the first time in this work. For the purpose of training the DND, we collected the largest user dataset in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND by conducting a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95% and 4.06% increases in typing speed (45.57 WPM) and accuracy (95.84%), respectively, over the baseline.
In summary, ten-finger eyes-free typing on virtual keyboards, which most naturally allows typing knowledge to be transferred directly from physical keyboards @cite_19 , is feasible according to previous research results, though a couple of obstacles need to be resolved. The proposed DND handles hand drift, tap variability and automatic calibration with a deep neural architecture to improve typing speed and to reduce the error rate.
{ "cite_N": [ "@cite_19" ], "mid": [ "2795166434" ], "abstract": [ "Touch typing on flat surfaces (e.g. interactive tabletop) is challenging due to lack of tactile feedback and hand drifting. In this paper, we present TOAST, an eyes-free keyboard technique for enabling efficient touch typing on touch-sensitive surfaces. We first formalized the problem of keyboard parameter (e.g. location and size) estimation based on users' typing data. Through a user study, we then examined users' eyes-free touch typing behavior on an interactive tabletop with only asterisk feedback. We fitted the keyboard model to the typing data, results suggested that the model parameters (keyboard location and size) changed not only between different users, but also within the same user along with time. Based on the results, we proposed a Markov-Bayesian algorithm for input prediction, which considers the relative location between successive touch points within each hand respectively. Simulation results showed that based on the pooled data from all users, this model improved the top-1 accuracy of the classical statistical decoding algorithm from 86.2 to 92.1 . In a second user study, we further improved TOAST with dynamical model parameter adaptation, and evaluated users' text entry performance with TOAST using realistic text entry tasks. Participants reached a pick-up speed of 41.4 WPM with a character-level error rate of 0.6 . And with less than 10 minutes of practice, they reached 44.6 WPM without sacrificing accuracy. Participants' subjective feedback also indicated that TOAST offered a natural and efficient typing experience." ] }
1907.13285
2964987353
Text-entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the recent focus of text-entry research has moved from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to the lack of tactile feedback and degrade the usability of mobile devices because they occupy a large portion of the screen. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices, and the DND, empowered by a deep neural architecture, allows users to start typing from any position on the touch screen at any angle. To the best of our knowledge, the eyes-free ten-finger typing scenario of I-Keyboard, which requires neither a calibration step nor a predefined typing region, is explored for the first time in this work. For the purpose of training the DND, we collected the largest user dataset in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND by conducting a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95% and 4.06% increases in typing speed (45.57 WPM) and accuracy (95.84%), respectively, over the baseline.
Classical statistical decoding algorithms translate user inputs (keystrokes) into characters or words using probabilistic models. These statistical decoding algorithms have proved their effectiveness in a few controlled environments @cite_24 . The goal of statistical decoding is to find the sequence of characters that maximizes the joint probability of the given user input sequence. Mathematically, the user typing pattern for the decoding process is formulated as a joint probability over the keystroke sequence, where @math 's are the positions of the keystrokes, @math , @math 's are the characters and @math is the length of the sequence. Since the complexity of modeling the joint probability becomes intractable as the sequence length increases, the independence assumption is employed in most cases. Under the independence assumption, the joint probability factorizes into per-keystroke terms. The probability @math is approximated by a Gaussian distribution with a Markov-Bayesian algorithm @cite_19 or by a bivariate Gaussian distribution @cite_3 in conventional approaches. In addition, the probability is modelled separately for the left and right hands.
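As an illustration of the decoding formulation above, the following minimal Python sketch decodes touch points into characters under the independence assumption, scoring each keystroke against a per-key bivariate Gaussian touch model; the key centres, the shared covariance and the example touch coordinates are illustrative assumptions, not parameters from the cited works.

import numpy as np

# Hypothetical key centres on a QWERTY-like layout (arbitrary units); in a real
# decoder these would be fitted from user typing data rather than hard-coded.
KEY_CENTERS = {
    'q': (0.0, 0.0), 'w': (1.0, 0.0), 'e': (2.0, 0.0),
    'a': (0.3, 1.0), 's': (1.3, 1.0), 'd': (2.3, 1.0),
}
SIGMA = np.array([[0.20, 0.0], [0.0, 0.25]])  # assumed shared touch-noise covariance

def log_gaussian(point, mean, cov):
    # Log density of a bivariate Gaussian touch model p(touch | key).
    diff = np.asarray(point, dtype=float) - np.asarray(mean, dtype=float)
    inv = np.linalg.inv(cov)
    return -0.5 * (diff @ inv @ diff + np.log(np.linalg.det(cov)) + 2.0 * np.log(2.0 * np.pi))

def decode(touch_points):
    # Independence assumption: each keystroke is decoded on its own as the key
    # whose Gaussian assigns the highest likelihood to the observed position.
    out = []
    for p in touch_points:
        scores = {c: log_gaussian(p, mu, SIGMA) for c, mu in KEY_CENTERS.items()}
        out.append(max(scores, key=scores.get))
    return ''.join(out)

print(decode([(0.1, 0.1), (1.4, 0.9), (2.1, -0.1)]))  # -> 'qse' for these toy points

A Markov-Bayesian variant would additionally condition each keystroke on the relative position of the previous touch point, rather than scoring keystrokes fully independently.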
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_3" ], "mid": [ "2618581749", "2795166434", "2795433822" ], "abstract": [ "Abstract Typing on tiny QWERTY keyboards on smartwatches is considered challenging or even impractical due to the limited screen space. In this paper, we describe three user studies undertaken to investigate users’ typing abilities and preferences on tiny QWERTY keyboards. The first two studies, using a smartphone as a substitute for a smartwatch, tested five different keyboard sizes (2, 2.5, 3, 3.5 and 4 cm). Study 1 collected typing data from participants using keyboards and given asterisk feedback. We analyzed both the distribution of touch points (e.g., the systematic offset and shape of the distribution) and the effect of keyboard size. Study 2 adopted a Bayesian algorithm based on a touch model derived from Study 1 and a unigram word language model to perform input prediction. We found that on the smart keyboard, participants could type between 26.8 and 33.6 words per minute (WPM) across the five keyboard sizes with an uncorrected character error rate ranging from 0.4 to 1.9 . Participants’ subjective feedback indicated that they felt most comfortable with keyboards larger than 2.5 cm. Study 3 replicated the 3.0 and 3.5 cm keyboard tests on a real smartwatch and verified that in terms of text entry speed, error rate and user preference, there was no significant difference between the results measured on a smartphone and that on a smartwatch with same sized keys. This study result indicated that the results of Study 1 and 2 are applicable to smartwatch devices. Finally, we conducted a simulation to investigate the performance of different touch language models based on our collected data. The results showed that using either a bigram language model or a detailed touch model can effectively correct imprecision in users’ input. Our results suggest that achieving satisfactory levels of text input on tiny QWERTY keyboards is possible.", "Touch typing on flat surfaces (e.g. interactive tabletop) is challenging due to lack of tactile feedback and hand drifting. In this paper, we present TOAST, an eyes-free keyboard technique for enabling efficient touch typing on touch-sensitive surfaces. We first formalized the problem of keyboard parameter (e.g. location and size) estimation based on users' typing data. Through a user study, we then examined users' eyes-free touch typing behavior on an interactive tabletop with only asterisk feedback. We fitted the keyboard model to the typing data, results suggested that the model parameters (keyboard location and size) changed not only between different users, but also within the same user along with time. Based on the results, we proposed a Markov-Bayesian algorithm for input prediction, which considers the relative location between successive touch points within each hand respectively. Simulation results showed that based on the pooled data from all users, this model improved the top-1 accuracy of the classical statistical decoding algorithm from 86.2 to 92.1 . In a second user study, we further improved TOAST with dynamical model parameter adaptation, and evaluated users' text entry performance with TOAST using realistic text entry tasks. Participants reached a pick-up speed of 41.4 WPM with a character-level error rate of 0.6 . And with less than 10 minutes of practice, they reached 44.6 WPM without sacrificing accuracy. 
Participants' subjective feedback also indicated that TOAST offered a natural and efficient typing experience.", "A virtual keyboard takes a large portion of precious screen real estate. We have investigated whether an invisible keyboard is a feasible design option, how to support it, and how well it performs. Our study showed users could correctly recall relative key positions even when keys were invisible, although with greater absolute errors and overlaps between neighboring keys. Our research also showed adapting the spatial model in decoding improved the invisible keyboard performance. This method increased the input speed by 11.5 over simply hiding the keyboard and using the default spatial model. Our 3-day multi-session user study showed typing on an invisible keyboard could reach a practical level of performance after only a few sessions of practice: the input speed increased from 31.3 WPM to 37.9 WPM after 20 - 25 minutes practice on each day in 3 days, approaching that of a regular visible keyboard (41.6 WPM). Overall, our investigation shows an invisible keyboard with adapted spatial model is a practical and promising interface option for the mobile text entry systems." ] }
1907.13285
2964987353
Text-entry aims to provide an effective and efficient pathway for humans to deliver their messages to computers. With the advent of mobile computing, the recent focus of text-entry research has moved from physical keyboards to soft keyboards. Current soft keyboards, however, increase the typo rate due to the lack of tactile feedback and degrade the usability of mobile devices because they occupy a large portion of the screen. To tackle these limitations, we propose a fully imaginary keyboard (I-Keyboard) with a deep neural decoder (DND). The invisibility of I-Keyboard maximizes the usability of mobile devices, and the DND, empowered by a deep neural architecture, allows users to start typing from any position on the touch screen at any angle. To the best of our knowledge, the eyes-free ten-finger typing scenario of I-Keyboard, which requires neither a calibration step nor a predefined typing region, is explored for the first time in this work. For the purpose of training the DND, we collected the largest user dataset in the process of developing I-Keyboard. We verified the performance of the proposed I-Keyboard and DND by conducting a series of comprehensive simulations and experiments under various conditions. I-Keyboard showed 18.95% and 4.06% increases in typing speed (45.57 WPM) and accuracy (95.84%), respectively, over the baseline.
Conventional statistical decoding algorithms, however, cannot fully deal with the complex dynamics of user inputs. The independence assumption applied in these methods cannot capture either long-term or short-term dependencies among the keystrokes; it confines conventional approaches to considering only the current input. Furthermore, previous works have proposed fixed models for statistical decoding, so they cannot adaptively handle hand drift and tap variability, which vary over time. Though a few works have designed adaptive models, those models require either an additional calibration step or a controlled experimental environment @cite_19 .
{ "cite_N": [ "@cite_19" ], "mid": [ "2795166434" ], "abstract": [ "Touch typing on flat surfaces (e.g. interactive tabletop) is challenging due to lack of tactile feedback and hand drifting. In this paper, we present TOAST, an eyes-free keyboard technique for enabling efficient touch typing on touch-sensitive surfaces. We first formalized the problem of keyboard parameter (e.g. location and size) estimation based on users' typing data. Through a user study, we then examined users' eyes-free touch typing behavior on an interactive tabletop with only asterisk feedback. We fitted the keyboard model to the typing data, results suggested that the model parameters (keyboard location and size) changed not only between different users, but also within the same user along with time. Based on the results, we proposed a Markov-Bayesian algorithm for input prediction, which considers the relative location between successive touch points within each hand respectively. Simulation results showed that based on the pooled data from all users, this model improved the top-1 accuracy of the classical statistical decoding algorithm from 86.2 to 92.1 . In a second user study, we further improved TOAST with dynamical model parameter adaptation, and evaluated users' text entry performance with TOAST using realistic text entry tasks. Participants reached a pick-up speed of 41.4 WPM with a character-level error rate of 0.6 . And with less than 10 minutes of practice, they reached 44.6 WPM without sacrificing accuracy. Participants' subjective feedback also indicated that TOAST offered a natural and efficient typing experience." ] }
1907.13463
2966452173
Zeroth-order (gradient-free) methods are a class of powerful optimization tools for many machine learning problems because they only need function values (not gradients) in the optimization. In particular, zeroth-order methods are very suitable for many complex problems, such as black-box attacks and bandit feedback, whose explicit gradients are difficult or infeasible to obtain. Although many zeroth-order methods have been developed recently, these approaches still have two main drawbacks: 1) high function query complexity; 2) not being well suited for solving problems with complex penalties and constraints. To address these challenging drawbacks, in this paper, we propose a novel fast zeroth-order stochastic alternating direction method of multipliers (ADMM) method (ZO-SPIDER-ADMM) with lower function query complexity for solving nonconvex problems with multiple nonsmooth penalties. Moreover, we prove that our ZO-SPIDER-ADMM has the optimal function query complexity of @math for finding an @math -approximate local solution, where @math and @math denote the sample size and dimension of the data, respectively. In particular, ZO-SPIDER-ADMM improves the existing best nonconvex zeroth-order ADMM methods by a factor of @math . Moreover, we propose a fast online ZO-SPIDER-ADMM (ZOO-SPIDER-ADMM). Our theoretical analysis shows that ZOO-SPIDER-ADMM has a function query complexity of @math , which improves the existing best result by a factor of @math . Finally, we utilize a task of structured adversarial attack on black-box deep neural networks to demonstrate the efficiency of our algorithms.
ADMM @cite_35 @cite_22 is a popular optimization method for solving composite and constrained problems in machine learning. Due to its flexibility in splitting the objective function into a loss and a complex penalty, ADMM can relatively easily solve problems with complicated structured penalties, such as the graph-guided fused lasso @cite_7 , that are too complicated for other popular optimization methods such as proximal gradient methods @cite_14 . Thus, ADMM has been widely studied in recent years @cite_1 @cite_12 . For large-scale optimization, several stochastic ADMM methods @cite_3 @cite_15 @cite_10 @cite_26 have been proposed. In fact, the ADMM method is also successful in solving many nonconvex machine learning problems such as training neural networks @cite_21 . Thus, nonconvex ADMM and its stochastic variants have been developed in @cite_34 @cite_33 @cite_9 @cite_20 . At the same time, nonconvex stochastic ADMM methods @cite_0 @cite_6 have been studied.
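To illustrate the loss/penalty splitting that makes ADMM attractive, here is a minimal Python sketch of classical batch ADMM for the lasso, the simplest composite instance; it is not the graph-guided fused lasso or any of the stochastic or nonconvex variants cited above, and the problem sizes and parameters (lam, rho, iteration count) are arbitrary choices for the example.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    # Minimize 0.5*||A x - b||^2 + lam*||z||_1 subject to x - z = 0,
    # alternating a ridge-like x-update, a proximal z-update and a dual update.
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse every iteration
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b), 2))  # recovers a sparse vector close to x_true

Replacing the identity constraint x - z = 0 with a structured linear map (e.g., a graph-difference operator) gives the graph-guided fused lasso setting while keeping the same alternating update pattern.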
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_26", "@cite_22", "@cite_7", "@cite_33", "@cite_9", "@cite_21", "@cite_1", "@cite_20", "@cite_3", "@cite_6", "@cite_0", "@cite_15", "@cite_34", "@cite_10", "@cite_12" ], "mid": [ "2045079045", "2100556411", "2962945188", "2164278908", "2102241087", "2962853966", "2295652899", "2346438296", "2964345095", "2964271484", "2510516734", "2945566309", "2531036984", "38875623", "929087689", "2410606765", "2768931586" ], "abstract": [ "For variational problems of the form Infv∈V f(Av)+g(v) , we propose a dual method which decouples the difficulties relative to the functionals f and g from the possible ill-conditioning effects of the linear operator A. The approach is based on the use of an Augmented Lagrangian functional and leads to an efficient and simply implementable algorithm. We study also the finite element approximation of such problems, compatible with the use of our algorithm. The method is finally applied to solve several problems of continuum mechanics.", "We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.", "Recently, many variance reduced stochastic alternating direction method of multipliers (ADMM) methods (e.g. SAG-ADMM, SDCA-ADMM and SVRG-ADMM) have made exciting progress such as linear convergence rates for strongly convex problems. However, the best known convergence rate for general convex problems is O(1 T ) as opposed to O(1 T 2 ) of accelerated batch algorithms, where T is the number of iterations. Thus, there still remains a gap in convergence rates between existing stochastic ADMM and batch algorithms. To bridge this gap, we introduce the momentum acceleration trick for batch optimization into the stochastic variance reduced gradient based ADMM (SVRG-ADMM), which leads to an accelerated (ASVRG-ADMM) method. Then we design two different momentum term update rules for strongly convex and general convex cases. We prove that ASVRG-ADMM converges linearly for strongly convex problems. Besides having a low-iteration complexity as existing stochastic ADMM methods, ASVRG-ADMM improves the convergence rate on general convex problems from O(1 T ) to O(1 T 2 ). Our experimental results show the effectiveness of ASVRG-ADMM.", "Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. 
In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.", "Motivation: Many complex disease syndromes such as asthma consist of a large number of highly related, rather than independent, clinical phenotypes, raising a new technical challenge in identifying genetic variations associated simultaneously with correlated traits. Although a causal genetic variation may influence a group of highly correlated traits jointly, most of the previous association analyses considered each phenotype separately, or combined results from a set of single-phenotype analyses. Results: We propose a new statistical framework called graph-guided fused lasso to address this issue in a principled way. Our approach represents the dependency structure among the quantitative traits explicitly as a network, and leverages this trait network to encode structured regularizations in a multivariate regression model over the genotypes and traits, so that the genetic markers that jointly influence subgroups of highly correlated traits can be detected with high sensitivity and specificity. While most of the traditional methods examined each phenotype independently, our approach analyzes all of the traits jointly in a single statistical method to discover the genetic markers that perturb a subset of correlated triats jointly rather than a single trait. Using simulated datasets based on the HapMap consortium data and an asthma dataset, we compare the performance of our method with the single-marker analysis, and other sparse regression methods that do not use any structural information in the traits. Our results show that there is a significant advantage in detecting the true causal single nucleotide polymorphisms when we incorporate the correlation pattern in traits using our proposed methods. Availability: Software for GFlasso is available at http: www.sailing.cs.cmu.edu gflasso.html Contact:sssykim@cs.cmu.edu; ksohn@cs.cmu.edu;", "In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, ( (x_0, ,x_p,y) ), subject to coupled linear equality constraints. Our ADMM updates each of the primal variables (x_0, ,x_p,y ), followed by updating the dual variable. We separate the variable y from (x_i )’s as it has a special role in our analysis. 
The developed convergence guarantee covers a variety of nonconvex functions such as piecewise linear functions, ( _q ) quasi-norm, Schatten-q quasi-norm ( (0<q<1 )), minimax concave penalty (MCP), and smoothly clipped absolute deviation penalty. It also allows nonconvex constraints such as compact manifolds (e.g., spherical, Stiefel, and Grassman manifolds) and linear complementarity constraints. Also, the (x_0 )-block can be almost any lower semi-continuous function. By applying our analysis, we show, for the first time, that several ADMM algorithms applied to solve nonconvex models in statistical learning, optimization on manifold, and matrix decomposition are guaranteed to converge. Our results provide sufficient conditions for ADMM to converge on (convex or nonconvex) monotropic programs with three or more blocks, as they are special cases of our model. ADMM has been regarded as a variant to the augmented Lagrangian method (ALM). We present a simple example to illustrate how ADMM converges but ALM diverges with bounded penalty parameter ( ). Indicated by this example and other analysis in this paper, ADMM might be a better choice than ALM for some nonconvex nonsmooth problems, because ADMM is not only easier to implement, it is also more likely to converge for the concerned scenarios.", "The alternating direction method of multipliers (ADMM) is widely used to solve large-scale linearly constrained optimization problems, convex or nonconvex, in many engineering fields. However there is a general lack of theoretical understanding of the algorithm when the objective function is nonconvex. In this paper we analyze the convergence of the ADMM for solving certain nonconvex consensus and sharing problems. We show that the classical ADMM converges to the set of stationary solutions, provided that the penalty parameter in the augmented Lagrangian is chosen to be sufficiently large. For the sharing problems, we show that the ADMM is convergent regardless of the number of variable blocks. Our analysis does not impose any assumptions on the iterates generated by the algorithm and is broadly applicable to many ADMM variants involving proximal update rules and various flexible block selection rules.", "With the growing importance of large network models and enormous training datasets, GPUs have become increasingly necessary to train neural networks. This is largely because conventional optimization algorithms rely on stochastic gradient methods that don't scale well to large numbers of cores in a cluster setting. Furthermore, the convergence of all gradient methods, including batch methods, suffers from common problems like saturation effects, poor conditioning, and saddle points. This paper explores an unconventional training method that uses alternating direction methods and Bregman iteration to train networks without gradient descent steps. The proposed method reduces the network training problem to a sequence of minimization substeps that can each be solved globally in closed form. The proposed method is advantageous because it avoids many of the caveats that make gradient methods slow on highly nonconvex problems. The method exhibits strong scaling in the distributed setting, yielding linear speedups even when split over thousands of cores.", "We provide a new proof of the linear convergence of the alternating direction method of multipliers (ADMM) when one of the objective terms is strongly convex. 
Our proof is based on a framework for analyzing optimization algorithms introduced in (2014), reducing algorithm convergence to verifying the stability of a dynamical system. This approach generalizes a number of existing results and obviates any assumptions about specific choices of algorithm parameters. On a numerical example, we demonstrate that minimizing the derived bound on the convergence rate provides a practical approach to selecting algorithm parameters for particular ADMM instances. We complement our upper bound by constructing a nearly-matching lower bound on the worst-case rate of convergence.", "Nonconvex and nonsmooth optimization problems are frequently encountered in much of statistics, business, science and engineering, but they are not yet widely recognized as a technology in the sense of scalability. A reason for this relatively low degree of popularity is the lack of a well developed system of theory and algorithms to support the applications, as is the case for its convex counterpart. This paper aims to take one step in the direction of disciplined nonconvex and nonsmooth optimization. In particular, we consider in this paper some constrained nonconvex optimization models in block decision variables, with or without coupled affine constraints. In the absence of coupled constraints, we show a sublinear rate of convergence to an ( )-stationary solution in the form of variational inequality for a generalized conditional gradient method, where the convergence rate is dependent on the Holderian continuity of the gradient of the smooth part of the objective. For the model with coupled affine constraints, we introduce corresponding ( )-stationarity conditions, and apply two proximal-type variants of the ADMM to solve such a model, assuming the proximal ADMM updates can be implemented for all the block variables except for the last block, for which either a gradient step or a majorization–minimization step is implemented. We show an iteration complexity bound of (O(1 ^2) ) to reach an ( )-stationary solution for both algorithms. Moreover, we show that the same iteration complexity of a proximal BCD method follows immediately. Numerical results are provided to illustrate the efficacy of the proposed algorithms for tensor robust PCA and tensor sparse PCA problems.", "The Alternating Direction Method of Multipliers (ADMM) has received lots of attention recently due to the tremendous demand from large-scale and data-distributed machine learning applications. In this paper, we present a stochastic setting for optimization problems with non-smooth composite objective functions. To solve this problem, we propose a stochastic ADMM algorithm. Our algorithm applies to a more general class of convex and nonsmooth objective functions, beyond the smooth and separable least squares loss used in lasso. We also demonstrate the rates of convergence for our algorithm under various structural assumptions of the stochastic function: O(1 √t)) for convex functions and O(log t t) for strongly convex functions. Compared to previous literature, we establish the convergence rate of ADMM for convex problems in terms of both the objective value and the feasibility violation. 
A novel application named Graph-Guided SVM is proposed to demonstrate the usefulness of our algorithm.", "", "In the paper, we study the stochastic alternating direction method of multipliers (ADMM) for the nonconvex optimizations, and propose three classes of the nonconvex stochastic ADMM with variance reduction, based on different reduced variance stochastic gradients. Specifically, the first class called the nonconvex stochastic variance reduced gradient ADMM (SVRG-ADMM), uses a multi-stage scheme to progressively reduce the variance of stochastic gradients. The second is the nonconvex stochastic average gradient ADMM (SAG-ADMM), which additionally uses the old gradients estimated in the previous iteration. The third called SAGA-ADMM is an extension of the SAG-ADMM method. Moreover, under some mild conditions, we establish the iteration complexity bound of @math of the proposed methods to obtain an @math -stationary solution of the nonconvex optimizations. In particular, we provide a general framework to analyze the iteration complexity of these nonconvex stochastic ADMM methods with variance reduction. Finally, some numerical experiments demonstrate the effectiveness of our methods.", "We propose a new stochastic dual coordinate ascent technique that can be applied to a wide range of regularized learning problems. Our method is based on alternating direction method of multipliers (ADMM) to deal with complex regularization functions such as structured regularizations. Although the original ADMM is a batch method, the proposed method offers a stochastic update rule where each iteration requires only one or few sample observations. Moreover, our method can naturally afford mini-batch update and it gives speed up of convergence. We show that, under mild assumptions, our method converges exponentially. The numerical experiments show that our method actually performs efficiently.", "The alternating direction method with multipliers (ADMM) has been one of most powerful and successful methods for solving various composite problems. The convergence of the conventional ADMM (i.e., 2-block) for convex objective functions has been justified for a long time, and its convergence for nonconvex objective functions has, however, been established very recently. The multi-block ADMM, a natural extension of ADMM, is a widely used scheme and has also been found very useful in solving various nonconvex optimization problems. It is thus expected to establish convergence theory of the multi-block ADMM under nonconvex frameworks. In this paper we present a Bregman modification of 3-block ADMM and establish its convergence for a large family of nonconvex functions. We further extend the convergence results to the @math -block case ( @math ), which underlines the feasibility of multi-block ADMM applications in nonconvex settings. Finally, we present a simulation study and a real-world application to support the correctness of the obtained theoretical assertions.", "The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. Recently, stochastic ADMM has been integrated with variance reduction methods for stochastic gradient, leading to SAG-ADMM and SDCA-ADMM that have fast convergence rates and low iteration complexities. However, their space requirements can still be high. In this paper, we propose an integration of ADMM with the method of stochastic variance reduced gradient (SVRG). 
Unlike another recent integration attempt called SCAS-ADMM, the proposed algorithm retains the fast convergence benefits of SAG-ADMM and SDCA-ADMM, but is more advantageous in that its storage requirement is very low, even independent of the sample size n. Experimental results demonstrate that it is as fast as SAG-ADMM and SDCA-ADMM, much faster than SCAS-ADMM, and can be used on much bigger data sets.", "Alternating direction method of multipliers (ADMM) has received tremendous interest for solving numerous problems in machine learning, statistics and signal processing. However, it is known that the performance of ADMM and many of its variants is very sensitive to the penalty parameter of a quadratic penalty applied to the equality constraints. Although several approaches have been proposed for dynamically changing this parameter during the course of optimization, they do not yield theoretical improvement in the convergence rate and are not directly applicable to stochastic ADMM. In this paper, we develop a new ADMM and its linearized variant with a new adaptive scheme to update the penalty parameter. Our methods can be applied under both deterministic and stochastic optimization settings for structured non-smooth objective function. The novelty of the proposed scheme lies at that it is adaptive to a local sharpness property of the objective function, which marks the key difference from previous adaptive scheme that adjusts the penalty parameter per-iteration based on certain conditions on iterates. On theoretical side, given the local sharpness characterized by an exponent @math , we show that the proposed ADMM enjoys an improved iteration complexity of @math @math suppresses a logarithmic factor. in the deterministic setting and an iteration complexity of @math in the stochastic setting without smoothness and strong convexity assumptions. The complexity in either setting improves that of the standard ADMM which only uses a fixed penalty parameter. On the practical side, we demonstrate that the proposed algorithms converge comparably to, if not much faster than, ADMM with a fine-tuned fixed penalty parameter." ] }
1907.13463
2966452173
Zeroth-order (gradient-free) methods are a class of powerful optimization tools for many machine learning problems because they only need function values (not gradients) in the optimization. In particular, zeroth-order methods are very suitable for many complex problems, such as black-box attacks and bandit feedback, whose explicit gradients are difficult or infeasible to obtain. Although many zeroth-order methods have been developed recently, these approaches still have two main drawbacks: 1) high function query complexity; 2) not being well suited for solving problems with complex penalties and constraints. To address these challenging drawbacks, in this paper, we propose a novel fast zeroth-order stochastic alternating direction method of multipliers (ADMM) method (ZO-SPIDER-ADMM) with lower function query complexity for solving nonconvex problems with multiple nonsmooth penalties. Moreover, we prove that our ZO-SPIDER-ADMM has the optimal function query complexity of @math for finding an @math -approximate local solution, where @math and @math denote the sample size and dimension of the data, respectively. In particular, ZO-SPIDER-ADMM improves the existing best nonconvex zeroth-order ADMM methods by a factor of @math . Moreover, we propose a fast online ZO-SPIDER-ADMM (ZOO-SPIDER-ADMM). Our theoretical analysis shows that ZOO-SPIDER-ADMM has a function query complexity of @math , which improves the existing best result by a factor of @math . Finally, we utilize a task of structured adversarial attack on black-box deep neural networks to demonstrate the efficiency of our algorithms.
So far, the above ADMM methods need to repeatedly calculate gradients of the loss function over the iterations. However, in many machine learning problems, the gradients of the objective functions are difficult or infeasible to obtain. For example, in adversarial attacks on black-box DNNs @cite_4 @cite_23 , only evaluation values (i.e., function values) are provided. Thus, @cite_18 @cite_16 proposed zeroth-order online and stochastic ADMM methods for solving some convex problems. More recently, @cite_13 proposed the nonconvex ZO-SVRG-ADMM and ZO-SAGA-ADMM methods.
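The core trick in such zeroth-order methods is to estimate gradients from function values alone. The sketch below shows a generic two-point random-direction estimator plugged into plain gradient descent on a toy black-box objective; it is only meant to convey the idea, and the concrete estimators, variance-reduction schemes and ADMM integration of the cited ZO-ADMM methods differ from this simplification.

import numpy as np

def zo_gradient(f, x, mu=1e-3, num_dirs=20, rng=None):
    # Two-point random-direction gradient estimate built from function values only:
    #   g ~ (1/k) * sum_i  d * (f(x + mu*u_i) - f(x - mu*u_i)) / (2*mu) * u_i
    rng = rng or np.random.default_rng()
    d = x.size
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += d * (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_dirs

# Toy black-box objective: values can be queried, but we pretend no gradient is available.
f = lambda x: np.sum((x - 1.0) ** 2)

x = np.zeros(5)
rng = np.random.default_rng(0)
for _ in range(300):
    x -= 0.05 * zo_gradient(f, x, rng=rng)  # plain zeroth-order gradient descent
print(np.round(x, 2))  # should approach the minimizer at all ones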
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_23", "@cite_16", "@cite_13" ], "mid": [ "2963304555", "2746600820", "2963243330", "2772692493", "2946968605" ], "abstract": [ "", "Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern on the robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of a well-trained DNN by demonstrating the ability of generating barely noticeable (to both human and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack to DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack and importance sampling techniques to efficiently attack black-box models. By exploiting zeroth order optimization, improved attacks to the targeted DNN can be accomplished, sparing the need for training substitute models and avoiding the loss in attack transferability. Experimental results on MNIST, CIFAR10 and ImageNet show that the proposed ZOO attack is as effective as the state-of-the-art white-box attack (e.g., Carlini and Wagner's attack) and significantly outperforms existing black-box attacks via substitute models.", "As application demands for zeroth-order (gradient-free) optimization accelerate, the need for variance reduced and faster converging approaches is also intensifying. This paper addresses these challenges by presenting: a) a comprehensive theoretical analysis of variance reduced zeroth-order (ZO) optimization, b) a novel variance reduced ZO algorithm, called ZO-SVRG, and c) an experimental evaluation of our approach in the context of two compelling applications, black-box chemical material classification and generation of adversarial examples from black-box deep neural network models. Our theoretical analysis uncovers an essential difficulty in the analysis of ZO-SVRG: the unbiased assumption on gradient estimates no longer holds. We prove that compared to its first-order counterpart, ZO-SVRG with a two-point random gradient estimator suffers an additional error of order O(1 b), where b the mini-batch size. To mitigate this error, we propose two accelerated versions of ZO-SVRG utilizing variance reduced gradient estimators, which achieve the best rate known for ZO stochastic optimization (in terms of iterations). 
Our extensive experimental results show that our approaches outperform other state-of-the-art ZO algorithms, and strike a balance between the convergence rate and the function query complexity.", "Designing algorithms for an optimization model often amounts to maintaining a balance between the degree of information to request from the model on the one hand, and the computational speed to expect on the other hand. Naturally, the more information is available, the faster one can expect the algorithm to converge. The popular algorithm of ADMM demands that objective function is easy to optimize once the coupled constraints are shifted to the objective with multipliers. However, in many applications this assumption does not hold; instead, often only some noisy estimations of the gradient of the objective—or even only the objective itself—are available. This paper aims to bridge this gap. We present a suite of variants of the ADMM, where the trade-offs between the required information on the objective and the computational complexity are explicitly given. The new variants allow the method to be applicable on a much broader class of problems where only noisy estimations of the gradient or the function values are accessible, yet the flexibility is achieved without sacrificing the computational complexity bounds.", "Alternating direction method of multipliers (ADMM) is a popular optimization tool for the composite and constrained problems in machine learning. However, in many machine learning problems such as black-box attacks and bandit feedback, ADMM could fail because the explicit gradients of these problems are difficult or infeasible to obtain. Zeroth-order (gradient-free) methods can effectively solve these problems due to that the objective function values are only required in the optimization. Recently, though there exist a few zeroth-order ADMM methods, they build on the convexity of objective function. Clearly, these existing zeroth-order methods are limited in many applications. In the paper, thus, we propose a class of fast zeroth-order stochastic ADMM methods (i.e., ZO-SVRG-ADMM and ZO-SAGA-ADMM) for solving nonconvex problems with multiple nonsmooth penalties, based on the coordinate smoothing gradient estimator. Moreover, we prove that both the ZO-SVRG-ADMM and ZO-SAGA-ADMM have convergence rate of @math , where @math denotes the number of iterations. In particular, our methods not only reach the best convergence rate @math for the nonconvex optimization, but also are able to effectively solve many complex machine learning problems with multiple regularized penalties and constraints. Finally, we conduct the experiments of black-box binary classification and structured adversarial attack on black-box deep neural network to validate the efficiency of our algorithms." ] }
1907.13357
2965376428
We propose a new regularization technique, named Hybrid Spatio-Spectral Total Variation (HSSTV), for hyperspectral (HS) image denoising and compressed sensing. Regularization techniques based on total variation (TV) focus on local differences of an HS image to model its underlying smoothness and have been recognized as a popular approach to HS image restoration. However, existing TVs do not fully exploit the underlying spectral correlation in their designs and/or require a high computational cost in optimization. Our HSSTV is designed to simultaneously evaluate two types of local differences: direct local spatial differences and local spatio-spectral differences, in a unified manner with a balancing weight. This design resolves the aforementioned drawbacks of existing TVs. Then, we formulate HS image restoration as a constrained convex optimization problem involving HSSTV and develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) for solving it. In the experiments, we illustrate the advantages of HSSTV over several state-of-the-art methods.
Yuan proposed HTV @cite_13 for HS image denoising. HTV can be seen as a generalization of the standard color TV @cite_21 ; its formulation is built from @math and @math , the vertical and horizontal differences at the @math th pixel of the @math th band of an HS image, respectively. From this definition, one can see that HTV evaluates spatial piecewise smoothness but does not consider spectral correlation, resulting in spatial oversmoothing. This will be shown empirically in Sec.
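As a rough illustration of such a band-coupled spatial TV, the sketch below computes, for each pixel, the l2 norm of the vertical and horizontal differences taken across all bands and sums the result over pixels; this color-TV-style reading, the replicate boundary handling and the toy cube are assumptions made for the example and may differ in detail from the cited HTV definition.

import numpy as np

def band_coupled_spatial_tv(u):
    # u: HS cube with shape (bands, height, width).
    # Per pixel, take the l2 norm of the vertical and horizontal differences of all
    # bands jointly, then sum over pixels (replicate boundary handling).
    dv = np.diff(u, axis=1, append=u[:, -1:, :])
    dh = np.diff(u, axis=2, append=u[:, :, -1:])
    return np.sum(np.sqrt(np.sum(dv ** 2 + dh ** 2, axis=0)))

cube = np.zeros((4, 8, 8))
cube[:, :, 4:] = 1.0                   # a vertical edge shared by all bands
print(band_coupled_spatial_tv(cube))   # only the spatial edge contributes (16.0 here)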
{ "cite_N": [ "@cite_21", "@cite_13" ], "mid": [ "2014823423", "2039596145" ], "abstract": [ "We propose a regularization algorithm for color vectorial images which is fast, easy to code and mathematically well-posed. More precisely, the regularization model is based on the dual formulation of the vectorial Total Variation (VTV) norm and it may be regarded as the vectorial extension of the dual approach defined by Chambolle in [13] for gray-scale scalar images. The proposed model offers several advantages. First, it minimizes the exact VTV norm whereas standard approaches use a regularized norm. Then, the numerical scheme of minimization is straightforward to implement and finally, the number of iterations to reach the solution is low, which gives a fast regularization algorithm. Finally, and maybe more importantly, the proposed VTV minimization scheme can be easily extended to many standard applications. We apply this @math vectorial regularization algorithm to the following problems: color inverse scale space, color denoising with the chromaticity-brightness color representation, color image inpainting, color wavelet shrinkage, color image decomposition, color image deblurring, and color denoising on manifolds. Generally speaking, this VTV minimization scheme can be used in problems that required vector field (color, other feature vector) regularization while preserving discontinuities.", "The amount of noise included in a hyperspectral image limits its application and has a negative impact on hyperspectral image classification, unmixing, target detection, and so on. In hyperspectral images, because the noise intensity in different bands is different, to better suppress the noise in the high-noise-intensity bands and preserve the detailed information in the low-noise-intensity bands, the denoising strength should be adaptively adjusted with the noise intensity in the different bands. Meanwhile, in the same band, there exist different spatial property regions, such as homogeneous regions and edge or texture regions; to better reduce the noise in the homogeneous regions and preserve the edge and texture information, the denoising strength applied to pixels in different spatial property regions should also be different. Therefore, in this paper, we propose a hyperspectral image denoising algorithm employing a spectral-spatial adaptive total variation (TV) model, in which the spectral noise differences and spatial information differences are both considered in the process of noise reduction. To reduce the computational load in the denoising process, the split Bregman iteration algorithm is employed to optimize the spectral-spatial hyperspectral TV model and accelerate the speed of hyperspectral image denoising. A number of experiments illustrate that the proposed approach can satisfactorily realize the spectral-spatial adaptive mechanism in the denoising process, and superior denoising results are produced." ] }
1907.13357
2965376428
We propose a new regularization technique, named Hybrid Spatio-Spectral Total Variation (HSSTV), for hyperspectral (HS) image denoising and compressed sensing. Regularization techniques based on total variation (TV) focus on local differences of an HS image to model its underlying smoothness and have been recognized as a popular approach to HS image restoration. However, existing TVs do not fully exploit underlying spectral correlation in their designs and/or require a high computational cost in optimization. Our HSSTV is designed to simultaneously evaluate two types of local differences: direct local spatial differences and local spatio-spectral differences in a unified manner with a balancing weight. This design resolves the said drawbacks of existing TVs. Then, we formulate HS image restoration as a constrained convex optimization problem involving HSSTV and develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) for solving it. In the experiments, we illustrate the advantages of HSSTV over several state-of-the-art methods.
Addesso proposed to use CTV @cite_57 for HS image inpainting @cite_61 . CTV is defined by It evaluates spatial piecewise smoothness by using the @math norm. In addition, the method can also use the Schatten- @math norm in place of the @math norm. CTV can be seen as a generalization of HTV, and is equivalent to HTV when @math , @math and @math . In @cite_61 , the authors experimentally show that CTV with the @math norm achieves the best performance, which means that the limitation of CTV in HS image restoration is the same as that of HTV.
{ "cite_N": [ "@cite_57", "@cite_61" ], "mid": [ "1960698473", "2759219183" ], "abstract": [ "Even after two decades, the total variation (TV) remains one of the most popular regularizations for image processing problems and has sparked a tremendous amount of research, particularly on moving from scalar to vector-valued functions. In this paper, we consider the gradient of a color image as a three-dimensional matrix or tensor with dimensions corresponding to the spatial extent, the intensity differences between neighboring pixels, and the spectral channels. The smoothness of this tensor is then measured by taking different norms along the different dimensions. Depending on the types of these norms, one obtains very different properties of the regularization, leading to novel models for color images. We call this class of regularizations collaborative total variation (CTV). On the theoretical side, we characterize the dual norm, the subdifferential, and the proximal mapping of the proposed regularizers. We further prove, with the help of the generalized concept of singular vectors, that an $ ^ ...", "Inpainting in hyperspectral imagery is a challenging research area and several methods have been recently developed to deal with this kind of data. In this paper we address missing data restoration via a convex optimization technique with regularization term based on Collaborative Total Variation (CTV). In particular we evaluate the effectiveness of several instances of CTV in conjunction with different dimensionality reduction algorithms." ] }
1907.13357
2965376428
We propose a new regularization technique, named Hybrid Spatio-Spectral Total Variation (HSSTV), for hyperspectral (HS) image denoising and compressed sensing. Regularization techniques based on total variation (TV) focus on local differences of an HS image to model its underlying smoothness and have been recognized as a popular approach to HS image restoration. However, existing TVs do not fully exploit underlying spectral correlation in their designs and/or require a high computational cost in optimization. Our HSSTV is designed to simultaneously evaluate two types of local differences: direct local spatial differences and local spatio-spectral differences in a unified manner with a balancing weight. This design resolves the said drawbacks of existing TVs. Then, we formulate HS image restoration as a constrained convex optimization problem involving HSSTV and develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) for solving it. In the experiments, we illustrate the advantages of HSSTV over several state-of-the-art methods.
He proposed ASSTV @cite_58 for HS image denoising. ASSTV simultaneously evaluates direct spatial and spectral differences, and is defined by where @math , @math , and @math are vertical, horizontal, and spectral differences for the @math th pixel of an HS image, respectively, and @math , @math and @math are balancing parameters for each difference (Fig. , blue lines). Although these parameters play a very important role, their suitable values change with each HS image and noise intensity, which makes them very difficult to set.
{ "cite_N": [ "@cite_58" ], "mid": [ "2790528326" ], "abstract": [ "Hyperspectral images (HSIs) are usually contaminated by various kinds of noise, such as stripes, deadlines, impulse noise, Gaussian noise, and so on, which significantly limits their subsequent application. In this paper, we model the stripes, deadlines, and impulse noise as sparse noise, and propose a unified mixed Gaussian noise and sparse noise removal framework named spatial–spectral total variation regularized local low-rank matrix recovery (LLRSSTV). The HSI is first divided into local overlapping patches, and rank-constrained low-rank matrix recovery is adopted to effectively separate the low-rank clean HSI patches from the sparse noise. Differing from the previous low-rank-based HSI denoising approaches, which process all the patches individually, a global spatial–spectral total variation regularized image reconstruction strategy is utilized to ensure the global spatial–spectral smoothness of the reconstructed image from the low-rank patches. In return, the globally reconstructed HSI further promotes the separation of the local low-rank components from the sparse noise. An augmented Lagrange multiplier method is adopted to solve the proposed LLRSSTV model, which simultaneously explores both the local low-rank property and the global spatial–spectral smoothness of the HSI. Both simulated and real HSI experiments were conducted to illustrate the advantage of the proposed method in HSI denoising, from visual quantitative evaluations and time cost." ] }
1907.13432
2966528571
We propose two neural network based mixture models in this article. The proposed mixture models are explicit in nature. The explicit models have analytical forms with the advantages of computing likelihood and efficiency of generating samples. Computation of likelihood is an important aspect of our models. Expectation-maximization based algorithms are developed for learning parameters of the proposed models. We provide sufficient conditions to realize the expectation-maximization based learning. The main requirements are invertibility of neural networks that are used as generators and Jacobian computation for the functional form of the neural networks. The requirements are practically realized using a flow-based neural network. In our first mixture model, we use multiple flow-based neural networks as generators. Naturally, the model is complex. A single latent variable is used as the common input to all the neural networks. The second mixture model uses a single flow-based neural network as a generator to reduce complexity. The single generator has a latent variable input that follows a Gaussian mixture distribution. We demonstrate the efficiency of the proposed mixture models through extensive experiments for generating samples and maximum likelihood based classification.
While GANs have had high success in many applications, they are known to suffer from a mode dropping problem, where the generator of a GAN is unable to capture all modes of the underlying probability distribution of the data @cite_13 . To address diversity in data and model multiple modes in a distribution, variants of generative models have been developed and the usage of multiple generators has been considered. For instance, methods of minibatch discrimination @cite_27 and feature representation @cite_10 are used to construct new discriminators of GANs that encourage the GANs to generate samples with diversity. Multiple Wasserstein GANs @cite_18 are used in @cite_13 with appropriate mutual information based regularization to encourage the diversity of samples generated by different GANs. A mixture GAN approach is proposed in @cite_26 using multiple generators and a multi-classification solution to encourage diversity of samples. Multi-agent diverse GAN @cite_12 similarly employs @math generators, but uses a @math -class discriminator instead of a typical binary discriminator to increase the diversity of generated samples. These works perform implicit probability distribution modeling, and thus the prior distribution over generators cannot be inferred when multiple generators are used.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_26", "@cite_27", "@cite_10", "@cite_12" ], "mid": [ "2962698501", "2739748921", "2787223504", "2963373786", "2785967511", "2607448608" ], "abstract": [ "Real images often lie on a union of disjoint manifolds rather than one globally connected manifold, and this can cause several difficulties for the training of common Generative Adversarial Networks (GANs). In this work, we first show that single generator GANs are unable to correctly model a distribution supported on a disconnected manifold, and investigate how sample quality, mode collapse and local convergence are affected by this. Next, we show how using a collection of generators can address this problem, providing new insights into the success of such multi-generator GANs. Finally, we explain the serious issues caused by considering a fixed prior over the collection of generators and propose a novel approach for learning the prior and inferring the necessary number of generators without any supervision. Our proposed modifications can be applied on top of any other GAN model to enable learning of distributions supported on disconnected manifolds. We conduct several experiments to illustrate the aforementioned shortcoming of GANs, its consequences in practice, and the effectiveness of our proposed modifications in alleviating these issues.", "", "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. 
Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3 . We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.", "Despite of the success of Generative Adversarial Networks (GANs) for image generation tasks, the trade-off between image diversity and visual quality are an well-known issue. Conventional techniques achieve either visual quality or image diversity; the improvement in one side is often the result of sacrificing the degradation in the other side. In this paper, we aim to achieve both simultaneously by improving the stability of training GANs. A key idea of the proposed approach is to implicitly regularizing the discriminator using a representative feature. For that, this representative feature is extracted from the data distribution, and then transferred to the discriminator for enforcing slow updates of the gradient. Consequently, the entire training process is stabilized because the learning curve of discriminator varies slowly. Based on extensive evaluation, we demonstrate that our approach improves the visual quality and diversity of state-of-the art GANs.", "This paper describes an intuitive generalization to the Generative Adversarial Networks (GANs) to generate samples while capturing diverse modes of the true data distribution. Firstly, we propose a very simple and intuitive multi-agent GAN architecture that incorporates multiple generators capable of generating samples from high probability modes. Secondly, in order to enforce different generators to generate samples from diverse modes, we propose two extensions to the standard GAN objective function. (1) We augment the generator specific GAN objective function with a diversity enforcing term that encourage different generators to generate diverse samples using a user-defined similarity based function. (2) We modify the discriminator objective function where along with finding the real and fake samples, the discriminator has to predict the generator which generated the given fake sample. Intuitively, in order to succeed in this task, the discriminator must learn to push different generators towards different identifiable modes. Our framework is generalizable in the sense that it can be easily combined with other existing variants of GANs to produce diverse samples. Experimentally we show that our framework is able to produce high quality diverse samples for the challenging tasks such as image face generation and image-to-image translation. We also show that it is capable of learning a better feature representation in an unsupervised setting." ] }
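To make the multi-generator idea above concrete, here is a toy sampling sketch: a generator index is drawn from a prior over generators and a latent draw is then pushed through the selected generator. The lambda "generators" and the hand-fixed mixture weights are purely hypothetical stand-ins; in the implicit models cited above this prior is typically fixed rather than inferred, which is the limitation the paragraph points out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for K trained generators; in a real
# multi-generator GAN each of these would be a neural network.
generators = [
    lambda z: z + 3.0,    # covers one mode
    lambda z: z - 3.0,    # covers another mode
    lambda z: 0.5 * z,    # covers a third mode
]
prior = np.array([0.4, 0.4, 0.2])   # hand-fixed mixture weights over generators

def sample(n):
    """Draw n samples from the generator mixture: first pick a
    generator index from the prior, then push a latent draw
    through that generator."""
    ks = rng.choice(len(generators), size=n, p=prior)
    zs = rng.standard_normal(n)
    return np.array([generators[k](z) for k, z in zip(ks, zs)])

print(sample(5))
```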
1907.13432
2966528571
We propose two neural network based mixture models in this article. The proposed mixture models are explicit in nature. The explicit models have analytical forms with the advantages of computing likelihood and efficiency of generating samples. Computation of likelihood is an important aspect of our models. Expectation-maximization based algorithms are developed for learning parameters of the proposed models. We provide sufficient conditions to realize the expectation-maximization based learning. The main requirements are invertibility of neural networks that are used as generators and Jacobian computation of functional form of the neural networks. The requirements are practically realized using a flow-based neural network. In our first mixture model, we use multiple flow-based neural networks as generators. Naturally the model is complex. A single latent variable is used as the common input to all the neural networks. The second mixture model uses a single flow-based neural network as a generator to reduce complexity. The single generator has a latent variable input that follows a Gaussian mixture distribution. We demonstrate efficiency of proposed mixture models through extensive experiments for generating samples and maximum likelihood based classification.
Typically, for a GAN, the latent variable is assumed to follow a known and fixed distribution, e.g., Gaussian. The latent signal for a given data sample cannot be obtained since generators, which are usually based on neural networks, are non-invertible. The mapping from a data sample to its corresponding latent signal is therefore approximately estimated by neural networks in different ways. @cite_15 and @cite_33 propose to train a generative model and an inverse mapping (also a neural network) from the data sample to the latent signal simultaneously, using the adversarial training method of GANs. Alternatively, @cite_8 proposes to approximately minimize a Kullback-Leibler divergence to estimate the mapping from the data sample to the latent variable, which leads to a nontrivial probability density ratio estimation problem.
{ "cite_N": [ "@cite_15", "@cite_33", "@cite_8" ], "mid": [ "2963265008", "", "2963476931" ], "abstract": [ "The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing generators learn to \"linearize semantics\" in the latent space of such models. Intuitively, such latent spaces may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.", "", "Implicit probabilistic models are a flexible class of models defined by a simulation process for data. They form the basis for models which encompass our understanding of the physical word. Despite this fundamental nature, the use of implicit models remains limited due to challenge in positing complex latent structure in them, and the ability to inference in such models with large data sets. In this paper, we first introduce the hierarchical implicit models (HIMs). HIMs combine the idea of implicit densities with hierarchical Bayesian modeling thereby defining models via simulators of data with rich hidden structure. Next, we develop likelihood-free variational inference (LFVI), a scalable variational inference algorithm for HIMs. Key to LFVI is specifying a variational family that is also implicit. This matches the model's flexibility and allows for accurate approximation of the posterior. We demonstrate diverse applications: a large-scale physical simulator for predator-prey populations in ecology; a Bayesian generative adversarial network for discrete data; and a deep implicit model for symbol generation." ] }
1907.13418
2965317766
Deep learning (DL) has shown great potential in medical image enhancement problems, such as super-resolution or image synthesis. However, to date, little consideration has been given to uncertainty quantification over the output image. Here we introduce methods to characterise different components of uncertainty in such problems and demonstrate the ideas using diffusion MRI super-resolution. Specifically, we propose to account for @math uncertainty through a heteroscedastic noise model and for @math uncertainty through approximate Bayesian inference, and integrate the two to quantify @math uncertainty over the output image. Moreover, we introduce a method to propagate the predictive uncertainty on a multi-channelled image to derived scalar parameters, and separately quantify the effects of intrinsic and parameter uncertainty therein. The methods are evaluated for super-resolution of two different signal representations of diffusion MR images---DTIs and Mean Apparent Propagator MRI---and their derived quantities such as MD and FA, on multiple datasets of both healthy and pathological human brains. Results highlight three key benefits of uncertainty modelling for improving the safety of DL-based image enhancement systems. Firstly, incorporating uncertainty improves the predictive performance even when test data departs from training data. Secondly, the predictive uncertainty highly correlates with errors, and is therefore capable of detecting predictive "failures". Results demonstrate that such an uncertainty measure enables subject-specific and voxel-wise risk assessment of the output images. Thirdly, we show that the method for decomposing predictive uncertainty into its independent sources provides high-level "explanations" for the performance by quantifying how much uncertainty arises from the inherent difficulty of the task or the limited training examples.
However, within the context of medical image enhancement, these lines of research performed only limited validation of the quality and utility of uncertainty modelling. In this work, we formalise and extend the preliminary ideas in Tanno @cite_81 and provide a comprehensive set of experiments to evaluate the proposed uncertainty modelling techniques in a diverse set of datasets, which vary in demographics, scanner types, acquisition protocols or pathology. Moreover, with the exception of @cite_81 , none of the previous methods model different components of uncertainty, namely intrinsic and parameter uncertainty. Our method accounts for both, and provides conclusive evidence that this improves performance thanks to different regularisation effects. In addition, we propose a method to decompose predictive uncertainty over an arbitrary function of the output image (e.g. morphological measurements) into its sources, in order to provide a high-level explanation of model performance on the given input.
{ "cite_N": [ "@cite_81" ], "mid": [ "2610571781" ], "abstract": [ "In this work, we investigate the value of uncertainty modelling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to the state-of-the-art performance SR of diffusion MR brain images in terms of errors compared to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate through experiments on both healthy and pathological brains the potential utility of such an uncertainty measure in the risk assessment of the super-resolved images for subsequent clinical use." ] }
1907.13496
2965224253
Techniques from computational topology, in particular persistent homology, are becoming increasingly relevant for data analysis. Their stable metrics permit the use of many distance-based data analysis methods, such as multidimensional scaling, while providing a firm theoretical ground. Many modern machine learning algorithms, however, are based on kernels. This paper presents persistence indicator functions (PIFs), which summarize persistence diagrams, i.e., feature descriptors in topological data analysis. PIFs can be calculated and compared in linear time and have many beneficial properties, such as the availability of a kernel-based similarity measure. We demonstrate their usage in common data analysis scenarios, such as confidence set estimation and classification of complex structured data.
Recognizing that persistence diagrams can also be analyzed at multiple scales in order to facilitate hierarchical comparisons, some approaches provide approximations of persistence diagrams based on, e.g., a smoothing parameter. Among these, the stable kernel of @cite_21 is particularly suited for topological machine learning. Another approach by @cite_9 transforms a persistence diagram into a finite-dimensional vector by means of a probability distribution. Both methods require choosing a set of parameters (for kernel computations), while PIFs are fully parameter-free. Moreover, PIFs also permit other applications, such as mean calculations and statistical hypothesis testing, which pure kernel methods cannot provide.
{ "cite_N": [ "@cite_9", "@cite_21" ], "mid": [ "2964237352", "1960384938" ], "abstract": [ "Many data sets can be viewed as a noisy sampling of an underlying space, and tools from topological data analysis can characterize this structure for the purpose of knowledge discovery. One such tool is persistent homology, which provides a multiscale description of the homological features within a data set. A useful representation of this homological information is a persistence diagram (PD). Efforts have been made to map PDs into spaces with additional structure valuable to machine learning tasks. We convert a PD to a finite-dimensional vector representation which we call a persistence image (PI), and prove the stability of this transformation with respect to small perturbations in the inputs. The discriminatory power of PIs is compared against existing methods, showing significant performance gains. We explore the use of PIs with vector-based machine learning tools, such as linear sparse support vector machines, which identify features containing discriminating topological information. Finally, high accuracy inference of parameter values from the dynamic output of a discrete dynamical system (the linked twist map) and a partial differential equation (the anisotropic Kuramoto-Sivashinsky equation) provide a novel application of the discriminatory power of PIs.", "Topological data analysis offers a rich source of valuable information to study vision problems. Yet, so far we lack a theoretically sound connection to popular kernel-based learning techniques, such as kernel SVMs or kernel PCA. In this work, we establish such a connection by designing a multi-scale kernel for persistence diagrams, a stable summary representation of topological features in data. We show that this kernel is positive definite and prove its stability with respect to the 1-Wasserstein distance. Experiments on two benchmark datasets for 3D shape classification retrieval and texture recognition show considerable performance gains of the proposed method compared to an alternative approach that is based on the recently introduced persistence landscapes." ] }
1907.13496
2965224253
Techniques from computational topology, in particular persistent homology, are becoming increasingly relevant for data analysis. Their stable metrics permit the use of many distance-based data analysis methods, such as multidimensional scaling, while providing a firm theoretical ground. Many modern machine learning algorithms, however, are based on kernels. This paper presents persistence indicator functions (PIFs), which summarize persistence diagrams, i.e., feature descriptors in topological data analysis. PIFs can be calculated and compared in linear time and have many beneficial properties, such as the availability of a kernel-based similarity measure. We demonstrate their usage in common data analysis scenarios, such as confidence set estimation and classification of complex structured data.
Recently, Bubenik @cite_16 introduced persistence landscapes, a functional summary of persistence diagrams. Within his framework, PIFs can be considered to represent a summary (or projection) of the persistence landscape. Our definition of PIFs is more straightforward and easier to implement, however. Since PIFs share several properties of persistence landscapes---most importantly the existence of simple function-space distance measures---this paper uses similar experimental setups as Bubenik @cite_16 and @cite_5 .
{ "cite_N": [ "@cite_5", "@cite_16" ], "mid": [ "2963265143", "2149185044" ], "abstract": [ "Persistent homology probes topological properties from point clouds and functions. By looking at multiple scales simultaneously, one can record the births and deaths of topological features as the scale varies. In this paper we use a statistical technique, the empirical bootstrap, to separate topological signal from topological noise. In particular, we derive condence sets for persistence diagrams and condence bands for persistence landscapes.", "We define a new topological summary for data that we call the persistence landscape. Since this summary lies in a vector space, it is easy to combine with tools from statistics and machine learning, in contrast to the standard topological summaries. Viewed as a random variable with values in a Banach space, this summary obeys a strong law of large numbers and a central limit theorem. We show how a number of standard statistical tests can be used for statistical inference using this summary. We also prove that this summary is stable and that it can be used to provide lower bounds for the bottleneck and Wasserstein distances." ] }
1907.13286
2964427276
Recommender systems are known to suffer from the popularity bias problem: popular (i.e. frequently rated) items get a lot of exposure while less popular ones are under-represented in the recommendations. Research in this area has mainly focused on finding ways to tackle this issue by increasing the number of recommended long-tail items or otherwise the overall catalog coverage. In this paper, however, we look at this problem from the users' perspective: we want to see how popularity bias causes the recommendations to deviate from what the user expects to get from the recommender system. We define three different groups of users according to their interest in popular items (Niche, Diverse and Blockbuster-focused) and show the impact of popularity bias on the users in each group. Our experimental results on a movie dataset show that in many recommendation algorithms the recommendations the users get are extremely concentrated on popular items even if a user is interested in long-tail and non-popular items, showing an extreme bias disparity.
Finally, @cite_12 compared different recommendation algorithms in terms of accuracy and popularity bias, and observed that some algorithms concentrate on popular items more than others. In our work, we are mainly interested in examining popularity bias from the perspective of users' expectations.
{ "cite_N": [ "@cite_12" ], "mid": [ "1224564842" ], "abstract": [ "Most real-world recommender systems are deployed in a commercial context or designed to represent a value-adding service, e.g., on shopping or Social Web platforms, and typical success indicators for such systems include conversion rates, customer loyalty or sales numbers. In academic research, in contrast, the evaluation and comparison of different recommendation algorithms is mostly based on offline experimental designs and accuracy or rank measures which are used as proxies to assess an algorithm's recommendation quality. In this paper, we show that popular recommendation techniques--despite often being similar when compared with the help of accuracy measures--can be quite different with respect to which items they recommend. We report the results of an in-depth analysis in which we compare several recommendations strategies from different perspectives, including accuracy, catalog coverage and their bias to recommend popular items. Our analyses reveal that some recent techniques that perform well with respect to accuracy measures focus their recommendations on a tiny fraction of the item spectrum or recommend mostly top sellers. We analyze the reasons for some of these biases in terms of algorithmic design and parameterization and show how the characteristics of the recommendations can be altered by hyperparameter tuning. Finally, we propose two novel algorithmic schemes to counter these popularity biases." ] }
1907.13351
2964404395
Synthesizing geometrical shapes from human brain activities is an interesting and meaningful but very challenging topic. Recently, the advancements of deep generative models like Generative Adversarial Networks (GANs) have supported the object generation from neurological signals. However, the Electroencephalograph (EEG)-based shape generation still suffers from the low realism problem. In particular, the generated geometrical shapes lack clear edges and fail to contain necessary details. In light of this, we propose a novel multi-task generative adversarial network to convert the individual's EEG signals evoked by geometrical shapes to the original geometry. First, we adopt a Convolutional Neural Network (CNN) to learn a highly informative latent representation for the raw EEG signals, which is vital for the subsequent shape reconstruction. Next, we build the discriminator based on multi-task learning to distinguish and classify fake samples simultaneously, where the mutual promotion between different tasks improves the quality of the recovered shapes. Then, we propose a semantic alignment constraint in order to force the synthesized samples to approach the real ones at the pixel level, thus producing more compelling shapes. The proposed approach is evaluated over a local dataset and the results show that our model outperforms the competitive state-of-the-art baselines.
Recent research in neuroscience and neuroimaging @cite_11 has indicated that human perception of visual stimuli can be decoded through neuroimaging techniques. Specifically, a few works gave evidence that visual perception can be decoded from brain signals by using functional Magnetic Resonance Imaging (fMRI) and EEG. Some works use fMRI signals to reconstruct the image seen by the individual and achieve acceptable performance @cite_2 @cite_10 . These studies show the potential of fMRI-based image reconstruction in the brain signal decoding area; however, fMRI faces a number of crucial issues such as expensive acquisition equipment and low portability. Apart from fMRI-based methods, there are a few EEG-based methods for image reconstruction, as EEG signals are less expensive to acquire @cite_3 @cite_4 . As a typical investigation, Brain2image @cite_3 encoded the raw EEG signals into a latent space which contains the distinctive information, and then sent them to a Conditional Generative Adversarial Network (CGAN) for image reconstruction. @cite_4 applied a very similar algorithmic framework.
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_2", "@cite_10", "@cite_11" ], "mid": [ "2776446922", "2766524960", "2126810579", "2112180451", "2041853331" ], "abstract": [ "Recent advancements in generative adversarial networks (GANs), using deep convolutional models, have supported the development of image generation techniques able to reach satisfactory levels of realism. Further improvements have been proposed to condition GANs to generate images matching a specific object category or a short text description. In this work, we build on the latter class of approaches and investigate the possibility of driving and conditioning the image generation process by means of brain signals recorded, through an electroencephalograph (EEG), while users look at images from a set of 40 ImageNet object categories with the objective of generating the seen images. To accomplish this task, we first demonstrate that brain activity EEG signals encode visually-related information that allows us to accurately discriminate between visual object categories and, accordingly, we extract a more compact class-dependent representation of EEG data using recurrent neural networks. Afterwards, we use the learned EEG manifold to condition image generation employing GANs, which, during inference, will read EEG signals and convert them into images. We tested our generative approach using EEG signals recorded from six subjects while looking at images of the aforementioned 40 visual classes. The results show that for classes represented by well-defined visual patterns (e.g., pandas, airplane, etc.), the generated images are realistic and highly resemble those evoking the EEG signals used for conditioning GANs, resulting in an actual reading-the-mind process.", "Reading the human mind has been a hot topic in the last decades, and recent research in neuroscience has found evidence on the possibility of decoding, from neuroimaging data, how the human brain works. At the same time, the recent rediscovery of deep learning combined to the large interest of scientific community on generative methods has enabled the generation of realistic images by learning a data distribution from noise. The quality of generated images increases when the input data conveys information on visual content of images. Leveraging on these recent trends, in this paper we present an approach for generating images using visually-evoked brain signals recorded through an electroencephalograph (EEG). More specifically, we recorded EEG data from several subjects while observing images on a screen and tried to regenerate the seen images. To achieve this goal, we developed a deep-learning framework consisting of an LSTM stacked with a generative method, which learns a more compact and noise-free representation of EEG data and employs it to generate the visual stimuli evoking specific brain responses. OurBrain2Image approach was trained and tested using EEG data from six subjects while they were looking at images from 40 ImageNet classes. As generative models, we compared variational autoencoders (VAE) and generative adversarial networks (GAN). The results show that, indeed, our approach is able to generate an image drawn from the same distribution of the shown images. Furthermore, GAN, despite generating less realistic images, show better performance than VAE, especially as concern sharpness. 
The obtained performance provides useful hints on the fact that EEG contains patterns related to visual content and that such patterns can be used to effectively generate images that are semantically coherent to the evoking visual stimuli.", "Summary Quantitative modeling of human brain activity can provide crucial insights about cortical representations [1, 2] and can form the basis for brain decoding devices [3–5]. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns and have reconstructed these patterns from brain activity [6–8]. However, blood oxygen level-dependent (BOLD) signals measured via fMRI are very slow [9], so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy [10, 11] encoding model that largely overcomes this limitation. The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies. To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology.", "Summary Recent studies have used fMRI signals from early visual areas to reconstruct simple geometric patterns. Here, we demonstrate a new Bayesian decoder that uses fMRI signals from early and anterior visual areas to reconstruct complex natural images. Our decoder combines three elements: a structural encoding model that characterizes responses in early visual areas, a semantic encoding model that characterizes responses in anterior visual areas, and prior information about the structure and semantic content of natural images. By combining all these elements, the decoder produces reconstructions that accurately reflect both the spatial structure and semantic category of the objects contained in the observed natural image. Our results show that prior information has a substantial effect on the quality of natural image reconstructions. We also demonstrate that much of the variance in the responses of anterior visual areas to complex natural images is explained by the semantic category of the image alone.", "Recent advances in human neuroimaging have shown that it is possible to accurately decode a person's conscious experience based only on non-invasive measurements of their brain activity. Such 'brain reading' has mostly been studied in the domain of visual perception, where it helps reveal the way in which individual experiences are encoded in the human brain. The same approach can also be extended to other types of mental state, such as covert attitudes and lie detection. Such applications raise important ethical issues concerning the privacy of personal thought." ] }
1907.13351
2964404395
Synthesizing geometrical shapes from human brain activities is an interesting and meaningful but very challenging topic. Recently, the advancements of deep generative models like Generative Adversarial Networks (GANs) have supported the object generation from neurological signals. However, the Electroencephalograph (EEG)-based shape generation still suffers from the low realism problem. In particular, the generated geometrical shapes lack clear edges and fail to contain necessary details. In light of this, we propose a novel multi-task generative adversarial network to convert the individual's EEG signals evoked by geometrical shapes to the original geometry. First, we adopt a Convolutional Neural Network (CNN) to learn a highly informative latent representation for the raw EEG signals, which is vital for the subsequent shape reconstruction. Next, we build the discriminator based on multi-task learning to distinguish and classify fake samples simultaneously, where the mutual promotion between different tasks improves the quality of the recovered shapes. Then, we propose a semantic alignment constraint in order to force the synthesized samples to approach the real ones at the pixel level, thus producing more compelling shapes. The proposed approach is evaluated over a local dataset and the results show that our model outperforms the competitive state-of-the-art baselines.
Most visual object reconstruction methods are based on Generative Adversarial Networks (GANs) and their variations. GANs @cite_5 , as typical deep learning frameworks, have been widely used in image generation. The standard GAN is composed of a generator network, which generates images from randomly sampled noise, and a discriminator network, which tries to distinguish the generated images from real ones. The original GANs suffer from an uncontrollable generation process. In order to mitigate this, the conditional GAN (CGAN) was proposed @cite_1 , which involves conditional information (e.g., labels) in order to control the generation process. The Auxiliary Classifier GAN (ACGAN) @cite_6 improves the performance of GANs for image synthesis. ACGAN demonstrated that adding more structure to the GAN latent space along with a specialized cost function results in higher quality samples. A task-specific branch in the discriminator is employed to enhance its discriminability.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_6" ], "mid": [ "", "2125389028", "2548275288" ], "abstract": [ "", "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.", "Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7 of the classes have samples exhibiting diversity comparable to real ImageNet data." ] }
1907.13351
2964404395
Synthesizing geometrical shapes from human brain activities is an interesting and meaningful but very challenging topic. Recently, the advancements of deep generative models like Generative Adversarial Networks (GANs) have supported the object generation from neurological signals. However, the Electroencephalograph (EEG)-based shape generation still suffers from the low realism problem. In particular, the generated geometrical shapes lack clear edges and fail to contain necessary details. In light of this, we propose a novel multi-task generative adversarial network to convert the individual's EEG signals evoked by geometrical shapes to the original geometry. First, we adopt a Convolutional Neural Network (CNN) to learn a highly informative latent representation for the raw EEG signals, which is vital for the subsequent shape reconstruction. Next, we build the discriminator based on multi-task learning to distinguish and classify fake samples simultaneously, where the mutual promotion between different tasks improves the quality of the recovered shapes. Then, we propose a semantic alignment constraint in order to force the synthesized samples to approach the real ones at the pixel level, thus producing more compelling shapes. The proposed approach is evaluated over a local dataset and the results show that our model outperforms the competitive state-of-the-art baselines.
Most brain signal based image reconstruction work is based on fMRI. Due to the drawbacks of fMRI (e.g., low temporal resolution, high cost, and low portability), we focus on EEG-based geometric shape reconstruction. Compared to typical EEG-based work like brain2image @cite_3 , our approach has several technical advantages: 1) we concentrate on the influence of geometric attributes on the EEG signals, while @cite_3 focuses on images with a large number of attributes; 2) we adopt a CNN instead of an RNN to learn the latent EEG features, which costs less training time at a similar accuracy; 3) we add an auxiliary task-specific classifier to improve the discriminability of the discriminator; 4) we propose a semantic alignment method to generate more realistic images.
{ "cite_N": [ "@cite_3" ], "mid": [ "2766524960" ], "abstract": [ "Reading the human mind has been a hot topic in the last decades, and recent research in neuroscience has found evidence on the possibility of decoding, from neuroimaging data, how the human brain works. At the same time, the recent rediscovery of deep learning combined to the large interest of scientific community on generative methods has enabled the generation of realistic images by learning a data distribution from noise. The quality of generated images increases when the input data conveys information on visual content of images. Leveraging on these recent trends, in this paper we present an approach for generating images using visually-evoked brain signals recorded through an electroencephalograph (EEG). More specifically, we recorded EEG data from several subjects while observing images on a screen and tried to regenerate the seen images. To achieve this goal, we developed a deep-learning framework consisting of an LSTM stacked with a generative method, which learns a more compact and noise-free representation of EEG data and employs it to generate the visual stimuli evoking specific brain responses. OurBrain2Image approach was trained and tested using EEG data from six subjects while they were looking at images from 40 ImageNet classes. As generative models, we compared variational autoencoders (VAE) and generative adversarial networks (GAN). The results show that, indeed, our approach is able to generate an image drawn from the same distribution of the shown images. Furthermore, GAN, despite generating less realistic images, show better performance than VAE, especially as concern sharpness. The obtained performance provides useful hints on the fact that EEG contains patterns related to visual content and that such patterns can be used to effectively generate images that are semantically coherent to the evoking visual stimuli." ] }
1907.12924
2966152490
Service robots are expected to operate effectively in human-centric environments for long periods of time. In such realistic scenarios, fine-grained object categorization is as important as basic-level object categorization. We tackle this problem by proposing an open-ended object recognition approach which concurrently learns both the object categories and the local features for encoding objects. In this work, each object is represented using a set of general latent visual topics and category-specific dictionaries. The general topics encode the common patterns of all categories, while the category-specific dictionary describes the content of each category in details. The proposed approach discovers both sets of general and specific representations in an unsupervised fashion and updates them incrementally using new object views. Experimental results show that our approach yields significant improvements over the previous state-of-the-art approaches concerning scalability and object classification performance. Moreover, our approach demonstrates the capability of learning from very few training examples in a real-world setting. Regarding computation time, the best result was obtained with a Bag-of-Words method followed by a variant of the Latent Dirichlet Allocation approach.
In the last decade, various research groups have made substantial progress towards the development of learning approaches which support online and incremental object category learning @cite_2 @cite_17 . In recent studies on object recognition, much attention has been given to deep Convolutional Neural Networks (CNNs). It is now clear that if, in a scenario, we have a predefined set of categories and a large number of training examples per category, CNN-based approaches yield good results; notable recent works include @cite_0 @cite_15 . In open-ended scenarios, these assumptions are not satisfied, and the robot needs to learn new concepts on-site using very few training examples. While deep learning is a very powerful and useful tool, there are several limitations to applying CNNs in open-ended domains. In general, CNN approaches are incremental by nature but not open-ended, since the inclusion of new categories enforces a restructuring of the topology of the network. Furthermore, training a CNN-based approach requires long training times, and training with a few examples per category poses a challenge for these methods. In contrast, @cite_13 @cite_16 allow for concurrent learning and recognition. Our approach falls into this category.
{ "cite_N": [ "@cite_0", "@cite_2", "@cite_15", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2963037989", "1884730573", "2963078860", "2008213039", "2556865614", "2619545562" ], "abstract": [ "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.", "This paper describes a 3D object perception and perceptual learning system developed for a complex artificial cognitive agent working in a restaurant scenario. This system, developed within the scope of the European project RACE, integrates detection, tracking, learning and recognition of tabletop objects. Interaction capabilities were also developed to enable a human user to take the role of instructor and teach new object categories. Thus, the system learns in an incremental and open-ended way from user-mediated experiences. Based on the analysis of memory requirements for storing both semantic and perceptual data, a dual memory approach, comprising a semantic memory and a perceptual memory, was adopted. The perceptual memory is the central data structure of the described perception and learning system. The goal of this paper is twofold: on one hand, we provide a thorough description of the developed system, starting with motivations, cognitive considerations and architecture design, then providing details on the developed modules, and finally presenting a detailed evaluation of the system; on the other hand, we emphasize the crucial importance of the Point Cloud Library (PCL) for developing such system.11This paper is a revised and extended version of Oliveira et?al. (2014). We describe an object perception and perceptual learning system.The system is able to detect, track and recognize tabletop objects.The system learns novel object categories in an open-ended fashion.The Point Cloud Library is used in nearly all modules of the system.The system was developed and used in the European project RACE.", "Low-shot visual learning–the ability to recognize novel object categories from very few examples–is a hallmark of human visual intelligence. Existing machine learning approaches fail to generalize in the same way. To make progress on this foundational problem, we present a low-shot learning benchmark on complex images that mimics challenges faced by recognition systems in the wild. We then propose (1) representation regularization techniques, and (2) techniques to hallucinate additional training examples for data-starved classes. 
Together, our methods improve the effectiveness of convolutional networks in low-shot learning, improving the one-shot accuracy on novel classes by 2.3× on the challenging ImageNet dataset.", "3D object detection and recognition is increasingly used for manipulation and navigation tasks in service robots. It involves segmenting the objects present in a scene, estimating a feature descriptor for the object view and, finally, recognizing the object view by comparing it to the known object categories. This paper presents an efficient approach capable of learning and recognizing object categories in an interactive and open-ended manner. In this paper, “open-ended” implies that the set of object categories to be learned is not known in advance. The training instances are extracted from on-line experiences of a robot, and thus become gradually available over time, rather than at the beginning of the learning process. This paper focuses on two state-of-the-art questions: (1) How to automatically detect, conceptualize and recognize objects in 3D scenes in an open-ended manner? (2) How to acquire and use high-level knowledge obtained from the interaction with human users, namely when they provide category labels, in order to improve the system performance? This approach starts with a pre-processing step to remove irrelevant data and prepare a suitable point cloud for the subsequent processing. Clustering is then applied to detect object candidates, and object views are described based on a 3D shape descriptor called spin-image. Finally, a nearest-neighbor classification rule is used to predict the categories of the detected objects. A leave-one-out cross validation algorithm is used to compute precision and recall, in a classical off-line evaluation setting, for different system parameters. Also, an on-line evaluation protocol is used to assess the performance of the system in an open-ended setting. Results show that the proposed system is able to interact with human users, learning new object categories continuously over time.", "Most robots lack the ability to learn new objects from past experiences. To migrate a robot to a new environment one must often completely re-generate the knowledge- base that it is running with. Since in open-ended domains the set of categories to be learned is not predefined, it is not feasible to assume that one can pre-program all object categories required by robots. Therefore, autonomous robots must have the ability to continuously execute learning and recognition in a concurrent and interleaved fashion. This paper proposes an open-ended 3D object recognition system which concurrently learns both the object categories and the statistical features for encoding objects. In particular, we propose an extension of Latent Dirichlet Allocation to learn structural semantic features (i.e. topics) from low-level feature co-occurrences for each category independently. Moreover, topics in each category are discovered in an unsupervised fashion and are updated incrementally using new object views. The approach contains similarities with the organization of the visual cortex and builds a hierarchy of increasingly sophisticated representations. Results show the fulfilling performance of this approach on different types of objects. 
Moreover, this system demonstrates the capability of learning from few training examples and competes with state-of-the-art systems.", "Autonomous robots that are to assist humans in their daily lives must recognize and understand the meaning of objects in their environment. However, the open nature of the world means robots must be able to learn and extend their knowledge about previously unknown objects on-line. In this work we investigate the problem of unknown object hypotheses generation, and employ a semantic web-mining framework along with deep-learning-based object detectors. This allows us to make use of both visual and semantic features in combined hypotheses generation. Experiments on data from mobile robots in real world application deployments show that this combination improves performance over the use of either method in isolation." ] }
1907.12924
2966152490
Service robots are expected to operate effectively in human-centric environments for long periods of time. In such realistic scenarios, fine-grained object categorization is as important as basic-level object categorization. We tackle this problem by proposing an open-ended object recognition approach which concurrently learns both the object categories and the local features for encoding objects. In this work, each object is represented using a set of general latent visual topics and category-specific dictionaries. The general topics encode the common patterns of all categories, while the category-specific dictionary describes the content of each category in details. The proposed approach discovers both sets of general and specific representations in an unsupervised fashion and updates them incrementally using new object views. Experimental results show that our approach yields significant improvements over the previous state-of-the-art approaches concerning scalability and object classification performance. Moreover, our approach demonstrates the capability of learning from very few training examples in a real-world setting. Regarding computation time, the best result was obtained with a Bag-of-Words method followed by a variant of the Latent Dirichlet Allocation approach.
Several studies have assessed the added value of structural information. @cite_14 extended an online version of Latent Dirichlet Allocation (LDA) and proposed an incremental Gibbs sampler for LDA (here referred to as I-LDA). In online-LDA and I-LDA, the number of categories is fixed, whereas in our approach the number of categories grows over time. @cite_13 proposed an open-ended object category learning approach that learns only category-specific topics, while our approach not only learns a set of general topics for basic-level categorization, but also learns a category-specific dictionary for fine-grained categorization.
{ "cite_N": [ "@cite_14", "@cite_13" ], "mid": [ "159230833", "2556865614" ], "abstract": [ "Inference algorithms for topic models are typically designed to be run over an entire collection of documents after they have been observed. However, in many applications of these models, the collection grows over time, making it infeasible to run batch algorithms repeatedly. This problem can be addressed by using online algorithms, which update estimates of the topics as each document is observed. We introduce two related RaoBlackwellized online inference algorithms for the latent Dirichlet allocation (LDA) model – incremental Gibbs samplers and particle filters – and compare their runtime and performance to that of existing algorithms.", "Most robots lack the ability to learn new objects from past experiences. To migrate a robot to a new environment one must often completely re-generate the knowledge- base that it is running with. Since in open-ended domains the set of categories to be learned is not predefined, it is not feasible to assume that one can pre-program all object categories required by robots. Therefore, autonomous robots must have the ability to continuously execute learning and recognition in a concurrent and interleaved fashion. This paper proposes an open-ended 3D object recognition system which concurrently learns both the object categories and the statistical features for encoding objects. In particular, we propose an extension of Latent Dirichlet Allocation to learn structural semantic features (i.e. topics) from low-level feature co-occurrences for each category independently. Moreover, topics in each category are discovered in an unsupervised fashion and are updated incrementally using new object views. The approach contains similarities with the organization of the visual cortex and builds a hierarchy of increasingly sophisticated representations. Results show the fulfilling performance of this approach on different types of objects. Moreover, this system demonstrates the capability of learning from few training examples and competes with state-of-the-art systems." ] }
1907.12924
2966152490
Service robots are expected to operate effectively in human-centric environments for long periods of time. In such realistic scenarios, fine-grained object categorization is as important as basic-level object categorization. We tackle this problem by proposing an open-ended object recognition approach which concurrently learns both the object categories and the local features for encoding objects. In this work, each object is represented using a set of general latent visual topics and category-specific dictionaries. The general topics encode the common patterns of all categories, while the category-specific dictionary describes the content of each category in details. The proposed approach discovers both sets of general and specific representations in an unsupervised fashion and updates them incrementally using new object views. Experimental results show that our approach yields significant improvements over the previous state-of-the-art approaches concerning scalability and object classification performance. Moreover, our approach demonstrates the capability of learning from very few training examples in a real-world setting. Regarding computation time, the best result was obtained with a Bag-of-Words method followed by a variant of the Latent Dirichlet Allocation approach.
Assume that at time @math (i.e., the first teaching action) a dictionary is learned for category @math , denoted as @math , which represents the distribution of 3D shape features observed up to time @math . Later, at time @math , a new training instance, represented as a set of spin-images, is taught by a teacher to category @math (i.e., supervised learning). The teaching instruction triggers the robot to retrieve the current dictionary of the category as well as the representation of the new object view, and to update the relevant dictionary using an incremental K-means algorithm @cite_18 (i.e., unsupervised learning). Such a category-specific dictionary highlights the differences between objects from different categories and, as a consequence, improves object recognition performance.
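The following is a minimal, hypothetical sketch (not the cited implementation) of how a category-specific dictionary could be updated with an incremental K-means step when a new object view arrives; the spin-image features are stood in for by random vectors, and all names, shapes, and parameters are illustrative assumptions.

```python
import numpy as np

def update_dictionary(dictionary, counts, new_features):
    """Incrementally update a category-specific dictionary of K visual words.

    dictionary:   (K, D) array of current word centroids.
    counts:       (K,) number of features assigned to each word so far.
    new_features: (N, D) local features (e.g. spin-images) of the new view.
    Each new feature is assigned to its nearest word, whose centroid is moved
    towards it by a running-mean update (incremental K-means)."""
    for f in new_features:
        k = np.argmin(np.linalg.norm(dictionary - f, axis=1))  # nearest word
        counts[k] += 1
        dictionary[k] += (f - dictionary[k]) / counts[k]        # running mean
    return dictionary, counts

# Illustrative usage with random data standing in for spin-image descriptors.
rng = np.random.default_rng(0)
K, D = 50, 153                       # 153-dimensional descriptors: an assumption
dictionary = rng.normal(size=(K, D))
counts = np.ones(K)
new_view = rng.normal(size=(200, D))
dictionary, counts = update_dictionary(dictionary, counts, new_view)
```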
{ "cite_N": [ "@cite_18" ], "mid": [ "2128326150" ], "abstract": [ "Study of this paper describes the incremental behaviours of partitioning based K-means clustering. This incremental clustering is designed using the cluster’s metadata captured from the K-Means results. Experimental studies shows that this clustering outperformed when the number of clusters increased, number of objects increased, length of the cluster radius decreased, while the incremental clustering outperformed when the number of new data objects are inserted into the existing database. In incremental approach, the K-means clustering algorithm is applied to a dynamic database where the data may be frequently updated. And this approach measure the new cluster centers by directly computes the new data from the means of the existing clusters instead of rerunning the K-means algorithm. Thus it describes, at what percent of delta change in the original database up to which incremental K-means clustering behaves better than actual K-means. It can be also used for large multidimensional dataset." ] }
1907.13025
2964650603
Due to the availability of large-scale skeleton datasets, 3D human action recognition has recently called the attention of computer vision community. Many works have focused on encoding skeleton data as skeleton image representations based on spatial structure of the skeleton joints, in which the temporal dynamics of the sequence is encoded as variations in columns and the spatial structure of each frame is represented as rows of a matrix. To further improve such representations, we introduce a novel skeleton image representation to be used as input of Convolutional Neural Networks (CNNs), named SkeleMotion. The proposed approach encodes the temporal dynamics by explicitly computing the magnitude and orientation values of the skeleton joints. Different temporal scales are employed to compute motion values to aggregate more temporal dynamics to the representation making it able to capture longrange joint interactions involved in actions as well as filtering noisy motion values. Experimental results demonstrate the effectiveness of the proposed representation on 3D action recognition outperforming the state-of-the-art on NTU RGB+D 120 dataset.
@cite_33 @cite_49 present a skeleton representation that encodes both the spatial configuration and the dynamics of joint trajectories into three texture images through color encoding, named Joint Trajectory Maps (JTMs). The authors apply rotations to the skeleton data to mimic multiple views and to enlarge the data, compensating for the fact that CNNs are usually not view invariant. JTMs are generated by projecting the trajectories onto the three orthogonal planes. To encode motion direction in the JTM, they use a hue colormap to "color" the joint trajectories over the action period. They also encode the motion magnitude of the joints into saturation and brightness, arguing that changes in motion produce texture in the JTMs. Finally, the authors individually fine-tune three AlexNet @cite_26 CNNs (one for each JTM) to perform classification.
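As an illustration only (not the authors' code), the sketch below projects joint trajectories onto one plane and colors each per-frame displacement with a hue derived from its direction and a saturation/value derived from its magnitude, which is the essence of the JTM encoding described above; array shapes and normalization choices are assumptions.

```python
import numpy as np
import colorsys

def joint_trajectory_map(joints, size=256):
    """joints: (T, J, 3) array of 3D joint positions over T frames.
    Returns a (size, size, 3) RGB image of XY-plane trajectories where
    hue <- motion direction and saturation/value <- motion magnitude."""
    img = np.zeros((size, size, 3))
    xy = joints[..., :2]
    mn, mx = xy.min(axis=(0, 1)), xy.max(axis=(0, 1))
    pix = ((xy - mn) / (mx - mn + 1e-9) * (size - 1)).astype(int)
    disp = np.diff(xy, axis=0)                       # per-frame displacement
    mag = np.linalg.norm(disp, axis=-1)
    mag = mag / (mag.max() + 1e-9)
    ang = (np.arctan2(disp[..., 1], disp[..., 0]) + np.pi) / (2 * np.pi)
    for t in range(disp.shape[0]):
        for j in range(disp.shape[1]):
            r, g, b = colorsys.hsv_to_rgb(ang[t, j], mag[t, j],
                                          0.5 + 0.5 * mag[t, j])
            x, y = pix[t, j]
            img[y, x] = (r, g, b)                    # draw the colored point
    return img

# Example with a synthetic 30-frame, 20-joint skeleton sequence.
demo = joint_trajectory_map(np.random.rand(30, 20, 3))
```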
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_49" ], "mid": [ "2163605009", "2526041356", "" ], "abstract": [ "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "Recently, Convolutional Neural Networks (ConvNets) have shown promising performances in many computer vision tasks, especially image-based recognition. How to effectively use ConvNets for video-based recognition is still an open problem. In this paper, we propose a compact, effective yet simple method to encode spatio-temporal information carried in 3D skeleton sequences into multiple 2D images, referred to as Joint Trajectory Maps (JTM), and ConvNets are adopted to exploit the discriminative features for real-time human action recognition. The proposed method has been evaluated on three public benchmarks, i.e., MSRC-12 Kinect gesture dataset (MSRC-12), G3D dataset and UTD multimodal human action dataset (UTD-MHAD) and achieved the state-of-the-art results.", "" ] }
1907.13025
2964650603
Due to the availability of large-scale skeleton datasets, 3D human action recognition has recently called the attention of computer vision community. Many works have focused on encoding skeleton data as skeleton image representations based on spatial structure of the skeleton joints, in which the temporal dynamics of the sequence is encoded as variations in columns and the spatial structure of each frame is represented as rows of a matrix. To further improve such representations, we introduce a novel skeleton image representation to be used as input of Convolutional Neural Networks (CNNs), named SkeleMotion. The proposed approach encodes the temporal dynamics by explicitly computing the magnitude and orientation values of the skeleton joints. Different temporal scales are employed to compute motion values to aggregate more temporal dynamics to the representation making it able to capture longrange joint interactions involved in actions as well as filtering noisy motion values. Experimental results demonstrate the effectiveness of the proposed representation on 3D action recognition outperforming the state-of-the-art on NTU RGB+D 120 dataset.
To overcome the sparsity of the data generated from skeleton sequence videos, @cite_23 represent the temporal dynamics of the skeleton sequence by generating four skeleton representation images. Their approach is close to the method of @cite_47 ; however, they compute the positions of the joints relative to four reference joints by arranging them as a chain and concatenating the joints of each body part to the reference joints, resulting in four different skeleton representations. According to the authors, such a structure incorporates different spatial relationships between the joints. Finally, the skeleton images are resized, and each channel of the four representations is used as input to a pre-trained VGG19 @cite_30 architecture for feature extraction.
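A minimal sketch, under assumed joint indices, of the general idea of encoding joint positions relative to a set of reference joints and stacking frames into image-like arrays; it illustrates the construction only and does not reproduce the chaining and concatenation scheme of the cited method.

```python
import numpy as np

def relative_position_images(joints, reference_joints=(0, 1, 8, 11)):
    """joints: (T, J, 3) skeleton sequence.
    For each reference joint r, build a (T, J, 3) array of positions relative
    to r; time runs along rows and joints along columns, as in skeleton images."""
    images = []
    for r in reference_joints:                 # assumed indices, e.g. hips/shoulders
        rel = joints - joints[:, r:r + 1, :]   # subtract the reference joint
        mn, mx = rel.min(), rel.max()          # min-max normalise into [0, 255]
        images.append(((rel - mn) / (mx - mn + 1e-9) * 255).astype(np.uint8))
    return images                               # four (T, J, 3) "skeleton images"

imgs = relative_position_images(np.random.rand(40, 25, 3))
```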
{ "cite_N": [ "@cite_30", "@cite_47", "@cite_23" ], "mid": [ "2962835968", "", "2604321021" ], "abstract": [ "Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "", "This paper presents a new method for 3D action recognition with skeleton sequences (i.e., 3D trajectories of human skeleton joints). The proposed method first transforms each skeleton sequence into three clips each consisting of several frames for spatial temporal feature learning using deep neural networks. Each clip is generated from one channel of the cylindrical coordinates of the skeleton sequence. Each frame of the generated clips represents the temporal information of the entire skeleton sequence, and incorporates one particular spatial relationship between the joints. The entire clips include multiple frames with different spatial relationships, which provide useful spatial structural information of the human skeleton. We propose to use deep convolutional neural networks to learn long-term temporal information of the skeleton sequence from the frames of the generated clips, and then use a Multi-Task Learning Network (MTLN) to jointly process all frames of the clips in parallel to incorporate spatial structural information for action recognition. Experimental results clearly show the effectiveness of the proposed new representation and feature learning method for 3D action recognition." ] }
1907.13025
2964650603
Due to the availability of large-scale skeleton datasets, 3D human action recognition has recently called the attention of computer vision community. Many works have focused on encoding skeleton data as skeleton image representations based on spatial structure of the skeleton joints, in which the temporal dynamics of the sequence is encoded as variations in columns and the spatial structure of each frame is represented as rows of a matrix. To further improve such representations, we introduce a novel skeleton image representation to be used as input of Convolutional Neural Networks (CNNs), named SkeleMotion. The proposed approach encodes the temporal dynamics by explicitly computing the magnitude and orientation values of the skeleton joints. Different temporal scales are employed to compute motion values to aggregate more temporal dynamics to the representation making it able to capture longrange joint interactions involved in actions as well as filtering noisy motion values. Experimental results demonstrate the effectiveness of the proposed representation on 3D action recognition outperforming the state-of-the-art on NTU RGB+D 120 dataset.
@cite_46 claim that chaining all joints in a fixed order lacks semantic meaning and leads to a loss of skeletal structural information. To address this, they proposed a representation named Tree Structure Skeleton Image (TSSI) that preserves spatial relations. Their method traverses a skeleton tree in depth-first order, based on the premise that the fewer edges between two joints, the more relevant the joint pair is. The generated representations are then quantized into an image and resized before being fed to a ResNet-50 @cite_39 CNN architecture.
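The sketch below, using a hypothetical joint hierarchy, shows how a depth-first traversal of a skeleton tree yields a joint order for a TSSI-like image, so that neighbouring columns always correspond to joints connected by a single skeleton edge. The tree, the joint names, and the revisit-on-return convention are assumptions, not the exact layout used by the cited work.

```python
import numpy as np

# Hypothetical skeleton tree: parent -> children (names are illustrative).
SKELETON_TREE = {
    "spine": ["neck", "left_hip", "right_hip"],
    "neck": ["head", "left_shoulder", "right_shoulder"],
    "left_shoulder": ["left_elbow"], "left_elbow": ["left_hand"],
    "right_shoulder": ["right_elbow"], "right_elbow": ["right_hand"],
    "left_hip": ["left_knee"], "left_knee": ["left_foot"],
    "right_hip": ["right_knee"], "right_knee": ["right_foot"],
}

def dfs_order(tree, root="spine"):
    """Depth-first traversal that revisits a node when returning from a child,
    so adjacent entries are always connected by one skeleton edge."""
    order = [root]
    for child in tree.get(root, []):
        order += dfs_order(tree, child)
        order.append(root)                     # come back through the parent
    return order

def tssi_image(joints_by_name, order):
    """Stack per-frame joint coordinates following the DFS order:
    rows = time, columns = traversal order, channels = (x, y, z)."""
    return np.stack([joints_by_name[name] for name in order], axis=1)

order = dfs_order(SKELETON_TREE)
frames = {name: np.random.rand(60, 3) for name in set(order)}   # 60 frames
image = tssi_image(frames, order)              # shape (60, len(order), 3)
```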
{ "cite_N": [ "@cite_46", "@cite_39" ], "mid": [ "2797382244", "2949650786" ], "abstract": [ "Action recognition with 3D skeleton sequences became popular due to its speed and robustness. The recently proposed convolutional neural networks (CNNs)-based methods show a good performance in learning spatio–temporal representations for skeleton sequences. Despite the good recognition accuracy achieved by previous CNN-based methods, there existed two problems that potentially limit the performance. First, previous skeleton representations were generated by chaining joints with a fixed order. The corresponding semantic meaning was unclear and the structural information among the joints was lost. Second, previous models did not have an ability to focus on informative joints. The attention mechanism was important for skeleton-based action recognition because different joints contributed unequally toward the correct recognition. To solve these two problems, we proposed a novel CNN-based method for skeleton-based action recognition. We first redesigned the skeleton representations with a depth-first tree traversal order, which enhanced the semantic meaning of skeleton images and better preserved the associated structural information. We then proposed the general two-branch attention architecture that automatically focused on spatio–temporal key stages and filtered out unreliable joint predictions. Based on the proposed general architecture, we designed a global long-sequence attention network with refined branch structures. Furthermore, in order to adjust the kernel’s spatio–temporal aspect ratios and better capture long-term dependencies, we proposed a sub-sequence attention network (SSAN) that took sub-image sequences as inputs. We showed that the two-branch attention architecture could be combined with the SSAN to further improve the performance. Our experiment results on the NTU RGB+D data set and the SBU kinetic interaction data set outperformed the state of the art. The model was further validated on noisy estimated poses from the subsets of the UCF101 data set and the kinetics data set.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation." ] }
1907.12933
2965255081
Artificial Neural networks (ANNs) are powerful computing systems employed for various applications due to their versatility to generalize and to respond to unexpected inputs patterns. However, implementations of ANNs for safety-critical systems might lead to failures, which are hardly predicted in the design phase since ANNs are highly parallel and their parameters are hardly interpretable. Here we develop and evaluate a novel symbolic software verification framework based on incremental bounded model checking (BMC) to check for adversarial cases and coverage methods in multi-layer perceptron (MLP). In particular, we further develop the efficient SMT-based Context-Bounded Model Checker for Graphical Processing Units (ESBMC-GPU) in order to ensure the reliability of certain safety properties in which safety-critical systems can fail and make incorrect decisions, thereby leading to unwanted material damage or even put lives in danger. This paper marks the first symbolic verification framework to reason over ANNs implemented in CUDA. Our experimental results show that our approach implemented in ESBMC-GPU can successfully verify safety properties and covering methods in ANNs and correctly generate 28 adversarial cases in MLPs.
Our ultimate goal is to formally ensure safety for applications based on Artificial Intelligence (AI), as described by @cite_4 . In particular, the central safety problem for ANNs concerns the potential impact of intelligent systems performing tasks in society and the safety guarantees needed to prevent damage.
{ "cite_N": [ "@cite_4" ], "mid": [ "2462906003" ], "abstract": [ "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (\"avoiding side effects\" and \"avoiding reward hacking\"), an objective function that is too expensive to evaluate frequently (\"scalable supervision\"), or undesirable behavior during the learning process (\"safe exploration\" and \"distributional shift\"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI." ] }
1907.12933
2965255081
Artificial Neural networks (ANNs) are powerful computing systems employed for various applications due to their versatility to generalize and to respond to unexpected inputs patterns. However, implementations of ANNs for safety-critical systems might lead to failures, which are hardly predicted in the design phase since ANNs are highly parallel and their parameters are hardly interpretable. Here we develop and evaluate a novel symbolic software verification framework based on incremental bounded model checking (BMC) to check for adversarial cases and coverage methods in multi-layer perceptron (MLP). In particular, we further develop the efficient SMT-based Context-Bounded Model Checker for Graphical Processing Units (ESBMC-GPU) in order to ensure the reliability of certain safety properties in which safety-critical systems can fail and make incorrect decisions, thereby leading to unwanted material damage or even put lives in danger. This paper marks the first symbolic verification framework to reason over ANNs implemented in CUDA. Our experimental results show that our approach implemented in ESBMC-GPU can successfully verify safety properties and covering methods in ANNs and correctly generate 28 adversarial cases in MLPs.
@cite_34 and @cite_32 have shown how fragile ANNs can be when small amounts of noise are present in their inputs. They described and evaluated testing and verification approaches based on covering methods and image proximity @cite_34 and on how adversarial cases are obtained @cite_32 . In particular, our study resembles that of @cite_32 and @cite_34 in how adversarial cases are obtained. Here, if any property is violated, a counterexample is provided; for safety properties, adversarial examples are generated from the counterexample using ESBMC-GPU. In contrast to @cite_32 , we do not focus on generating noise in specific regions, but in every image pixel. Our notion of image proximity is influenced by @cite_34 , but we use incremental BMC instead of concolic testing as our verification engine. Our symbolic verification method checks safety properties on non-deterministic images within a certain distance of a given image; both the image and the distance are specified by the user. @cite_2 also describe an approach to validate ANNs using symbolic execution by translating an NN into an imperative program. By contrast, we consider the actual implementation of the ANN in CUDA and apply incremental BMC using off-the-shelf SMT solvers.
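To make the checked property concrete, the following hedged sketch (plain Python/NumPy, not ESBMC-GPU or CUDA) searches by random sampling for a violation of the claim that every image whose pixels differ from a given image by at most a distance d keeps its classification; the per-pixel bound is one possible reading of "distance", and the classifier is a placeholder, since the actual approach explores this set symbolically rather than by sampling.

```python
import numpy as np

def find_adversarial_by_sampling(classify, image, distance, trials=10000, seed=0):
    """Look for a counterexample to the safety property
    'all images whose pixels differ from `image` by at most `distance`
     receive the same label'.  Sampling only illustrates the property that
    a model checker would explore symbolically."""
    rng = np.random.default_rng(seed)
    reference = classify(image)
    for _ in range(trials):
        noise = rng.uniform(-distance, distance, size=image.shape)
        candidate = np.clip(image + noise, 0.0, 1.0)    # perturb every pixel
        if classify(candidate) != reference:
            return candidate                             # adversarial case found
    return None                                          # no violation observed

# Placeholder "network": a random linear classifier over flattened pixels.
w = np.random.default_rng(1).normal(size=(10, 28 * 28))
classify = lambda img: int(np.argmax(w @ img.ravel()))
adv = find_adversarial_by_sampling(classify, np.random.rand(28, 28), distance=0.05)
```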
{ "cite_N": [ "@cite_34", "@cite_32", "@cite_2" ], "mid": [ "2793633339", "2543296129", "2884199749" ], "abstract": [ "Deep neural networks (DNNs) have a wide range of applications, and software employing them must be thoroughly tested, especially in safety-critical domains. However, traditional software test coverage metrics cannot be applied directly to DNNs. In this paper, inspired by the MC DC coverage criterion, we propose a family of four novel test criteria that are tailored to structural features of DNNs and their semantics. We validate the criteria by demonstrating that the generated test inputs guided via our proposed coverage criteria are able to capture undesired behaviours in a DNN. Test cases are generated using a symbolic approach and a gradient-based heuristic search. By comparing them with existing methods, we show that our criteria achieve a balance between their ability to find bugs (proxied using adversarial examples) and the computational cost of test case generation. Our experiments are conducted on state-of-the-art DNNs obtained using popular open source datasets, including MNIST, CIFAR-10 and ImageNet.", "Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on safety of image classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions that would result in the same class being assigned by a human, and define safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness.", "Deep Neural Networks (DNN) are increasingly used in a variety of applications, many of them with substantial safety and security concerns. This paper introduces DeepCheck, a new approach for validating DNNs based on core ideas from program analysis, specifically from symbolic execution. The idea is to translate a DNN into an imperative program, thereby enabling program analysis to assist with DNN validation. A basic translation however creates programs that are very complex to analyze. DeepCheck introduces novel techniques for lightweight symbolic analysis of DNNs and applies them in the context of image classification to address two challenging problems in DNN analysis: 1) identification of important pixels (for attribution and adversarial generation); and 2) creation of 1-pixel and 2-pixel attacks. 
Experimental results using the MNIST data-set show that DeepCheck's lightweight symbolic analysis provides a valuable tool for DNN validation." ] }
1907.12933
2965255081
Artificial Neural networks (ANNs) are powerful computing systems employed for various applications due to their versatility to generalize and to respond to unexpected inputs patterns. However, implementations of ANNs for safety-critical systems might lead to failures, which are hardly predicted in the design phase since ANNs are highly parallel and their parameters are hardly interpretable. Here we develop and evaluate a novel symbolic software verification framework based on incremental bounded model checking (BMC) to check for adversarial cases and coverage methods in multi-layer perceptron (MLP). In particular, we further develop the efficient SMT-based Context-Bounded Model Checker for Graphical Processing Units (ESBMC-GPU) in order to ensure the reliability of certain safety properties in which safety-critical systems can fail and make incorrect decisions, thereby leading to unwanted material damage or even put lives in danger. This paper marks the first symbolic verification framework to reason over ANNs implemented in CUDA. Our experimental results show that our approach implemented in ESBMC-GPU can successfully verify safety properties and covering methods in ANNs and correctly generate 28 adversarial cases in MLPs.
@cite_0 presented formal techniques to extract invariants from the decision logic of ANNs. These invariants represent pre- and post-conditions that hold when transformations of a certain type are applied to ANNs. The authors proposed two techniques. The first, called iterative relaxation of decision patterns, uses Reluplex as the decision procedure @cite_9 . The second, called decision-tree based invariant generation, resembles covering methods @cite_34 . Robustness and explainability are the core properties of that study, and applying them to ANNs has shown impressive experimental results. Explainability proved to be an important property for evaluating safety in ANNs; the core idea is to explain why an adversarial case happened by observing the activation pattern of the subset of neurons described by the given invariant.
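As a loose illustration of the kind of decision pattern such invariants are built from, the sketch below records which ReLU neurons of a small fully connected network are active (pre-activation greater than zero) for a given input; the layer sizes are arbitrary assumptions, and this is not the invariant-generation procedure of the cited work.

```python
import numpy as np

def activation_pattern(weights, biases, x):
    """Forward pass recording which ReLU neurons fired in each hidden layer.
    The resulting on/off masks are the raw material from which decision-pattern
    invariants can be mined."""
    pattern = []
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):   # hidden ReLU layers
        z = W @ a + b
        pattern.append(z > 0)                      # which units are active
        a = np.maximum(z, 0.0)
    logits = weights[-1] @ a + biases[-1]          # linear output layer
    return pattern, logits

rng = np.random.default_rng(0)
weights = [rng.normal(size=(64, 784)), rng.normal(size=(10, 64))]
biases = [rng.normal(size=64), rng.normal(size=10)]
masks, logits = activation_pattern(weights, biases, rng.random(784))
```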
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_34" ], "mid": [ "2943722115", "2594877703", "2793633339" ], "abstract": [ "We present techniques for automatically inferring invariant properties of feed-forward neural networks. Our insight is that feed forward networks should be able to learn a decision logic that is captured in the activation patterns of its neurons. We propose to extract such decision patterns that can be considered as invariants of the network with respect to a certain output behavior. We present techniques to extract input invariants as convex predicates on the input space, and layer invariants that represent features captured in the hidden layers. We apply the techniques on the networks for the MNIST and ACASXU applications. Our experiments highlight the use of invariants in a variety of applications, such as explainability, providing robustness guarantees, detecting adversaries, simplifying proofs and network distillation.", "Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.", "Deep neural networks (DNNs) have a wide range of applications, and software employing them must be thoroughly tested, especially in safety-critical domains. However, traditional software test coverage metrics cannot be applied directly to DNNs. In this paper, inspired by the MC DC coverage criterion, we propose a family of four novel test criteria that are tailored to structural features of DNNs and their semantics. We validate the criteria by demonstrating that the generated test inputs guided via our proposed coverage criteria are able to capture undesired behaviours in a DNN. Test cases are generated using a symbolic approach and a gradient-based heuristic search. By comparing them with existing methods, we show that our criteria achieve a balance between their ability to find bugs (proxied using adversarial examples) and the computational cost of test case generation. Our experiments are conducted on state-of-the-art DNNs obtained using popular open source datasets, including MNIST, CIFAR-10 and ImageNet." ] }
1907.12933
2965255081
Artificial Neural networks (ANNs) are powerful computing systems employed for various applications due to their versatility to generalize and to respond to unexpected inputs patterns. However, implementations of ANNs for safety-critical systems might lead to failures, which are hardly predicted in the design phase since ANNs are highly parallel and their parameters are hardly interpretable. Here we develop and evaluate a novel symbolic software verification framework based on incremental bounded model checking (BMC) to check for adversarial cases and coverage methods in multi-layer perceptron (MLP). In particular, we further develop the efficient SMT-based Context-Bounded Model Checker for Graphical Processing Units (ESBMC-GPU) in order to ensure the reliability of certain safety properties in which safety-critical systems can fail and make incorrect decisions, thereby leading to unwanted material damage or even put lives in danger. This paper marks the first symbolic verification framework to reason over ANNs implemented in CUDA. Our experimental results show that our approach implemented in ESBMC-GPU can successfully verify safety properties and covering methods in ANNs and correctly generate 28 adversarial cases in MLPs.
@cite_13 also proposed a novel approach for automatically identifying safe regions of the input space w.r.t. some labels. The core idea is to identify safe regions w.r.t. labeled targets, i.e., to provide a specific guarantee that a region is robust against adversarial perturbations w.r.t. a target label. Since full robustness is too strong a requirement for many ANNs, targeted robustness is the main property. The technique combines clustering and verification: clustering is used to split the dataset into subsets of inputs with the same label, and each cluster is then verified by Reluplex @cite_9 to establish a safe region w.r.t. the target label. The resulting tool, called DeepSafe, is evaluated on ANNs trained on the MNIST and ACAS Xu datasets.
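The following sketch, which uses scikit-learn's KMeans purely for illustration, shows the data-guided first half of such a pipeline: inputs are clustered per label, and each cluster's centroid and radius form a candidate region that a verifier such as Reluplex would then confirm or refute. The shapes, parameters, and radius definition are assumptions, not DeepSafe's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def candidate_safe_regions(inputs, labels, clusters_per_label=3, seed=0):
    """Cluster the inputs of each label separately; each cluster yields a
    candidate region (label, centroid, radius).  A verifier would then check
    that no point within the radius is classified with a different label."""
    regions = []
    for label in np.unique(labels):
        pts = inputs[labels == label]
        km = KMeans(n_clusters=min(clusters_per_label, len(pts)),
                    n_init=10, random_state=seed).fit(pts)
        for c, centre in enumerate(km.cluster_centers_):
            members = pts[km.labels_ == c]
            radius = np.max(np.linalg.norm(members - centre, axis=1))
            regions.append((label, centre, radius))
    return regions

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] > 0).astype(int)               # toy labels
regions = candidate_safe_regions(X, y)
```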
{ "cite_N": [ "@cite_9", "@cite_13" ], "mid": [ "2594877703", "2761709036" ], "abstract": [ "Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.", "Deep neural networks have become widely used, obtaining remarkable results in domains such as computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, and bio-informatics, where they have produced results comparable to human experts. However, these networks can be easily fooled by adversarial perturbations: minimal changes to correctly-classified inputs, that cause the network to mis-classify them. This phenomenon represents a concern for both safety and security, but it is currently unclear how to measure a network's robustness against such perturbations. Existing techniques are limited to checking robustness around a few individual input points, providing only very limited guarantees. We propose a novel approach for automatically identifying safe regions of the input space, within which the network is robust against adversarial perturbations. The approach is data-guided, relying on clustering to identify well-defined geometric regions as candidate safe regions. We then utilize verification techniques to confirm that these regions are safe or to provide counter-examples showing that they are not safe. We also introduce the notion of targeted robustness which, for a given target label and region, ensures that a NN does not map any input in the region to the target label. We evaluated our technique on the MNIST dataset and on a neural network implementation of a controller for the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu). For these networks, our approach identified multiple regions which were completely safe as well as some which were only safe for specific labels. It also discovered several adversarial perturbations of interest." ] }
1907.12933
2965255081
Artificial Neural networks (ANNs) are powerful computing systems employed for various applications due to their versatility to generalize and to respond to unexpected inputs patterns. However, implementations of ANNs for safety-critical systems might lead to failures, which are hardly predicted in the design phase since ANNs are highly parallel and their parameters are hardly interpretable. Here we develop and evaluate a novel symbolic software verification framework based on incremental bounded model checking (BMC) to check for adversarial cases and coverage methods in multi-layer perceptron (MLP). In particular, we further develop the efficient SMT-based Context-Bounded Model Checker for Graphical Processing Units (ESBMC-GPU) in order to ensure the reliability of certain safety properties in which safety-critical systems can fail and make incorrect decisions, thereby leading to unwanted material damage or even put lives in danger. This paper marks the first symbolic verification framework to reason over ANNs implemented in CUDA. Our experimental results show that our approach implemented in ESBMC-GPU can successfully verify safety properties and covering methods in ANNs and correctly generate 28 adversarial cases in MLPs.
In addition to ESBMC-GPU, other tools can verify CUDA programs, each with its own approach and targeting specific property violations. However, to the best of our knowledge, ESBMC-GPU is the first verifier to check for adversarial cases and coverage methods in ANNs implemented in CUDA. For instance, GPUVerify @cite_10 is based on synchronous, delayed-visibility semantics and focuses on detecting data races and barrier divergence, reducing kernel verification to the analysis of sequential programs. GPU+KLEE (GKLEE) @cite_23 , in turn, is a concrete and symbolic execution tool that considers both kernels and main functions, checking for deadlocks, memory coalescing, data races, warp divergence, and compilation-level issues. Also, the Concurrency Intermediate Verification Language (CIVL) @cite_1 , a framework for static analysis and concurrent program verification, uses abstract syntax trees and partial order reduction to detect violations of user-specified assertions, deadlocks, memory leaks, invalid pointer dereferences, array out-of-bounds accesses, and division by zero.
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_23" ], "mid": [ "2245258423", "2121717408", "2076960126" ], "abstract": [ "CIVL is a framework for static analysis and verification of concurrent programs. One of the main challenges to practical application of these techniques is the large number of ways to express concurrency: MPI, OpenMP, CUDA, and Pthreads, for example, are just a few of many \"concurrency dialects\" in wide use today. These dialects are constantly evolving and it is increasingly common to use several of them in a single \"hybrid\" program. CIVL addresses these problems by providing a concurrency intermediate verification language, CIVL-C, as well as translators that consume C programs using these dialects and produce CIVL-C. Analysis and verification tools which operate on CIVL-C can then be applied easily to a wide variety of concurrent C programs. We demonstrate CIVL's error detection and verification capabilities on (1) an MPI+OpenMP program that estimates &#x03C0; and contains a subtle race condition, and (2) an MPI-based 1d-wave simulator that fails to conform to a simple sequential implementation.", "We present a technique for verifying race- and divergence-freedom of GPU kernels that are written in mainstream kernel programming languages such as OpenCL and CUDA. Our approach is founded on a novel formal operational semantics for GPU programming termed synchronous, delayed visibility (SDV) semantics. The SDV semantics provides a precise definition of barrier divergence in GPU kernels and allows kernel verification to be reduced to analysis of a sequential program, thereby completely avoiding the need to reason about thread interleavings, and allowing existing modular techniques for program verification to be leveraged. We describe an efficient encoding for data race detection and propose a method for automatically inferring loop invariants required for verification. We have implemented these techniques as a practical verification tool, GPUVerify, which can be applied directly to OpenCL and CUDA source code. We evaluate GPUVerify with respect to a set of 163 kernels drawn from public and commercial sources. Our evaluation demonstrates that GPUVerify is capable of efficient, automatic verification of a large number of real-world kernels.", "Programs written for GPUs often contain correctness errors such as races, deadlocks, or may compute the wrong result. Existing debugging tools often miss these errors because of their limited input-space and execution-space exploration. Existing tools based on conservative static analysis or conservative modeling of SIMD concurrency generate false alarms resulting in wasted bug-hunting. They also often do not target performance bugs (non-coalesced memory accesses, memory bank conflicts, and divergent warps). We provide a new framework called GKLEE that can analyze C++ GPU programs, locating the aforesaid correctness and performance bugs. For these programs, GKLEE can also automatically generate tests that provide high coverage. These tests serve as concrete witnesses for every reported bug. They can also be used for downstream debugging, for example to test the kernel on the actual hardware. We describe the architecture of GKLEE, its symbolic virtual machine model, and describe previously unknown bugs and performance issues that it detected on commercial SDK kernels. We describe GKLEE's test-case reduction heuristics, and the resulting scalability improvement for a given coverage target." ] }
1907.12933
2965255081
Artificial Neural networks (ANNs) are powerful computing systems employed for various applications due to their versatility to generalize and to respond to unexpected inputs patterns. However, implementations of ANNs for safety-critical systems might lead to failures, which are hardly predicted in the design phase since ANNs are highly parallel and their parameters are hardly interpretable. Here we develop and evaluate a novel symbolic software verification framework based on incremental bounded model checking (BMC) to check for adversarial cases and coverage methods in multi-layer perceptron (MLP). In particular, we further develop the efficient SMT-based Context-Bounded Model Checker for Graphical Processing Units (ESBMC-GPU) in order to ensure the reliability of certain safety properties in which safety-critical systems can fail and make incorrect decisions, thereby leading to unwanted material damage or even put lives in danger. This paper marks the first symbolic verification framework to reason over ANNs implemented in CUDA. Our experimental results show that our approach implemented in ESBMC-GPU can successfully verify safety properties and covering methods in ANNs and correctly generate 28 adversarial cases in MLPs.
Our approach, implemented on top of ESBMC-GPU, has some similarities with the other techniques described here, e.g., the covering methods proposed by @cite_34 and the model checking of adversarial cases proposed by @cite_32 . However, the main differences lie in our requirements and in how we handle actual implementations of ANNs. To run our proposed safety verification, only the ANN with its weight and bias descriptors and the desired input from the dataset are required. For tools such as DeepConcolic @cite_34 and DLV @cite_32 , obtaining adversarial cases or safety guarantees for different ANNs is not an easy task, due to the focus given to well-known datasets such as MNIST @cite_17 or CIFAR-10 @cite_22 during tool development. In our proposed approach, there is no need to provide specific datasets, only the desired dataset sample to be verified. Besides these requirements, the user needs to know how cuDNN @cite_21 handles ANNs.
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_32", "@cite_34", "@cite_17" ], "mid": [ "", "1667652561", "2543296129", "2793633339", "2750384547" ], "abstract": [ "", "We present a library that provides optimized implementations for deep learning primitives. Deep learning workloads are computationally intensive, and optimizing the kernels of deep learning workloads is difficult and time-consuming. As parallel architectures evolve, kernels must be reoptimized for new processors, which makes maintaining codebases difficult over time. Similar issues have long been addressed in the HPC community by libraries such as the Basic Linear Algebra Subroutines (BLAS) [2]. However, there is no analogous library for deep learning. Without such a library, researchers implementing deep learning workloads on parallel processors must create and optimize their own implementations of the main computational kernels, and this work must be repeated as new parallel processors emerge. To address this problem, we have created a library similar in intent to BLAS, with optimized routines for deep learning workloads. Our implementation contains routines for GPUs, and similarly to the BLAS library, could be implemented for other platforms. The library is easy to integrate into existing frameworks, and provides optimized performance and memory usage. For example, integrating cuDNN into Caffe, a popular framework for convolutional networks, improves performance by 36 on a standard model while also reducing memory consumption.", "Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on safety of image classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions that would result in the same class being assigned by a human, and define safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness.", "Deep neural networks (DNNs) have a wide range of applications, and software employing them must be thoroughly tested, especially in safety-critical domains. However, traditional software test coverage metrics cannot be applied directly to DNNs. In this paper, inspired by the MC DC coverage criterion, we propose a family of four novel test criteria that are tailored to structural features of DNNs and their semantics. 
We validate the criteria by demonstrating that the generated test inputs guided via our proposed coverage criteria are able to capture undesired behaviours in a DNN. Test cases are generated using a symbolic approach and a gradient-based heuristic search. By comparing them with existing methods, we show that our criteria achieve a balance between their ability to find bugs (proxied using adversarial examples) and the computational cost of test case generation. Our experiments are conducted on state-of-the-art DNNs obtained using popular open source datasets, including MNIST, CIFAR-10 and ImageNet.", "We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at this https URL" ] }
1907.12821
2966519285
Pseudo-Boolean monotone functions are unimodal functions which are trivial to optimize for some hillclimbers, but are challenging for a surprising number of evolutionary algorithms (EAs). A general trend is that EAs are efficient if parameters like the mutation rate are set conservatively, but may need exponential time otherwise. In particular, it was known that the @math -EA and the @math -EA can optimize every monotone function in pseudolinear time if the mutation rate is @math for some @math . The second part of the statement was also known for the @math -EA. In this paper we show that the first statement does not apply to the @math -EA. More precisely, we prove that for every constant @math there is a constant integer @math such that the @math -EA with mutation rate @math and population size @math needs superpolynomial time to optimize some monotone functions. Thus, increasing the population size by just a constant has devastating effects on the performance. This is in stark contrast to many other benchmark functions on which increasing the population size either increases the performance significantly, or affects performance mildly. The reason why larger populations are harmful lies in the fact that larger populations may temporarily decrease selective pressure on parts of the population. This allows unfavorable mutations to accumulate in single individuals and their descendants. If the population moves sufficiently fast through the search space, such unfavorable descendants can become ancestors of future generations, and the bad mutations are preserved. Remarkably, this effect only occurs if the population renews itself sufficiently fast, which can only happen far away from the optimum. This is counter-intuitive since usually optimization gets harder as we approach the optimum.
The analysis of EAs on monotone functions started in 2010 with the work of Doerr, Jansen, Sudholt, Winzen and Zarges @cite_15 @cite_10 . Their contribution was twofold: firstly, they showed that the @math -EA, which flips each bit independently with a static mutation rate of @math , needs time @math on all monotone functions if the mutation parameter @math is a constant strictly smaller than one. This result was already implicit in @cite_6 .
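For concreteness, here is a minimal sketch of the kind of algorithm being analyzed: a (1+1)-EA flipping each bit independently with probability c/n, run on OneMax as a simple example of a monotone function. It illustrates the setup only and is not an artifact of the cited analyses; the parameter values are illustrative.

```python
import random

def one_plus_one_ea(f, n, c=0.9, max_iters=10**6, seed=0):
    """(1+1)-EA: flip each bit independently with probability c/n and
    accept the offspring if it is at least as fit as the parent."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for t in range(1, max_iters + 1):
        y = [b ^ (rng.random() < c / n) for b in x]   # standard bit mutation
        fy = f(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == n:                                    # optimum of OneMax
            return t
    return None

onemax = sum                                           # a monotone function
print(one_plus_one_ea(onemax, n=100, c=0.9))           # expect O(n log n) steps
```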
{ "cite_N": [ "@cite_15", "@cite_10", "@cite_6" ], "mid": [ "1892716203", "2109000177", "2093184507" ], "abstract": [ "Extending previous analyses on function classes like linear functions, we analyze how the simple (1+1) evolutionary algorithm optimizes pseudo-Boolean functions that are strictly monotone. Contrary to what one would expect, not all of these functions are easy to optimize. The choice of the constant c in the mutation probability p(n) = c n can make a decisive difference. We show that if c > 1, then the (1+1) EA finds the optimum of every such function in Θ(n log n) iterations. For c = 1, we can still prove an upper bound of O(n3 2). However, for c > 33, we present a strictly monotone function such that the (1+1) EA with overwhelming probability does not find the optimum within 2Ω(n) iterations. This is the first time that we observe that a constant factor change of the mutation probability changes the run-time by more than constant factors.", "Extending previous analyses on function classes like linear functions, we analyze how the simple 1+1 evolutionary algorithm optimizes pseudo-Boolean functions that are strictly monotonic. These functions have the property that whenever only 0-bits are changed to 1, then the objective value strictly increases. Contrary to what one would expect, not all of these functions are easy to optimize. The choice of the constant c in the mutation probability pn=c n can make a decisive difference. We show that if c iterations. For c=1, we can still prove an upper bound of On3 2. However, for , we present a strictly monotonic function such that the 1+1 EA with overwhelming probability needs iterations to find the optimum. This is the first time that we observe that a constant factor change of the mutation probability changes the runtime by more than a constant factor.", "Evolutionary algorithms are randomized search heuristics that are often described as robust general purpose problem solvers. It is known, however, that the performance of an evolutionary algorithm may be very sensitive to the setting of some of its parameters. A different perspective is to investigate changes in the expected optimization time due to small changes in the fitness landscape. A class of fitness functions where the expected optimization time of the (1+1) evolutionary algorithm is of the same magnitude for almost all of its members is the set of linear fitness functions. Using linear functions as a starting point, a model of a fitness landscape is devised that incorporates important properties of linear functions. Unexpectedly, the expected optimization time of the (1+1) evolutionary algorithm is clearly larger for this fitness model than on linear functions." ] }
1907.12821
2966519285
Most other work on population-based algorithms has shown benefits of larger population sizes, especially when crossover is used @cite_11 @cite_12 @cite_2 @cite_14 . The only exception in which a population has theoretically been proven to be severely disadvantageous is on Ignoble Trails. This rather specific function has been carefully designed to lead into a trap for crossover operators @cite_13 , and it is deceptive for @math if crossover is used, but not for @math . Arguably, the functions are also rather artificial, although they were not specifically designed to be deceptive for populations. However, regarding the larger and more natural framework of monotone functions, our results imply that a with mutation parameter @math does not optimize all monotone functions efficiently if @math is too large, while the corresponding is efficient.
{ "cite_N": [ "@cite_14", "@cite_2", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2011559870", "2015353056", "2160436193", "2497618134", "2803852436" ], "abstract": [ "Understanding the impact of crossover on performance is a major problem in the theory of genetic algorithms (GAs). We present new insight on working principles of crossover by analyzing the performance of crossover-based GAs on the simple functions OneMax and Jump. First, we assess the potential speedup by crossover when combined with a fitness-invariant bit shuffling operator that simulates a lineage of independent evolution on a function of unitation. Theoretical and empirical results show drastic speedups for both functions. Second, we consider a simple GA without shuffling and investigate the interplay of mutation and crossover on Jump. If the crossover probability is small, subsequent mutations create sufficient diversity, even for very small populations. Contrarily, with high crossover probabilities crossover tends to lose diversity more quickly than mutation can create it. This has a drastic impact on the performance on Jump. We complement our theoretical findings by Monte Carlo simulations on the population diversity.", "Evolutionary algorithms (EAs) are increasingly popular approaches to multi-objective optimization. One of their significant advantages is that they can directly optimize the Pareto front by evolving a population of solutions, where the recombination (also called crossover) operators are usually employed to reproduce new and potentially better solutions by mixing up solutions in the population. Recombination in multi-objective evolutionary algorithms is, however, mostly applied heuristically. In this paper, we investigate how from a theoretical viewpoint a recombination operator will affect a multi-objective EA. First, we employ artificial benchmark problems: the Weighted LPTNO problem (a generalization of the well-studied LOTZ problem), and the well-studied COCZ problem, for studying the effect of recombination. Our analysis discloses that recombination may accelerate the filling of the Pareto front by recombining diverse solutions and thus help solve multi-objective optimization. Because of this, for these two problems, we find that a multi-objective EA with recombination enabled achieves a better expected running time than any known EAs with recombination disabled. We further examine the effect of recombination on solving the multi-objective minimum spanning tree problem, which is an NP-hard problem. Following our finding on the artificial problems, our analysis shows that recombination also helps accelerate filling the Pareto front and thus helps find approximate solutions faster.", "Beginning with the early days of the genetic algorithm and the schema theorem it has often been argued that the crossover operator is the more important genetic operator. The early Royal Road functions were put forth as an example where crossover would excel, yet mutation based EAs were subsequently shown to experimentally outperform GAs with crossover on these functions. Recently several new Royal Roads have been introduced and proved to require expected polynomial time for GAs with crossover, while needing exponential time to optimize for mutation-only EAs. 
This paper does the converse, showing proofs that GAs with crossover require exponential optimization time on new Ignoble Trail functions while mutation based EAs optimize them efficiently.", "Practical optimization problems frequently include uncertainty about the quality measure, for example due to noisy evaluations. Thus, they do not allow for a straightforward application of traditional optimization techniques. In these settings, randomized search heuristics such as evolutionary algorithms are a popular choice because they are often assumed to exhibit some kind of resistance to noise. Empirical evidence suggests that some algorithms, such as estimation of distribution algorithms (EDAs) are robust against a scaling of the noise intensity, even without resorting to explicit noise-handling techniques such as resampling. In this paper, we want to support such claims with mathematical rigor. We introduce the concept of graceful scaling in which the run time of an algorithm scales polynomially with noise intensity. We study a monotone fitness function over binary strings with additive noise taken from a Gaussian distribution. We show that myopic heuristics cannot efficiently optimize the function under arbitrarily intense noise without any explicit noise-handling. Furthermore, we prove that using a population does not help. Finally we show that a simple EDA called the Compact Genetic Algorithm can overcome the shortsightedness of mutation-only heuristics to scale gracefully with noise. We conjecture that recombinative genetic algorithms also have this property. This extended abstract summarizes our work \"The Benefit of Recombination in Noisy Evolutionary Search,\" which appeared in Proceedings of International Symposium on Algorithms and Computation (ISAAC), 2015, pp. 140--150.", "For theoretical analyses there are two specifics distinguishing GP from many other areas of evolutionary computation. First, the variable size representations, in particular yielding a possible bloat (i.e. the growth of individuals with redundant parts). Second, the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover had a surprisingly little share in this work." ] }
1907.12821
2966519285
Moreover, Lengler and Schaller pointed out an interesting connection between such monotone functions and a dynamic optimization problem in @cite_19 , which is arguably more natural. In that paper, the algorithm should optimize a linear function with positive weights, but the weights of the objective function are re-drawn in each round (independently and identically distributed). This setting is similar to monotone functions, since a one-bit is always preferable over a zero-bit, and the all-one string is always the global optimum. However, the weight of each bit changes from round to round, which somewhat resembles a function that switches between different hot topics as the algorithm progresses. In @cite_19 the (1+1)-EA was studied, and its behavior in the dynamic setting is very similar to its behavior on static monotone functions. It remains open whether the effects observed in our paper carry over to this dynamic setting.
{ "cite_N": [ "@cite_19" ], "mid": [ "2914741526" ], "abstract": [ "We study the well-known black-box optimisation algorithm (1+1)-EA on a novel type of noise model. In our noise model, the fitness function is linear with positive weights, but the absolute values of the weights may fluctuate in each round. Thus in every state, the fitness function indicates that one-bits are preferred over zero-bits. In particular, hillclimbing heuristics should be able to find the optimum fast. We show that the (1+1)-EA indeed finds the optimum in time @math if the mutation parameter is @math for a constant @math However, we also show that for @math the (1+1)-EA needs superpolynomial time to find the optimum. Thus the choice of mutation parameter is critical even for optimisation tasks in which there is a clear path to the the optimum. A similar threshold phenomenon has recently been shown for noise-free monotone fitness functions." ] }
1907.12736
2965903589
We present a simple yet effective prediction module for a one-stage detector. The main process is conducted in a coarse-to-fine manner. First, the module roughly adjusts the default boxes to well capture the extent of target objects in an image. Second, given the adjusted boxes, the module aligns the receptive field of the convolution filters accordingly, not requiring any embedding layers. Both steps build a propose-and-attend mechanism, mimicking two-stage detectors in a highly efficient manner. To verify its effectiveness, we apply the proposed module to a basic one-stage detector SSD. Our final model achieves an accuracy comparable to that of state-of-the-art detectors while using a fraction of their model parameters and computational overheads. Moreover, we found that the proposed module has two strong applications. 1) The module can be successfully integrated into a lightweight backbone, further pushing the efficiency of the one-stage detector. 2) The module also allows train-from-scratch without relying on any sophisticated base networks as previous methods do.
Two-stage detectors @cite_1 @cite_33 are composed of two parts. The first part generates a sparse set of region proposals, and the second part further classifies and regresses the proposals. These two-stage detectors have occupied top entries of challenging benchmarks @cite_33 @cite_32 @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_32", "@cite_1", "@cite_33" ], "mid": [ "", "2565639579", "2613718673", "2407521645" ], "abstract": [ "", "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN [7, 19] that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets) [10], for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: https: github.com daijifeng001 r-fcn." ] }
1907.12861
2965663461
We introduce LEAF-QA, a comprehensive dataset of @math densely annotated figures charts, constructed from real-world open data sources, along with 2 million question-answer (QA) pairs querying the structure and semantics of these charts. LEAF-QA highlights the problem of multimodal QA, which is notably different from conventional visual QA (VQA), and has recently gained interest in the community. Furthermore, LEAF-QA is significantly more complex than previous attempts at chart QA, viz. FigureQA and DVQA, which present only limited variations in chart data. LEAF-QA being constructed from real-world sources, requires a novel architecture to enable question answering. To this end, LEAF-Net, a deep architecture involving chart element localization, question and answer encoding in terms of chart elements, and an attention network is proposed. Different experiments are conducted to demonstrate the challenges of QA on LEAF-QA. The proposed architecture, LEAF-Net also considerably advances the current state-of-the-art on FigureQA and DVQA.
There has been recent interest in analyzing figures and charts, particularly to understand the type of visualization and for data extraction from chart images. @cite_26 describe algorithms to extract data from pie and bar charts, particularly to re-visualize them. Further, interactive methods for bar chart extraction have been studied @cite_34 @cite_22 . @cite_14 describe an object detection framework for extracting scatter plot elements. Similarly, an analysis for line plot extraction has been presented by @cite_19 . There have also been attempts at indexing figures @cite_35 @cite_11 for search and classification. The authors of @cite_15 and @cite_8 describe methods for improving text and symbol extraction from figures. @cite_7 describe a framework to restyle different kinds of visualizations by manipulating the data in the underlying SVGs.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_26", "@cite_22", "@cite_7", "@cite_8", "@cite_19", "@cite_15", "@cite_34", "@cite_11" ], "mid": [ "2103558289", "2607825849", "2053604034", "", "2048349970", "2725765016", "2139488215", "2075058343", "2595457065", "" ], "abstract": [ "Scholarly documents contain multiple figures representing experimental findings. These figures are generated from data which is not reported anywhere else in the paper. We propose a modular architecture for analyzing such figures. Our architecture consists of the following modules: 1. An extractor for figures and associated metadata (figure captions and mentions) from PDF documents; 2. A Search engine on the extracted figures and metadata; 3. An image processing module for automated data extraction from the figures and 4. A natural language processing module to understand the semantics of the figure. We discuss the challenges in each step, report an extractor algorithm to extract vector graphics from scholarly documents and a classification algorithm for figures. Our extractor algorithm improves the state of the art by more than 10 and the classification process is very scalable, yet achieves 85 accuracy. We also describe a semi-automatic system for data extraction from figures which is integrated with our search engine to improve user experience.", "Charts are an excellent way to convey patterns and trends in data, but they do not facilitate further modeling of the data or close inspection of individual data points. We present a fully automated system for extracting the numerical values of data points from images of scatter plots. We use deep learning techniques to identify the key components of the chart, and optical character recognition together with robust regression to map from pixels to the coordinate system of the chart. We focus on scatter plots with linear scales, which already have several interesting challenges. Previous work has done fully automatic extraction for other types of charts, but to our knowledge this is the first approach that is fully automatic for scatter plots. Our method performs well, achieving successful data extraction on 89 of the plots in our test set.", "Poorly designed charts are prevalent in reports, magazines, books and on the Web. Most of these charts are only available as bitmap images; without access to the underlying data it is prohibitively difficult for viewers to create more effective visual representations. In response we present ReVision, a system that automatically redesigns visualizations to improve graphical perception. Given a bitmap image of a chart as input, ReVision applies computer vision and machine learning techniques to identify the chart type (e.g., pie chart, bar chart, scatterplot, etc.). It then extracts the graphical marks and infers the underlying data. Using a corpus of images drawn from the web, ReVision achieves image classification accuracy of 96 across ten chart categories. It also accurately extracts marks from 79 of bar charts and 62 of pie charts, and from these charts it successfully extracts data from 71 of bar charts and 64 of pie charts. ReVision then applies perceptually-based design principles to populate an interactive gallery of redesigned charts. With this interface, users can view alternative chart designs and retarget content to different visual styles.", "", "The D3 JavaScript library has become a ubiquitous tool for developing visualizations on the Web. 
Yet, once a D3 visualization is published online its visual style is difficult to change. We present a pair of tools for deconstructing and restyling existing D3 visualizations. Our deconstruction tool analyzes a D3 visualization to extract the data, the marks and the mappings between them. Our restyling tool lets users modify the visual attributes of the marks as well as the mappings from the data to these attributes. Together our tools allow users to easily modify D3 visualizations without examining the underlying code and we show how they can be used to deconstruct and restyle a variety of D3 visualizations.", "We investigate how to automatically recover visual encodings from a chart image, primarily using inferred text elements. We contribute an end-to-end pipeline which takes a bitmap image as input and returns a visual encoding specification as output. We present a text analysis pipeline which detects text elements in a chart, classifies their role e.g., chart title, x-axis label, y-axis title, etc., and recovers the text content using optical character recognition. We also train a Convolutional Neural Network for mark type classification. Using the identified text elements and graphical mark type, we can then infer the encoding specification of an input chart image. We evaluate our techniques on three chart corpora: a set of automatically labeled charts generated using Vega, charts from the Quartz news website, and charts extracted from academic papers. We demonstrate accurate automatic inference of text elements, mark types, and chart specifications across a variety of input chart types.", "Information graphics, such as graphs and plots, are used in technical documents to convey information to humans and to facilitate greater understanding. Usually, graphics are a key component in a technical document, as they enable the author to convey complex ideas in a simplified visual format. However, in an automatic text recognition system, which are typically used to digitize documents, the ideas conveyed in a graphical format are lost. We contend that the message or extracted information can be used to help better understand the ideas conveyed in the document. In scientific papers, line plots are the most commonly used graphic to represent experimental results in the form of correlation present between values represented on the axes. The contribution of our work is in the series of image processing algorithms that are used to automatically extract relevant information, including text and plot from graphics found in technical documents. We validate the approach by performing the experiments on a dataset of line plots obtained from scientific documents from computer science conference papers and evaluate the variation of a reconstructed curve from the original curve. Our algorithm achieves a classification accuracy of 91 across the dataset and successfully extracts the axes from 92 of line plots. Axes label extraction and line curve tracing are performed successfully in about half the line plots as well.", "Existing research on analyzing information graphics assume to have a perfect text detection and extraction available. However, text extraction from information graphics is far from solved. To fill this gap, we propose a novel processing pipeline for multi-oriented text extraction from infographics. 
The pipeline applies a combination of data mining and computer vision techniques to identify text elements, cluster them into text lines, compute their orientation, and uses a state-of-the-art open source OCR engine to perform the text recognition. We evaluate our method on 121 infographics extracted from an open access corpus of scientific publications. The results show that our approach is effective and significantly outperforms a state-of-the-art baseline.", "Charts are commonly used to present data in digital documents such as web pages, research papers, or presentation slides. When the underlying data is not available, it is necessary to extract the data from a chart image to utilize the data for further analysis or improve the chart for more accurate perception. In this paper, we present ChartSense, an interactive chart data extraction system. ChartSense first determines the chart type of a given chart image using a deep learning based classifier, and then extracts underlying data from the chart image using semi-automatic, interactive extraction algorithms optimized for each chart type. To evaluate chart type classification accuracy, we compared ChartSense with ReVision, a system with the state-of-the-art chart type classifier. We found that ChartSense was more accurate than ReVision. In addition, to evaluate data extraction performance, we conducted a user study, comparing ChartSense with WebPlotDigitizer, one of the most effective chart data extraction tools among publicly accessible ones. Our results showed that ChartSense was better than WebPlotDigitizer in terms of task completion time, error rate, and subjective preference.", "" ] }
1907.12861
2965663461
Learning to answer questions based on natural images has been an area of extensive research in recent years. Several datasets including DAQUAR @cite_6 , COCO-QA @cite_16 , VQA @cite_2 , Visual7w @cite_9 and MovieQA @cite_33 have been proposed to explore different facets of question answering on natural images and videos. Correspondingly, methods using attention @cite_29 @cite_32 @cite_10 , neural modules @cite_24 and compositional modeling @cite_23 have been explored. There has also been related work on question answering over synthetic data @cite_27 @cite_30 . However, the current work is most related to recent work on multimodal question answering @cite_17 @cite_25 , which shows that current VQA models do not perform well when reasoning about text in natural images, and hence there is a need to jointly learn from image content and scene text for question answering.
{ "cite_N": [ "@cite_30", "@cite_33", "@cite_9", "@cite_29", "@cite_32", "@cite_6", "@cite_24", "@cite_27", "@cite_23", "@cite_2", "@cite_16", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "", "2963890755", "2962749469", "1514535095", "", "2151498684", "2964118342", "2561715562", "2963383024", "2950761309", "1575833922", "", "", "" ], "abstract": [ "", "We introduce the MovieQA dataset which aims to evaluate automatic story comprehension from both video and text. The dataset consists of 14,944 questions about 408 movies with high semantic diversity. The questions range from simpler \"Who\" did \"What\" to \"Whom\", to \"Why\" and \"How\" certain events occurred. Each question comes with a set of five possible answers, a correct one and four deceiving answers provided by human annotators. Our dataset is unique in that it contains multiple sources of information – video clips, plots, subtitles, scripts, and DVS [32]. We analyze our data through various statistics and methods. We further extend existing QA techniques to show that question-answering with such open-ended semantics is hard. We make this data set public along with an evaluation benchmark to encourage inspiring work in this challenging domain.", "We have seen great progress in basic perceptual tasks such as object recognition and detection. However, AI models still fail to match humans in high-level vision tasks due to the lack of capacities for deeper reasoning. Recently the new task of visual question answering (QA) has been proposed to evaluate a model's capacity for deep image understanding. Previous works have established a loose, global association between QA sentences and images. However, many questions and answers, in practice, relate to local regions in the images. We establish a semantic link between textual descriptions and image regions by object-level grounding. It enables a new type of QA with visual answers, in addition to textual answers used in previous work. We study the visual QA tasks in a grounded setting with a large collection of 7W multiple-choice QA pairs. Furthermore, we evaluate human performance and several baseline models on the QA tasks. Finally, we propose a novel LSTM model with spatial attention to tackle the 7W QA tasks.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr9k, Flickr30k and MS COCO.", "", "We propose a method for automatically answering questions about images by bringing together recent advances from natural language processing and computer vision. We combine discrete reasoning with uncertain predictions by a multi-world approach that represents uncertainty about the perceived world in a bayesian framework. Our approach can handle human questions of high complexity about realistic scenes and replies with range of answer like counts, object classes, instances and lists of them. The system is directly trained from question-answer pairs. 
We establish a first benchmark for this task that can be seen as a modern attempt at a visual turing test.", "Visual question answering is fundamentally compositional in nature—a question like where is the dog? shares substructure with questions like what color is the dog? and where is the cat? This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning neural module networks, which compose collections of jointly-trained neural \"modules\" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.", "When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover short-comings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.", "", "We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).", "This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. 
We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.", "", "", "" ] }
1907.12646
2964433578
In this paper, we propose a noise-aware exposure control algorithm for robust robot vision. Our method aims to capture the best-exposed image which can boost the performance of various computer vision and robotics tasks. For this purpose, we carefully design an image quality metric which captures complementary quality attributes and ensures light-weight computation. Specifically, our metric consists of a combination of image gradient, entropy, and noise metrics. The synergy of these measures allows preserving sharp edge and rich texture in the image while maintaining a low noise level. Using this novel metric, we propose a real-time and fully automatic exposure and gain control technique based on the Nelder-Mead method. To illustrate the effectiveness of our technique, a large set of experimental results demonstrates higher qualitative and quantitative performances when compared with conventional approaches.
Capturing a well-exposed image is an essential precondition for applying any vision-based algorithm in challenging environments. In this paper, we define the term well-exposed image from a robotics point of view as an image containing texture details and sharp object boundaries, with low noise, saturation, and blur. In fact, these conditions are desirable for various tasks such as visual-SLAM @cite_15 , which requires robust and repeatable keypoint detection; instance segmentation @cite_0 , which requires sharp object boundaries; and object classification, where even imperceptible noise may lead to misclassification @cite_3 .
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_3" ], "mid": [ "", "2535547924", "2953047670" ], "abstract": [ "", "We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.", "Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images." ] }
1907.12646
2964433578
The main problem of gradient-based metrics is their tendency to favor high exposures, which in turn leads to over-exposed images. To avoid this problem, Kim et al. @cite_17 proposed a gradient weighting scheme based on local image entropy. The optimal exposure is estimated via a Bayesian optimization framework, which finds the global solution by estimating surrogate models. However, the complexity of the Bayesian optimization and the weighting scheme does not allow real-time operation.
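As a rough illustration of the kind of image statistics such metrics combine, the toy Python sketch below scores a grayscale image by mean gradient magnitude and histogram entropy; it is only a simplified stand-in, not the entropy-weighted gradient metric or the Bayesian optimization procedure of the cited work, and the combination weights are arbitrary.

import numpy as np

def gradient_score(img):
    # Mean gradient magnitude (finite differences); favours sharp detail.
    gx = np.diff(img.astype(np.float64), axis=1)
    gy = np.diff(img.astype(np.float64), axis=0)
    return 0.5 * (np.abs(gx).mean() + np.abs(gy).mean())

def entropy_score(img, bins=256):
    # Shannon entropy of the intensity histogram; favours rich texture.
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log2(p)).sum()

def quality(img, w_grad=1.0, w_ent=1.0):
    # Illustrative linear combination only; the weights are arbitrary here.
    return w_grad * gradient_score(img) + w_ent * entropy_score(img)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    textured = rng.integers(0, 256, size=(64, 64))
    saturated = np.clip(textured * 4, 0, 255)   # mimics over-exposure
    print("textured image :", round(float(quality(textured)), 2))
    print("saturated image:", round(float(quality(saturated)), 2))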
{ "cite_N": [ "@cite_17" ], "mid": [ "2889607906" ], "abstract": [ "Under- and oversaturation can cause severe image degradation in many vision-based robotic applications. To control camera exposure in dynamic lighting conditions, we introduce a novel metric for image information measure. Measuring an image gradient is typical when evaluating its level of image detail. However, emphasizing more informative pixels substantially improves the measure within an image. By using this entropy weighted image gradient, we introduce an optimal exposure value for vision-based approaches. Using this newly invented metric, we also propose an effective exposure control scheme that covers a wide range of light conditions. When evaluating the function (e.g., image frame grab) is expensive, the next best estimation needs to be carefully considered. Through Bayesian optimization, the algorithm can estimate the optimal exposure value with minimal cost. We validated the proposed image information measure and exposure control scheme via a series of thorough experiments using various exposure conditions." ] }
1907.12782
2964885899
Bluetooth Low Energy (BLE) has become an intrinsic wireless technology for the Internet of Things (IoT). With the proliferation of BLE-embedded IoT devices, it is important to study the security and privacy implications of BLE. The forefront attack to BLE devices is the wireless sniffing attack, which would lead to more detrimental threats like jamming, encryption cracking or system penetration. Existing sniffing attacks are based on the correct detection of BLE connection initiation state, but they become ineffective for BLE long-lived connections. In this paper, we focus on the adversary setting with a low-cost single radio and develop a suite of real-time algorithms to determine the key parameters necessary to follow and sniff a BLE connection in the connected state. We implement our algorithms in the open source platform -Ubertooth One and evaluate its performance in terms of sniffing overhead and accuracy. By comparing with state-of-the-art schemes, experimental results show that our sniffer achieves much higher sniffing accuracy (over 80 ) and better stability to BLE operational dynamics.
The aforementioned sniffing attacks are based on multi-radio platforms, which can be prohibitively expensive for less-capable adversaries. Currently, one popular platform, Ubertooth One, is an open-source, low-cost, single-radio Bluetooth sniffer developed by Ryan @cite_17 . It is a powerful sniffer that relies on observing advertisement packets and looking for the AFH parameters needed to follow a connection. As stated earlier, it is difficult to sniff a BLE connection after the connection has already been established. In @cite_17 , Ryan suggests how AFH parameters can be extracted from a long-lived BLE connection via jamming. Moreover, Ryan also demonstrates a proof-of-concept key re-negotiation and man-in-the-middle (MitM) attack on BLE devices.
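To illustrate what following an established connection involves, the Python sketch below reproduces the hop rule of BLE data-channel selection algorithm #1 from the Bluetooth 4.x specification, which a single-radio sniffer has to replicate once it has recovered the hop increment and channel map; the connection parameters in the example are made up.

def next_data_channel(last_unmapped, hop_increment, used_channels):
    # One step of BLE channel selection algorithm #1.
    #   last_unmapped : previous unmapped channel index (0..36)
    #   hop_increment : per-connection constant in the range 5..16
    #   used_channels : sorted list of data channels enabled in the channel map
    # Returns (new_unmapped_channel, channel_to_listen_on).
    unmapped = (last_unmapped + hop_increment) % 37
    if unmapped in used_channels:
        return unmapped, unmapped
    # Remap onto the set of used channels.
    remap_index = unmapped % len(used_channels)
    return unmapped, used_channels[remap_index]

if __name__ == "__main__":
    # Hypothetical parameters a sniffer might have recovered.
    used = [ch for ch in range(37) if ch not in (0, 1, 12, 25)]
    unmapped, hop = 0, 7
    for event in range(6):
        unmapped, channel = next_data_channel(unmapped, hop, used)
        print(f"connection event {event}: listen on data channel {channel}")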
{ "cite_N": [ "@cite_17" ], "mid": [ "58703277" ], "abstract": [ "We discuss our tools and techniques to monitor and inject packets in Bluetooth Low Energy. Also known as BTLE or Bluetooth Smart, it is found in recent high-end smartphones, sports devices, sensors, and will soon appear in many medical devices. We show that we can effectively render useless the encryption of any Bluetooth Low Energy link." ] }
1907.12648
2966026106
In multi-agent path finding (MAPF) the task is to navigate agents from their starting positions to given individual goals. The problem takes place in an undirected graph whose vertices represent positions and edges define the topology. Agents can move to neighbor vertices across edges. In the standard MAPF, space occupation by agents is modeled by a capacity constraint that permits at most one agent per vertex. We suggest an extension of MAPF in this paper that permits more than one agent per vertex. Propositional satisfiability (SAT) models for these extensions of MAPF are studied. We focus on modeling capacity constraints in SAT-based formulations of MAPF and evaluation of performance of these models. We extend two existing SAT-based formulations with vertex capacity constraints: MDD-SAT and SMT-CBS where the former is an approach that builds the model in an eager way while the latter relies on lazy construction of the model.
The idea behind the SAT-based approach is to construct a propositional formula @math such that it is satisfiable if and only if a solution of a given MAPF instance with sum-of-costs @math exists @cite_3 . Moreover, the approach is constructive; that is, @math exactly reflects the MAPF instance and, if satisfiable, a solution of the MAPF instance can be reconstructed from a satisfying assignment of the formula. We say that @math is a complete propositional model of MAPF.
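The simplified Python sketch below conveys the flavour of such a construction: Boolean variables state that agent a occupies vertex v at time step t, and pairwise clauses encode the standard one-agent-per-vertex capacity. It is only an illustration (a complete model would also encode start and goal positions and movement along edges), not the MDD-SAT or SMT-CBS encoding itself; all names are ours and the output is plain DIMACS-style clauses rather than a call to a particular solver.

from itertools import combinations

def build_vertex_capacity_cnf(num_agents, num_steps, vertices):
    # Returns (var_of, clauses): var_of[(a, t, v)] is a positive integer
    # variable meaning "agent a is at vertex v at time step t", and clauses
    # holds the at-most-one-agent-per-vertex capacity constraints.
    var_of, counter = {}, 0
    for a in range(num_agents):
        for t in range(num_steps + 1):
            for v in vertices:
                counter += 1
                var_of[(a, t, v)] = counter

    clauses = []
    for t in range(num_steps + 1):
        for v in vertices:
            for a1, a2 in combinations(range(num_agents), 2):
                # Agents a1 and a2 must not share vertex v at time step t.
                clauses.append([-var_of[(a1, t, v)], -var_of[(a2, t, v)]])
    return var_of, clauses

if __name__ == "__main__":
    var_of, clauses = build_vertex_capacity_cnf(num_agents=2, num_steps=3,
                                                vertices=["u", "v", "w"])
    print(len(var_of), "variables,", len(clauses), "capacity clauses")
    print("example clause:", clauses[0])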
{ "cite_N": [ "@cite_3" ], "mid": [ "2739829010" ], "abstract": [ "This paper deals with solving cooperative path finding (CPF) problems in a makespan-optimal way. A feasible solution to the CPF problem lies in the moving of mobile agents where each agent has unique initial and goal positions. The abstraction adopted in CPF assumes that agents are discrete units that move over an undirected graph by traversing its edges. We focus specifically on makespan-optimal solutions to the CPF problem where the task is to generate solutions that are as short as possible in terms of the total number of time steps required for all agents to reach their goal positions. We demonstrate that reducing CPF to propositional satisfiability (SAT) represents a viable way to obtain makespan-optimal solutions. Several ways of encoding CPFs into propositional formulae are proposed and evaluated both theoretically and experimentally. Encodings based on the log and direct representations of decision variables are compared. The evaluation indicates that SAT-based solutions to CPF outperform the makespan-optimal versions of such search-based CPF solvers such as OD+ID, CBS, and ICTS in highly constrained scenarios (i.e., environments that are densely occupied by agents and where interactions among the agents are frequent). Moreover, the experiments clearly show that CPF encodings based on the direct representation of variables can be solved faster, although they are less space-efficient than log encodings." ] }
1907.12648
2966026106
A common technique for reducing the number of decision variables derived from the time expansion is the use of multi-valued decision diagrams (MDDs) @cite_14 . The basic observation for MAPF is that an agent can reach a vertex at distance @math from its current position (where distance is measured as the length of the shortest path) no earlier than in the @math -th time step. An analogous observation can be made with respect to the distance from the goal position.
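A small Python sketch of this pruning, under our own naming: breadth-first distances from the start vertex and to the goal vertex yield, for each vertex, the window of time steps for which an occupancy variable needs to exist at all; variables outside the window can be dropped from the model.

from collections import deque

def bfs_distances(adj, source):
    # Shortest-path (hop) distances from source in an undirected graph.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def mdd_time_windows(adj, start, goal, makespan):
    # Time steps t at which the agent can possibly occupy vertex v:
    # dist(start, v) <= t and t + dist(v, goal) <= makespan.
    d_start = bfs_distances(adj, start)
    d_goal = bfs_distances(adj, goal)
    windows = {}
    for v in adj:
        if v in d_start and v in d_goal:
            lo, hi = d_start[v], makespan - d_goal[v]
            if lo <= hi:
                windows[v] = list(range(lo, hi + 1))
    return windows

if __name__ == "__main__":
    # A small path graph a - b - c - d.
    adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    print(mdd_time_windows(adj, start="a", goal="d", makespan=4))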
{ "cite_N": [ "@cite_14" ], "mid": [ "1973396716" ], "abstract": [ "We address the problem of optimal pathfinding for multiple agents. Given a start state and a goal state for each of the agents, the task is to find minimal paths for the different agents while avoiding collisions. Previous work on solving this problem optimally, used traditional single-agent search variants of the A* algorithm. We present a novel formalization for this problem which includes a search tree called the increasing cost tree (ICT) and a corresponding search algorithm, called the increasing cost tree search (ICTS) that finds optimal solutions. ICTS is a two-level search algorithm. The high-level phase of ICTS searches the increasing cost tree for a set of costs (cost per agent). The low-level phase of ICTS searches for a valid path for every agent that is constrained to have the same cost as given by the high-level phase. We analyze this new formalization, compare it to the A* search formalization and provide the pros and cons of each. Following, we show how the unique formalization of ICTS allows even further pruning of the state space by grouping small sets of agents and identifying unsolvable combinations of costs. Experimental results on various domains show the benefits and limitations of our new approach. A speedup of up to 3 orders of magnitude was obtained in some cases." ] }
1907.12743
2966860738
Although various image-based domain adaptation (DA) techniques have been proposed in recent years, domain shift in videos is still not well-explored. Most previous works only evaluate performance on small-scale datasets which are saturated. Therefore, we first propose two large-scale video DA datasets with much larger domain discrepancy: UCF-HMDB_full and Kinetics-Gameplay. Second, we investigate different DA integration methods for videos, and show that simultaneously aligning and learning temporal dynamics achieves effective alignment even without sophisticated DA methods. Finally, we propose Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets (e.g. 7.9 accuracy gain over "Source only" from 73.9 to 81.8 on "HMDB --> UCF", and 10.3 gain on "Kinetics --> Gameplay"). The code and data are released at this http URL
With the rise of deep convolutional neural networks (CNNs), recent work for video classification mainly aims to learn compact spatio-temporal representations by leveraging CNNs for spatial information and designing various architectures to exploit temporal dynamics @cite_55 . In addition to separating spatial and temporal learning, some works propose different architectures to encode spatio-temporal representations with consideration of the trade-off between performance and computational cost @cite_21 @cite_63 @cite_42 @cite_43 . Another branch of work utilizes optical flow to compensate for the lack of temporal information in raw RGB frames @cite_62 @cite_28 @cite_65 @cite_63 @cite_14 . Moreover, some works extract temporal dependencies between frames for video tasks by utilizing recurrent neural networks (RNNs) @cite_37 , attention @cite_22 @cite_17 and relation modules @cite_44 . Note that we focus on attending to the temporal dynamics to effectively align domains and we consider other modalities, e.g. optical flow, to be complementary to our method.
{ "cite_N": [ "@cite_37", "@cite_62", "@cite_14", "@cite_22", "@cite_28", "@cite_55", "@cite_21", "@cite_42", "@cite_65", "@cite_44", "@cite_43", "@cite_63", "@cite_17" ], "mid": [ "2951183276", "2156303437", "", "2962899219", "", "2016053056", "1522734439", "", "", "2770804203", "", "", "" ], "abstract": [ "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "", "Recently, substantial research effort has focused on how to apply CNNs or RNNs to better capture temporal patterns in videos, so as to improve the accuracy of video classification. In this paper, however, we show that temporal information, especially longer-term patterns, may not be necessary to achieve competitive results on common trimmed video classification datasets. We investigate the potential of a purely attention based local feature integration. 
Accounting for the characteristics of such features in video classification, we propose a local feature integration framework based on attention clusters, and introduce a shifting operation to capture more diverse signals. We carefully analyze and compare the effect of different attention mechanisms, cluster sizes, and the use of the shifting operation, and also investigate the combination of attention clusters for multimodal integration. We demonstrate the effectiveness of our framework on three real-world video classification datasets. Our model achieves competitive results across all of these. In particular, on the large-scale Kinetics dataset, our framework obtains an excellent single model accuracy of 79.4 in terms of the top-1 and 94.0 in terms of the top-5 accuracy on the validation set.", "", "Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3 to 63.9 ), but only a surprisingly modest improvement compared to single-frame models (59.3 to 60.9 ). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3 up from 43.9 ).", "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.", "", "", "Temporal relational reasoning, the ability to link meaningful transformations of objects or entities over time, is a fundamental property of intelligent species. In this paper, we introduce an effective and interpretable network module, the Temporal Relation Network (TRN), designed to learn and reason about temporal dependencies between video frames at multiple time scales. We evaluate TRN-equipped networks on activity recognition tasks using three recent video datasets - Something-Something, Jester, and Charades - which fundamentally depend on temporal relational reasoning. Our results demonstrate that the proposed TRN gives convolutional neural networks a remarkable capacity to discover temporal relations in videos. 
Through only sparsely sampled video frames, TRN-equipped networks can accurately predict human-object interactions in the Something-Something dataset and identify various human gestures on the Jester dataset with very competitive performance. TRN-equipped networks also outperform two-stream networks and 3D convolution networks in recognizing daily activities in the Charades dataset. Further analyses show that the models learn intuitive and interpretable visual common sense knowledge in videos (Code and models are available at http: relation.csail.mit.edu .).", "", "", "" ] }
1907.12743
2966860738
Although various image-based domain adaptation (DA) techniques have been proposed in recent years, domain shift in videos is still not well-explored. Most previous works only evaluate performance on small-scale datasets which are saturated. Therefore, we first propose two large-scale video DA datasets with much larger domain discrepancy: UCF-HMDB_full and Kinetics-Gameplay. Second, we investigate different DA integration methods for videos, and show that simultaneously aligning and learning temporal dynamics achieves effective alignment even without sophisticated DA methods. Finally, we propose the Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets (e.g. 7.9% accuracy gain over "Source only" from 73.9% to 81.8% on "HMDB --> UCF", and 10.3% gain on "Kinetics --> Gameplay"). The code and data are released at this http URL
Most recent DA approaches are based on deep learning architectures designed to address domain shift, motivated by the fact that deep CNN features without any DA method already outperform traditional DA methods that use hand-crafted features @cite_29 . Most DA approaches follow a two-branch (source and target) architecture and aim to find a common feature space between the source and target domains. The models are therefore optimized with a combination of and losses @cite_52 .
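To make the two-branch idea above concrete, the following is a minimal sketch of a generic DA training step, assuming a shared encoder, a supervised classification loss on labelled source data, and a simple feature-discrepancy loss between source and target features. The module names, dimensions and the 0.1 loss weight are illustrative assumptions rather than the exact losses of any cited method (the text above does not spell out which losses are combined).

```python
# Minimal sketch of a generic two-branch DA objective (illustrative assumptions:
# a shared encoder, a linear classifier, a linear-MMD style discrepancy loss).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=256, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))

    def forward(self, x):
        return self.net(x)

def feature_discrepancy(f_src, f_tgt):
    # Distance between mean source and target features: a crude stand-in for
    # the discrepancy/adversarial terms used in the literature.
    return (f_src.mean(dim=0) - f_tgt.mean(dim=0)).pow(2).sum()

encoder, classifier = Encoder(), nn.Linear(64, 10)
optimiser = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
cross_entropy = nn.CrossEntropyLoss()

# Toy batches standing in for labelled source data and unlabelled target data.
x_src, y_src = torch.randn(32, 256), torch.randint(0, 10, (32,))
x_tgt = torch.randn(32, 256)

f_src, f_tgt = encoder(x_src), encoder(x_tgt)
loss = cross_entropy(classifier(f_src), y_src) + 0.1 * feature_discrepancy(f_src, f_tgt)
optimiser.zero_grad()
loss.backward()
optimiser.step()
```

The single scalar weight on the discrepancy term is the usual knob that trades off source-domain accuracy against cross-domain feature alignment.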
{ "cite_N": [ "@cite_29", "@cite_52" ], "mid": [ "2155541015", "2756073160" ], "abstract": [ "We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be repurposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.", "The aim of this chapter is to give an overview of domain adaptation and transfer learning with a specific view to visual applications. After a general motivation, we first position domain adaptation in the more general transfer learning problem. Second, we try to address and analyze briefly the state-of-the-art methods for different types of scenarios, first describing the historical shallow methods, addressing both the homogeneous and heterogeneous domain adaptation methods. Third, we discuss the effect of the success of deep convolutional architectures which led to the new type of domain adaptation methods that integrate the adaptation within the deep architecture. Fourth, we review DA methods that go beyond image categorization, such as object detection, image segmentation, video analyses or learning visual attributes. We conclude the chapter with a section where we relate domain adaptation to other machine learning solutions." ] }
1907.12704
2964609265
Unsupervised feature learning for point clouds has been vital for large-scale point cloud understanding. Recent deep learning based methods depend on learning global geometry from self-reconstruction. However, these methods still suffer from ineffective learning of local geometry, which significantly limits the discriminability of learned features. To resolve this issue, we propose MAP-VAE to enable the learning of global and local geometry by jointly leveraging global and local self-supervision. To enable effective local self-supervision, we introduce multi-angle analysis for point clouds. In a multi-angle scenario, we first split a point cloud into a front half and a back half from each angle, and then train MAP-VAE to learn to predict a back half sequence from the corresponding front half sequence. MAP-VAE performs this half-to-half prediction using an RNN to simultaneously learn each local geometry and the spatial relationship among them. In addition, MAP-VAE also learns global geometry via self-reconstruction, where we employ a variational constraint to facilitate novel shape generation. The outperforming results in four shape analysis tasks show that MAP-VAE can learn more discriminative global or local features than the state-of-the-art methods.
Deep learning models have led to significant progress in feature learning for 3D shapes @cite_27 @cite_18 @cite_33 @cite_4 @cite_17 @cite_37 @cite_32 @cite_12 @cite_3 @cite_25 . Here, we focus on reviewing studies on point clouds. For supervised methods, supervised information, such as shape class labels or segmentation labels, is required to train deep learning models in the feature learning process. In contrast, unsupervised methods are designed to mine self-supervision information from point clouds for training, which eliminates the need for supervised information that can be tedious to obtain. We briefly review the state-of-the-art methods in these two categories as follows.
{ "cite_N": [ "@cite_18", "@cite_37", "@cite_4", "@cite_33", "@cite_32", "@cite_3", "@cite_27", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "", "2963048248", "2793251487", "2615230425", "2945774199", "2921891839", "2517836489", "2780605533", "2965157662", "2890018557" ], "abstract": [ "", "", "The discriminability of the bag-of-words representations can be increased via encoding the spatial relationship among virtual words on 3D shapes. However, this encoding task involves several issues, including arbitrary mesh resolutions , irregular vertex topology , orientation ambiguity on 3D surface , invariance to rigid , and non-rigid shape transformations . To address these issues, a novel unsupervised spatial learning framework based on deep neural network, deep spatiality (DS), is proposed. Specifically, DS employs two novel components: spatial context extractor and deep context learner . Spatial context extractor extracts the spatial relationship among virtual words in a local region into a raw spatial representation . Along a consistent circular direction , a directed circular graph is constructed to encode relative positions between pairwise virtual words in each face ring into a relative spatial matrix . By decomposing each relative spatial matrix using singular value decomposition, the raw spatial representation is formed, from which deep context learner conducts unsupervised learning of the global and local features. Deep context learner is a deep neural network with a novel model structure to adapt the proposed coupled softmax layer , which encodes not only the discriminative information among local regions but also the one among global shapes. Experimental results show that DS outperforms state-of-the-art methods.", "Highly discriminative 3D shape representations can be formed by encoding the spatial relationship among virtual words into the Bag of Words (BoW) method. To achieve this challenging task, several unresolved issues in the encoding procedure must be overcome for 3D shapes, including: 1) arbitrary mesh resolution ; 2) irregular vertex topology ; 3) orientation ambiguity on the 3D surface ; and 4) invariance to rigid and non-rigid shape transformations . In this paper, a novel spatially enhanced 3D shape representation called bag of spatial context correlations (BoSCCs) is proposed to address all these issues. Adopting a novel local perspective, BoSCC is able to describe a 3D shape by an occurrence frequency histogram of spatial context correlation patterns, which makes BoSCC become more compact and discriminative than previous global perspective-based methods. Specifically, the spatial context correlation is proposed to simultaneously encode the geometric and spatial information of a 3D local region by the correlation among spatial contexts of vertices in that region, which effectively resolves the aforementioned issues. The spatial context of each vertex is modeled by Markov chains in a multi-scale manner, which thoroughly captures the spatial relationship by the transition probabilities of intra-virtual words and the ones of inter-virtual words. The high discriminability and compactness of BoSCC are effective for classification and retrieval, especially in the scenarios of limited samples and partial shape retrieval. 
Experimental results show that BoSCC outperforms the state-of-the-art spatially enhanced BoW methods in three common applications: global shape retrieval, shape classification, and partial shape retrieval.", "", "Learning 3D global features by aggregating multiple views is important. Pooling is widely used to aggregate views in deep learning models. However, pooling disregards a lot of content information within views and the spatial relationship among the views, which limits the discriminability of learned features. To resolve this issue, 3D to Sequential Views (3D2SeqViews) is proposed to more effectively aggregate the sequential views using convolutional neural networks with a novel hierarchical attention aggregation. Specifically, the content information within each view is first encoded. Then, the encoded view content information and the sequential spatiality among the views are simultaneously aggregated by the hierarchical attention aggregation, where view-level attention and class-level attention are proposed to hierarchically weight sequential views and shape classes. View-level attention is learned to indicate how much attention is paid to each view by each shape class, which subsequently weights sequential views through a novel recursive view integration. Recursive view integration learns the semantic meaning of view sequence, which is robust to the first view position. Furthermore, class-level attention is introduced to describe how much attention is paid to each shape class, which innovatively employs the discriminative ability of the fine-tuned network. 3D2SeqViews learns more discriminative features than the state-of-the-art, which leads to the outperforming results in shape classification and retrieval under three large-scale benchmarks.", "Extracting local features from 3D shapes is an important and challenging task that usually requires carefully designed 3D shape descriptors. However, these descriptors are hand-crafted and require intensive human intervention with prior knowledge. To tackle this issue, we propose a novel deep learning model, namely circle convolutional restricted Boltzmann machine (CCRBM), for unsupervised 3D local feature learning. CCRBM is specially designed to learn from raw 3D representations. It effectively overcomes obstacles such as irregular vertex topology, orientation ambiguity on the 3D surface, and rigid or slightly non-rigid transformation invariance in the hierarchical learning of 3D data that cannot be resolved by the existing deep learning models. Specifically, by introducing the novel circle convolution, CCRBM holds a novel ring-like multi-layer structure to learn 3D local features in a structure preserving manner. Circle convolution convolves across 3D local regions via rotating a novel circular sector convolution window in a consistent circular direction. In the process of circle convolution, extra points are sampled in each 3D local region and projected onto the tangent plane of the center of the region. In this way, the projection distances in each sector window are employed to constitute a novel local raw 3D representation called projection distance distribution (PDD). In addition, to eliminate the initial location ambiguity of a sector window, the Fourier transform modulus is used to transform the PDD into the Fourier domain, which is then conveyed to CCRBM. Experiments using the learned local features are conducted on three aspects: global shape retrieval, partial shape retrieval, and shape correspondence. 
The experimental results show that the learned local features outperform other state-of-the-art 3D shape descriptors.", "Effective 3-D local features are significant elements for 3-D shape analysis. Existing hand-crafted 3-D local descriptors are effective but usually involve intensive human intervention and prior knowledge, which burdens the subsequent processing procedures. An alternative resorts to the unsupervised learning of features from raw 3-D representations via popular deep learning models. However, this alternative suffers from several significant unresolved issues, such as irregular vertex topology, arbitrary mesh resolution, orientation ambiguity on the 3-D surface, and rigid and slightly nonrigid transformation invariance. To tackle these issues, we propose an unsupervised 3-D local feature learning framework based on a novel permutation voxelization strategy to learn high-level and hierarchical 3-D local features from raw 3-D voxels. Specifically, the proposed strategy first applies a novel voxelization which discretizes each 3-D local region with irregular vertex topology and arbitrary mesh resolution into regular voxels, and then, a novel permutation is applied to permute the voxels to simultaneously eliminate the effect of rotation transformation and orientation ambiguity on the surface. Based on the proposed strategy, the permuted voxels can fully encode the geometry and structure of each local region in regular, sparse, and binary vectors. These voxel vectors are highly suitable for the learning of hierarchical common surface patterns by stacked sparse autoencoder with hierarchical abstraction and sparse constraint. Experiments are conducted on three aspects for evaluating the learned local features: 1) global shape retrieval; 2) partial shape retrieval; and 3) shape correspondence. The experimental results show that the learned local features outperform the other state-of-the-art 3-D shape descriptors.", "", "Learning 3D global features by aggregating multiple views has been introduced as a successful strategy for 3D shape analysis. In recent deep learning models with end-to-end training, pooling is a widely adopted procedure for view aggregation. However, pooling merely retains the max or mean value over all views, which disregards the content information of almost all views and also the spatial information among the views. To resolve these issues, we propose Sequential Views To Sequential Labels (SeqViews2SeqLabels) as a novel deep learning model with an encoder–decoder structure based on recurrent neural networks (RNNs) with attention. SeqViews2SeqLabels consists of two connected parts, an encoder-RNN followed by a decoder-RNN, that aim to learn the global features by aggregating sequential views and then performing shape classification from the learned global features, respectively. Specifically, the encoder-RNN learns the global features by simultaneously encoding the spatial and content information of sequential views, which captures the semantics of the view sequence. With the proposed prediction of sequential labels, the decoder-RNN performs more accurate classification using the learned global features by predicting sequential labels step by step. Learning to predict sequential labels provides more and finer discriminative information among shape classes to learn, which alleviates the overfitting problem inherent in training using a limited number of 3D shapes. 
Moreover, we introduce an attention mechanism to further improve the discriminative ability of SeqViews2SeqLabels. This mechanism increases the weight of views that are distinctive to each shape class, and it dramatically reduces the effect of selecting the first view position. Shape classification and retrieval results under three large-scale benchmarks verify that SeqViews2SeqLabels learns more discriminative global features by more effectively aggregating sequential views than state-of-the-art methods." ] }
1907.12704
2964609265
Unsupervised feature learning for point clouds has been vital for large-scale point cloud understanding. Recent deep learning based methods depend on learning global geometry from self-reconstruction. However, these methods still suffer from ineffective learning of local geometry, which significantly limits the discriminability of learned features. To resolve this issue, we propose MAP-VAE to enable the learning of global and local geometry by jointly leveraging global and local self-supervision. To enable effective local self-supervision, we introduce multi-angle analysis for point clouds. In a multi-angle scenario, we first split a point cloud into a front half and a back half from each angle, and then train MAP-VAE to learn to predict a back half sequence from the corresponding front half sequence. MAP-VAE performs this half-to-half prediction using an RNN to simultaneously learn each local geometry and the spatial relationship among them. In addition, MAP-VAE also learns global geometry via self-reconstruction, where we employ a variational constraint to facilitate novel shape generation. The outperforming results in four shape analysis tasks show that MAP-VAE can learn more discriminative global or local features than the state-of-the-art methods.
As a pioneering work, PointNet @cite_28 was proposed to directly learn features from point clouds with deep learning models. However, PointNet is limited in capturing contextual information among points. To resolve this issue, various techniques were proposed to establish a graph in a local region to capture the relationship among points in that region @cite_5 @cite_38 @cite_19 @cite_2 @cite_23 . Furthermore, multi-scale analysis @cite_20 was introduced to extract more semantic features from the local region by separating points into scales or bins, and then aggregating these features by concatenation @cite_15 or RNN @cite_36 . These methods require supervised information in the feature learning process, which differs from the unsupervised approach of MAP-VAE.
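As a rough illustration of the local-region idea described above, the sketch below groups each point with its k nearest neighbours and max-pools relative coordinates into a per-point local feature. The function name, the value of k and the max aggregation are assumptions made for illustration; they do not reproduce the exact operators of PointNet++, EdgeConv or the other cited networks.

```python
# Schematic k-NN local grouping for a point cloud (illustrative, not any cited
# architecture): each point's local region is its k nearest neighbours, and a
# simple symmetric (max) aggregation of relative coordinates gives a local feature.
import numpy as np

def knn_local_features(points, k=16):
    # points: (N, 3) array of xyz coordinates.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)  # (N, N) pairwise distances
    idx = np.argsort(dists, axis=1)[:, 1:k + 1]                               # k neighbours, self excluded
    neighbours = points[idx]                                                  # (N, k, 3)
    relative = neighbours - points[:, None, :]                                # centre each local region
    return relative.max(axis=1)                                               # (N, 3) max-pooled local feature

cloud = np.random.rand(1024, 3)
print(knn_local_features(cloud).shape)  # (1024, 3)
```

Multi-scale variants repeat this grouping for several values of k (or several radii) and combine the resulting per-scale features, e.g. by concatenation.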
{ "cite_N": [ "@cite_38", "@cite_28", "@cite_36", "@cite_19", "@cite_23", "@cite_2", "@cite_5", "@cite_15", "@cite_20" ], "mid": [ "2963719584", "2560609797", "2963123724", "2785053089", "2963158438", "2902302021", "2963053547", "2798270772", "2963121255" ], "abstract": [ "This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website.1", "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.", "", "Point clouds provide a flexible and scalable geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. Hence, the design of intelligent computational models that act directly on point clouds is critical, especially when efficiency considerations or noise preclude the possibility of expensive denoising and meshing procedures. While hand-designed features on point clouds have long been proposed in graphics and vision, however, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insight from CNN to the point cloud world. To this end, we propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv is differentiable and can be plugged into existing architectures. Compared to existing modules operating largely in extrinsic space or treating each point independently, EdgeConv has several appealing properties: It incorporates local neighborhood information; it can be stacked or recurrently applied to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. 
Beyond proposing this module, we provide extensive evaluation and analysis revealing that EdgeConv captures and exploits fine-grained geometric properties of point clouds. The proposed approach achieves state-of-the-art performance on standard benchmarks including ModelNet40 and S3DIS.", "Deep neural networks have enjoyed remarkable success for various vision tasks, however it remains challenging to apply CNNs to domains lacking a regular underlying structures such as 3D point clouds. Towards this we propose a novel convolutional architecture, termed SpiderCNN, to efficiently extract geometric features from point clouds. SpiderCNN is comprised of units called SpiderConv, which extend convolutional operations from regular grids to irregular point sets that can be embedded in ( R ^n ), by parametrizing a family of convolutional filters. We design the filter as a product of a simple step function that captures local geodesic information and a Taylor polynomial that ensures the expressiveness. SpiderCNN inherits the multi-scale hierarchical architecture from classical CNNs, which allows it to extract semantic deep features. Experiments on ModelNet40 demonstrate that SpiderCNN achieves state-of-the-art accuracy (92.4 ) on standard benchmarks, and shows competitive performance on segmentation task.", "We present a simple and general framework for feature learning from point cloud. The key to the success of CNNs is the convolution operator that is capable of leveraging spatially-local correlation in data represented densely in grids (e.g. images). However, point cloud are irregular and unordered, thus a direct convolving of kernels against the features associated with the points will result in deserting the shape information while being variant to the orders. To address these problems, we propose to learn a X-transformation from the input points, which is used for simultaneously weighting the input features associated with the points and permuting them into latent potentially canonical order. Then element-wise product and sum operations of typical convolution operator are applied on the X-transformed features. The proposed method is a generalization of typical CNNs into learning features from point cloud, thus we call it PointCNN. Experiments show that PointCNN achieves on par or better performance than state-of-the-art methods on multiple challenging benchmark datasets and tasks.", "Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point's local neighborhood that contains fine-grained structural information which turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with a more efficient exploitation of local structures. The first one focuses on local 3D geometric structures. In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor-graph computed from 3D positions. 
Experiments show that our network can efficiently capture local information and robustly achieve better performances on major datasets. Our code is available at http: www.merl.com research license#KCNet", "We tackle the problem of point cloud recognition. Unlike previous approaches where a point cloud is either converted into a volume image or represented independently in a permutation-invariant set, we develop a new representation by adopting the concept of shape context as the building block in our network design. The resulting model, called ShapeContextNet, consists of a hierarchy with modules not relying on a fixed grid while still enjoying properties similar to those in convolutional neural networks - being able to capture and propagate the object part information. In addition, we find inspiration from self-attention based models to include a simple yet effective contextual modeling mechanism - making the contextual region selection, the feature aggregation, and the feature transformation process fully automatic. ShapeContextNet is an end-to-end model that can be applied to the general point cloud classification and segmentation problems. We observe competitive results on a number of benchmark datasets.", "Few prior works study deep learning on point sets. PointNet is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds." ] }
1907.12622
2965300374
The cross-depiction problem refers to the task of recognising visual objects regardless of their depictions; whether photographed, painted, sketched, etc . In the past, some researchers considered cross-depiction to be domain adaptation (DA). More recent work considers cross-depiction as domain generalisation (DG), in which algorithms extend recognition from one set of domains (such as photographs and coloured artwork) to another (such as sketches). We show that fixing the last layer of AlexNet to random values provides a performance comparable to state of the art DA and DG algorithms, when tested over the PACS benchmark. With support from background literature, our results lead us to conclude that texture alone is insufficient to support generalisation; rather, higher-order representations such as structure and shape are necessary.
The cross-depiction problem refers to the task of recognising visual objects regardless of their depiction, whether realistic or artistic. It is an under-researched area. Some work uses constellation models, e.g. Crowley and Zisserman use a DPM to learn figurative art on Greek vases @cite_27 . Others address the problem of searching a database of photographs based on a sketch query; edge-based HOG was explored in @cite_32 . Li @cite_11 developed rich sketch representations for sketch matching using both local features and global structures of sketches. Others have investigated sketch-based retrieval of video @cite_30 @cite_29 . Wu @cite_15 provides a non-neural, fully-connected constellation model that is stable across depictions.
{ "cite_N": [ "@cite_30", "@cite_29", "@cite_32", "@cite_27", "@cite_15", "@cite_11" ], "mid": [ "2083118758", "2119061765", "2128543433", "1964939554", "114678062", "2027125558" ], "abstract": [ "We describe a new system for searching video databases using free-hand sketched queries. Our query sketches depict both object appearance and motion, and are annotated with keywords that indicate the semantic category of each object. We parse space-time volumes from video to form graph representation, which we match to sketches under a Markov Random Field (MRF) optimization. The MRF energy function is used to rank videos for relevance and contains unary, pairwise and higher-order potentials that reflect the colour, shape, motion and type of sketched objects. We evaluate performance over a dataset of 500 sports footage clips.", "We present a novel Content Based Video Retrieval (CBVR) system, driven by free-hand sketch queries depicting both objects and their movement (via dynamic cues; streak-lines and arrows). Our main contribution is a probabilistic model of video clips (based on Linear Dynamical Systems), leading to an algorithm for matching descriptions of sketched objects to video. We demonstrate our model fitting to clips under static and moving camera conditions, exhibiting linear and oscillatory motion. We evaluate retrieval on two real video data sets, and on a video data set exhibiting controlled variation in shape, color, motion and clutter.", "We present an image retrieval system for the interactive search of photo collections using free-hand sketches depicting shape. We describe Gradient Field HOG (GF-HOG); an adapted form of the HOG descriptor suitable for Sketch Based Image Retrieval (SBIR). We incorporate GF-HOG into a Bag of Visual Words (BoVW) retrieval framework, and demonstrate how this combination may be harnessed both for robust SBIR, and for localizing sketched objects within an image. We evaluate over a large Flickr sourced dataset comprising 33 shape categories, using queries from 10 non-expert sketchers. We compare GF-HOG against state-of-the-art descriptors with common distance measures and language models for image retrieval, and explore how affine deformation of the sketch impacts search performance. GF-HOG is shown to consistently outperform retrieval versus SIFT, multi-resolution HOG, Self Similarity, Shape Context and Structure Tensor. Further, we incorporate semantic keywords into our GF-HOG system to enable the use of annotated sketches for image search. A novel graph-based measure of semantic similarity is proposed and two applications explored: semantic sketch based image retrieval and a semantic photo montage.", "", "Visual object classification and detection are major problems in contemporary computer vision. State-of-art algorithms allow thousands of visual objects to be learned and recognized, under a wide range of variations including lighting changes, occlusion, point of view and different object instances. Only a small fraction of the literature addresses the problem of variation in depictive styles (photographs, drawings, paintings etc.). This is a challenging gap but the ability to process images of all depictive styles and not just photographs has potential value across many applications. In this paper we model visual classes using a graph with multiple labels on each node; weights on arcs and nodes indicate relative importance (salience) to the object description. 
Visual class models can be learned from examples from a database that contains photographs, drawings, paintings etc. Experiments show that our representation is able to improve upon Deformable Part Models for detection and Bag of Words models for classification.", "Sketch recognition aims to automatically classify human hand sketches of objects into known categories. This has become increasingly a desirable capability due to recent advances in human computer interaction on portable devices. The problem is nontrivial because of the sparse and abstract nature of hand drawings as compared to photographic images of objects, compounded by a highly variable degree of details in human sketches. To this end, we present a method for the representation and matching of sketches by exploiting not only local features but also global structures of sketches, through a star graph based ensemble matching strategy. Different local feature representations were evaluated using the star graph model to demonstrate the effectiveness of the ensemble matching of structured features. We further show that by encapsulating holistic structure matching and learned bag-of-features models into a single framework, notable recognition performance improvement over the state-of-the-art can be observed. Extensive comparative experiments were carried out using the currently largest sketch dataset released by [15], with over 20,000 sketches of 250 object categories generated by AMT (Amazon Mechanical Turk) crowd-sourcing." ] }
1907.12622
2965300374
The cross-depiction problem refers to the task of recognising visual objects regardless of their depictions; whether photographed, painted, sketched, etc . In the past, some researchers considered cross-depiction to be domain adaptation (DA). More recent work considers cross-depiction as domain generalisation (DG), in which algorithms extend recognition from one set of domains (such as photographs and coloured artwork) to another (such as sketches). We show that fixing the last layer of AlexNet to random values provides a performance comparable to state of the art DA and DG algorithms, when tested over the PACS benchmark. With support from background literature, our results lead us to conclude that texture alone is insufficient to support generalisation; rather, higher-order representations such as structure and shape are necessary.
Deep learning has recently emerged as a truly significant development in Computer Vision. It has been successful on conventional databases, and over a wide range of tasks, with recognition rates in excess of @math . Other than this paper, we know of only two studies aimed at assessing the performance of well-established methods on the cross-depiction problem. Crowley and Zisserman @cite_18 use a subset of the 'Your Paintings' dataset @cite_2 , the subset decided by those that have been tagged with VOC categories @cite_4 . Using 11 classes, and objects that can only scale and translate, they report an overall drop in per-class Prec@k (at @math ) from 0.98 when trained and tested on paintings alone, to 0.66 when trained on photographs and tested on paintings. Hu and Collomosse @cite_32 use 33 shape categories in Flickr to compare a range of descriptors: SIFT, multi-resolution HOG, Self Similarity, Shape Context, Structure Tensor, and (their contribution) Gradient Field HOG. They test a collection of 8 distance measures, reporting low mean average precision rates in all cases. Our focus is on domain shift via meta-learning, and we therefore concentrate our review on that area.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_32", "@cite_2" ], "mid": [ "2011615896", "2031489346", "2128543433", "" ], "abstract": [ "The objective of this work is to recognize object categories (such as animals and vehicles) in paintings, whilst learning these categories from natural images. This is a challenging problem given the substantial differences between paintings and natural images, and variations in depiction of objects in paintings. We first demonstrate that classifiers trained on natural images of an object category have quite some success in retrieving paintings containing that category. We then draw upon recent work in mid-level discriminative patches to develop a novel method for reranking paintings based on their spatial consistency with natural images of an object category. This method combines both class based and instance based retrieval in a single framework. We quantitatively evaluate the method over a number of classes from the PASCAL VOC dataset, and demonstrate significant improvements in rankings of the retrieved paintings over a variety of object categories.", "The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.", "We present an image retrieval system for the interactive search of photo collections using free-hand sketches depicting shape. We describe Gradient Field HOG (GF-HOG); an adapted form of the HOG descriptor suitable for Sketch Based Image Retrieval (SBIR). We incorporate GF-HOG into a Bag of Visual Words (BoVW) retrieval framework, and demonstrate how this combination may be harnessed both for robust SBIR, and for localizing sketched objects within an image. We evaluate over a large Flickr sourced dataset comprising 33 shape categories, using queries from 10 non-expert sketchers. We compare GF-HOG against state-of-the-art descriptors with common distance measures and language models for image retrieval, and explore how affine deformation of the sketch impacts search performance. GF-HOG is shown to consistently outperform retrieval versus SIFT, multi-resolution HOG, Self Similarity, Shape Context and Structure Tensor. Further, we incorporate semantic keywords into our GF-HOG system to enable the use of annotated sketches for image search. A novel graph-based measure of semantic similarity is proposed and two applications explored: semantic sketch based image retrieval and a semantic photo montage.", "" ] }
1907.12622
2965300374
The cross-depiction problem refers to the task of recognising visual objects regardless of their depictions; whether photographed, painted, sketched, etc . In the past, some researchers considered cross-depiction to be domain adaptation (DA). More recent work considers cross-depiction as domain generalisation (DG), in which algorithms extend recognition from one set of domains (such as photographs and coloured artwork) to another (such as sketches). We show that fixing the last layer of AlexNet to random values provides a performance comparable to state of the art DA and DG algorithms, when tested over the PACS benchmark. With support from background literature, our results lead us to conclude that texture alone is insufficient to support generalisation; rather, higher-order representations such as structure and shape are necessary.
Datasets exhibit bias, which can be problematic. In photographic image recognition, bias towards particular camera settings and other attributes can prevent models from generalising well @cite_12 . This motivated the collection of the multi-domain VLCS dataset: an aggregation of photos from Caltech, LabelMe, Pascal VOC 2007 and SUN09 @cite_12 . Until recently, domain adaptation and generalisation in image recognition focused on transfer across photo-only benchmarks. Now, more datasets are available that cover larger domain shifts across more varied depictive styles @cite_15 @cite_31 @cite_13 and better reflect the cross-depiction problem. We make use of the PACS dataset provided by Li et al. @cite_31 . As a domain generalisation benchmark in which one domain is an unseen target domain, PACS is a far more challenging task than photographic benchmarks. Li et al. @cite_31 measured an average KL-divergence @cite_14 of 0.85 between training and test domains across PACS, compared to 0.07 across VLCS.
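For reference, the divergence quoted above is the standard Kullback–Leibler divergence between two distributions P and Q; the formula below is the textbook definition (the particular feature distributions compared for PACS and VLCS are those of the cited work and are not reproduced here).

```latex
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sum_{x} P(x) \,\log \frac{P(x)}{Q(x)}
```

A larger value indicates a greater mismatch between training and test domains, which is why 0.85 (PACS) versus 0.07 (VLCS) signals a much harder generalisation task.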
{ "cite_N": [ "@cite_31", "@cite_14", "@cite_15", "@cite_13", "@cite_12" ], "mid": [ "2951454049", "1965555277", "114678062", "2962837952", "2031342017" ], "abstract": [ "The problem of domain generalization is to learn from multiple training domains, and extract a domain-agnostic model that can then be applied to an unseen domain. Domain generalization (DG) has a clear motivation in contexts where there are target domains with distinct characteristics, yet sparse data for training. For example recognition in sketch images, which are distinctly more abstract and rarer than photos. Nevertheless, DG methods have primarily been evaluated on photo-only benchmarks focusing on alleviating the dataset bias where both problems of domain distinctiveness and data sparsity can be minimal. We argue that these benchmarks are overly straightforward, and show that simple deep learning baselines perform surprisingly well on them. In this paper, we make two main contributions: Firstly, we build upon the favorable domain shift-robust properties of deep learning methods, and develop a low-rank parameterized CNN model for end-to-end DG learning. Secondly, we develop a DG benchmark dataset covering photo, sketch, cartoon and painting domains. This is both more practically relevant, and harder (bigger domain shift) than existing benchmarks. The results show that our method outperforms existing DG alternatives, and our dataset provides a more significant DG challenge to drive future research.", "", "Visual object classification and detection are major problems in contemporary computer vision. State-of-art algorithms allow thousands of visual objects to be learned and recognized, under a wide range of variations including lighting changes, occlusion, point of view and different object instances. Only a small fraction of the literature addresses the problem of variation in depictive styles (photographs, drawings, paintings etc.). This is a challenging gap but the ability to process images of all depictive styles and not just photographs has potential value across many applications. In this paper we model visual classes using a graph with multiple labels on each node; weights on arcs and nodes indicate relative importance (salience) to the object description. Visual class models can be learned from examples from a database that contains photographs, drawings, paintings etc. Experiments show that our representation is able to improve upon Deformable Part Models for detection and Bag of Words models for classification.", "Computer vision systems are designed to work well within the context of everyday photography. However, artists often render the world around them in ways that do not resemble photographs. Artwork produced by people is not constrained to mimic the physical world, making it more challenging for machines to recognize.,,This work is a step toward teaching machines how to categorize images in ways that are valuable to humans. First, we collect a large-scale dataset of contemporary artwork from Behance, a website containing millions of portfolios from professional and commercial artists. We annotate Behance imagery with rich attribute labels for content, emotions, and artistic media. Furthermore, we carry out baseline experiments to show the value of this dataset for artistic style prediction, for improving the generality of existing object classifiers, and for the study of visual domain adaptation. 
We believe our Behance Artistic Media dataset will be a good starting point for researchers wishing to study artistic imagery and relevant problems. This dataset can be found at https: bam-dataset.org", "Datasets are an integral part of contemporary object recognition research. They have been the chief reason for the considerable progress in the field, not just as source of large amounts of training data, but also as means of measuring and comparing performance of competing algorithms. At the same time, datasets have often been blamed for narrowing the focus of object recognition research, reducing it to a single benchmark performance number. Indeed, some datasets, that started out as data capture efforts aimed at representing the visual world, have become closed worlds unto themselves (e.g. the Corel world, the Caltech-101 world, the PASCAL VOC world). With the focus on beating the latest benchmark numbers on the latest dataset, have we perhaps lost sight of the original purpose? The goal of this paper is to take stock of the current state of recognition datasets. We present a comparison study using a set of popular datasets, evaluated based on a number of criteria including: relative data bias, cross-dataset generalization, effects of closed-world assumption, and sample value. The experimental results, some rather surprising, suggest directions that can improve dataset collection as well as algorithm evaluation protocols. But more broadly, the hope is to stimulate discussion in the community regarding this very important, but largely neglected issue." ] }
1907.12622
2965300374
The cross-depiction problem refers to the task of recognising visual objects regardless of their depictions; whether photographed, painted, sketched, etc . In the past, some researchers considered cross-depiction to be domain adaptation (DA). More recent work considers cross-depiction as domain generalisation (DG), in which algorithms extend recognition from one set of domains (such as photographs and coloured artwork) to another (such as sketches). We show that fixing the last layer of AlexNet to random values provides a performance comparable to state of the art DA and DG algorithms, when tested over the PACS benchmark. With support from background literature, our results lead us to conclude that texture alone is insufficient to support generalisation; rather, higher-order representations such as structure and shape are necessary.
Domain Adaptation (DA) attempts to compensate for bias by adapting a model constructed on one domain to a target domain, using examples from that new domain, e.g. @cite_8 . DA has been applied to the cross-depiction problem with both non-neural @cite_17 and neural algorithms, such as the Domain Separation Network (DSN) @cite_22 .
{ "cite_N": [ "@cite_22", "@cite_17", "@cite_8" ], "mid": [ "2511131004", "2270586871", "1722318740" ], "abstract": [ "The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We hypothesize that explicitly modeling what is unique to each domain can improve a model's ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained to not only perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process.", "The cross-depiction problem is that of recognising visual objects regardless of whether they are photographed, painted, drawn, etc. It introduces great challenge as the variance across photo and art domains is much larger than either alone. We extensively evaluate classification, domain adaptation and detection benchmarks for leading techniques, demonstrating that none perform consistently well given the cross-depiction problem. Finally we refine the DPM model, based on query expansion, enabling it to bridge the gap across depiction boundaries to some extent.", "Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions." ] }
1907.12622
2965300374
The cross-depiction problem refers to the task of recognising visual objects regardless of their depictions; whether photographed, painted, sketched, etc . In the past, some researchers considered cross-depiction to be domain adaptation (DA). More recent work considers cross-depiction as domain generalisation (DG), in which algorithms extend recognition from one set of domains (such as photographs and coloured artwork) to another (such as sketches). We show that fixing the last layer of AlexNet to random values provides a performance comparable to state of the art DA and DG algorithms, when tested over the PACS benchmark. With support from background literature, our results lead us to conclude that texture alone is insufficient to support generalisation; rather, higher-order representations such as structure and shape are necessary.
Recently, Domain Generalisation (DG) approaches have gained attention. These differ from DA in that DG algorithms have no access to the target domain. General approaches include learning domain-invariant representations, or deriving domain-agnostic classifiers by assuming that each domain's classifier consists of domain-specific and domain-agnostic components and then extracting the latter @cite_26 . Examples of relevance here are Domain Multi-Task Auto Encoders (D-MTAE) @cite_19 and the "Deeper-Broader-Artier" network (DBA-DG) @cite_31 . Most recently, MetaReg @cite_0 and MLDG @cite_7 exhibit state-of-the-art performance on the PACS dataset.
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_0", "@cite_19", "@cite_31" ], "mid": [ "1852255964", "2963043696", "", "1920962657", "2951454049" ], "abstract": [ "The presence of bias in existing object recognition datasets is now well-known in the computer vision community. While it remains in question whether creating an unbiased dataset is possible given limited resources, in this work we propose a discriminative framework that directly exploits dataset bias during training. In particular, our model learns two sets of weights: (1) bias vectors associated with each individual dataset, and (2) visual world weights that are common to all datasets, which are learned by undoing the associated bias from each dataset. The visual world weights are expected to be our best possible approximation to the object model trained on an unbiased dataset, and thus tend to have good generalization ability. We demonstrate the effectiveness of our model by applying the learned weights to a novel, unseen dataset, and report superior results for both classification and detection tasks compared to a classical SVM that does not account for the presence of bias. Overall, we find that it is beneficial to explicitly account for bias when combining multiple datasets.", "Domain shift refers to the well known problem that a model trained in one source domain performs poorly when applied to a target domain with different statistics. Domain Generalization (DG) techniques attempt to alleviate this issue by producing models which by design generalize well to novel testing domains. We propose a novel meta-learning method for domain generalization. Rather than designing a specific model that is robust to domain shift as in most previous DG work, we propose a model agnostic training procedure for DG. Our algorithm simulates train test domain shift during training by synthesizing virtual testing domains within each mini-batch. The meta-optimization objective requires that steps to improve training domain performance should also improve testing domain performance. This meta-learning procedure trains models with good generalization ability to novel domains. We evaluate our method and achieve state of the art results on a recent cross-domain image classification benchmark, as well demonstrating its potential on two classic reinforcement learning tasks.", "", "The problem of domain generalization is to take knowledge acquired from a number of related domains, where training data is available, and to then successfully apply it to previously unseen domains. We propose a new feature learning algorithm, Multi-Task Autoencoder (MTAE), that provides good generalization performance for cross-domain object recognition. The algorithm extends the standard denoising autoencoder framework by substituting artificially induced corruption with naturally occurring inter-domain variability in the appearance of objects. Instead of reconstructing images from noisy versions, MTAE learns to transform the original image into analogs in multiple related domains. It thereby learns features that are robust to variations across domains. The learnt features are then used as inputs to a classifier. We evaluated the performance of the algorithm on benchmark image recognition datasets, where the task is to learn features from multiple datasets and to then predict the image label from unseen datasets. 
We found that (denoising) MTAE outperforms alternative autoencoder-based models as well as the current state-of-the-art algorithms for domain generalization.", "The problem of domain generalization is to learn from multiple training domains, and extract a domain-agnostic model that can then be applied to an unseen domain. Domain generalization (DG) has a clear motivation in contexts where there are target domains with distinct characteristics, yet sparse data for training. For example recognition in sketch images, which are distinctly more abstract and rarer than photos. Nevertheless, DG methods have primarily been evaluated on photo-only benchmarks focusing on alleviating the dataset bias where both problems of domain distinctiveness and data sparsity can be minimal. We argue that these benchmarks are overly straightforward, and show that simple deep learning baselines perform surprisingly well on them. In this paper, we make two main contributions: Firstly, we build upon the favorable domain shift-robust properties of deep learning methods, and develop a low-rank parameterized CNN model for end-to-end DG learning. Secondly, we develop a DG benchmark dataset covering photo, sketch, cartoon and painting domains. This is both more practically relevant, and harder (bigger domain shift) than existing benchmarks. The results show that our method outperforms existing DG alternatives, and our dataset provides a more significant DG challenge to drive future research." ] }
1907.12707
2966474827
Multiple-antenna backscatter is emerging as a promising approach to offer high communication performance for the data-intensive applications of ambient backscatter communications (AmBC). Although much has been understood about multiple-antenna backscatter in conventional backscatter communications (CoBC), existing analytical models cannot be directly applied to AmBC due to the structural differences in RF source and tag circuit designs. This paper takes the first step to fill this gap by exploring the use of spatial modulation (SM) in AmBC when tags are equipped with multiple antennas. Specifically, we present a practical multiple-antenna backscatter design for AmBC that exempts tags from the inter-antenna synchronization and mutual coupling problems while ensuring high spectral efficiency and ultra-low power consumption. We obtain an optimal detector for the joint detection of both the backscatter signal and the source signal based on the maximum likelihood principle. We also design a two-step algorithm to derive bounds on the bit error rate (BER) of both signals. Simulation results validate the analysis and show that the proposed scheme can significantly improve the throughput compared with traditional systems.
Of the existing studies in AmBC, the majority have focused solely on multiple antennas at the reader, while ignoring the tags. To address the direct link interference from the ambient RF source, multiple-antenna readers are proposed in @cite_9 @cite_1 for the joint detection of the legacy and backscatter systems. Previous work on multiple-antenna backscatter applied to AmBC systems is sparse. Though the achievable rate has been studied in AmBC with multiple-antenna tags @cite_1 , this work does not consider the effect of the time synchronization and mutual coupling problems between antennas. A blind detector for the reader to detect the backscatter signal is proposed in @cite_4 ; this work focuses solely on the detection mechanism at the reader and does not improve the data rate. In departure from these studies, our work considers both the feasibility of real-world implementation and high spectral efficiency in AmBC with multiple-antenna tags.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_4" ], "mid": [ "2962732150", "2624922208", "2916694736" ], "abstract": [ "Ambient backscatter communication (AmBC) enables a passive backscatter device to transmit information to a reader using ambient RF signals, and has emerged as a promising solution to green Internet-of-Things (IoT). Conventional AmBC receivers are interested in recovering the information from the ambient backscatter device (A-BD) only. In this paper, we propose a cooperative AmBC (CABC) system in which the reader recovers information not only from the A-BD, but also from the RF source. We first establish the system model for the CABC system from spread spectrum and spectrum sharing perspectives. Then, for flat fading channels, we derive the optimal maximum-likelihood (ML) detector, suboptimal linear detectors as well as successive interference-cancellation (SIC) based detectors. For frequency-selective fading channels, the system model for the CABC system over ambient orthogonal frequency division multiplexing carriers is proposed, upon which a low-complexity optimal ML detector is derived. For both kinds of channels, the bit-error-rate expressions for the proposed detectors are derived in closed forms. Finally, extensive numerical results have shown that, when the A-BD signal and the RF-source signal have equal symbol period, the proposed SIC-based detectors can achieve near-ML detection performance for typical application scenarios, and when the A-BD symbol period is longer than the RF-source symbol period, the existence of backscattered signal in the CABC system can enhance the ML detection performance of the RF-source signal, thanks to the beneficial effect of the backscatter link when the A-BD transmits at a lower rate than the RF source.", "In ambient rescatter communications, devices convey information by modulating and rescattering the radio frequency signals impinging on their antennas. In this correspondence, we consider a system consisting of a legacy modulated continuous carrier multiple-input-multiple-output link and a multiantenna modulated rescatter (MRS) node, where the MRS node modulates and rescatters the signal generated by the legacy transmitter. The receiver seeks to decode both the original message and the information added by the MRS. We show that the achievable sum rate of this system exceeds that which the legacy system could achieve alone. We further consider the impact of channel estimation errors under the least squares channel estimation and study the achievable rate of the legacy and MRS systems, where a linear minimum mean square error receiver with successive interference cancellation is utilized for joint decoding.", "Recently, ambient backscatter that utilizes surrounding radio frequency (RF) signals for both power and communications, has attracted vast interest since it can free sensors and tags from batteries and has extensive applications in Internet of Things (IoT), Existing studies about ambient backscatter often assume single antenna for each tag. Actually, as we show in this paper, equipping tags with multiple antennas can enlarge communication distance, enhance detection performance, and thus be practically useful. One key challenge of using multiple-antenna tags is the signal detection at the reader because the tag may have limited power and can transmit few training symbols. 
Therefore, in this paper, we design a blind detector based on F-test for the reader to recover tag signals without any knowledge of RF signals power, noise variance and all channel state information (CSI). Furthermore, we derive the lower and upper bounds of detection probabilities, and its exact expression in a special case. The optimal antenna selection scheme is also proposed to maximize the detection probability. Finally, simulation results are provided to corroborate our theoretical studies." ] }
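The related-work paragraph above discusses joint detection of the source and backscatter signals at the reader. As a rough illustration of the idea only (not the detector derived in the cited papers), the following Python sketch performs joint maximum-likelihood detection over a small candidate set; the channel coefficients, symbol alphabets and noise level are purely illustrative assumptions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Illustrative flat-fading model: y = h_d * s + h_b * s * b + n,
# where s is the ambient source symbol, b the tag's backscatter symbol,
# h_d the direct channel and h_b the combined backscatter channel.
h_d, h_b = 0.9 + 0.2j, 0.3 - 0.1j                                # assumed channels
source_alphabet = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)  # QPSK (assumed)
tag_alphabet = np.array([0.0, 1.0])                               # on-off backscatter

def joint_ml_detect(y):
    """Return the (source symbol, tag symbol) pair minimising |y - model|^2."""
    best, best_metric = None, np.inf
    for s, b in product(source_alphabet, tag_alphabet):
        metric = abs(y - (h_d * s + h_b * s * b)) ** 2
        if metric < best_metric:
            best, best_metric = (s, b), metric
    return best

# Simulate one noisy observation and detect both signals jointly.
s_true, b_true = source_alphabet[2], tag_alphabet[1]
noise = rng.normal(scale=0.05) + 1j * rng.normal(scale=0.05)
y = h_d * s_true + h_b * s_true * b_true + noise
print(joint_ml_detect(y), "true:", (s_true, b_true))
```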
1907.12797
2966008881
With the aim to propose a non-parametric hypothesis test, this paper carries out a study on the Matching Error (ME), a comparison index of two partitions obtained from the same data set, using for example two clustering methods. This index is related to the misclassification error in supervised learning. Some properties of the ME and, especially, its distribution function for the case of two independent partitions are analyzed. Extensive simulations show the efficiency of the ME, and we propose a hypothesis test based on it.
@cite_14 introduced an index inspired by the misclassification error used in supervised learning. Consider that one of the two compared clusterings ( @math for instance) corresponds to the true labels of each observation and the other clustering ( @math ) to the predicted ones. The supervised classification error may be computed for all the possible permutations of the predicted labels (in @math ), and the maximum error over all the permutations may be taken. Thus the classification error for comparing both partitions may be written as
{ "cite_N": [ "@cite_14" ], "mid": [ "2127042504" ], "abstract": [ "We compare the three basic algorithms for model-based clustering on high-dimensional discrete-variable datasets. All three algorithms use the same underlying model: a naive-Bayes model with a hidden root node, also known as a multinomial-mixture model. In the first part of the paper, we perform an experimental comparison between three batch algorithms that learn the parameters of this model: the Expectation–Maximization (EM) algorithm, a “winner take all” version of the EM algorithm reminiscent of the K-means algorithm, and model-based agglomerative clustering. We find that the EM algorithm significantly outperforms the other methods, and proceed to investigate the effect of various initialization methods on the final solution produced by the EM algorithm. The initializations that we consider are (1) parameters sampled from an uninformative prior, (2) random perturbations of the marginal distribution of the data, and (3) the output of agglomerative clustering. Although the methods are substantially different, they lead to learned models that are similar in quality." ] }
1907.12797
2966008881
With the aim to propose a non-parametric hypothesis test, this paper carries out a study on the Matching Error (ME), a comparison index of two partitions obtained from the same data set, using for example two clustering methods. This index is related to the misclassification error in supervised learning. Some properties of the ME and, especially, its distribution function for the case of two independent partitions are analyzed. Extensive simulations show the efficiency of the ME, and we propose a hypothesis test based on it.
where @math is an injective mapping of @math into @math ( @cite_4 ). The @math index may be complex to compute when the number of clusters is large. A polynomial time algorithm has been proposed by @cite_10 to compute it efficiently. We will study the distributional properties of this index in the next section.
{ "cite_N": [ "@cite_10", "@cite_4" ], "mid": [ "2061025330", "2141729166" ], "abstract": [ "We herein introduce a new method of interpretable clustering that uses unsupervised binary trees. It is a three-stage procedure, the first stage of which entails a series of recursive binary splits to reduce the heterogeneity of the data within the new subsamples. During the second stage (pruning), consideration is given to whether adjacent nodes can be aggregated. Finally, during the third stage (joining), similar clusters are joined together, even if they do not share the same parent originally. Consistency results are obtained, and the procedure is used on simulated and real data sets.", "This paper views clusterings as elements of a lattice. Distances between clusterings are analyzed in their relationship to the lattice. From this vantage point, we first give an axiomatic characterization of some criteria for comparing clusterings, including the variation of information and the unadjusted Rand index. Then we study other distances between partitions w.r.t these axioms and prove an impossibility result: there is no \"sensible\" criterion for comparing clusterings that is simultaneously (1) aligned with the lattice of partitions, (2) convexely additive, and (3) bounded." ] }
1907.12797
2966008881
With the aim to propose a non-parametric hypothesis test, this paper carries out a study on the Matching Error (ME), a comparison index of two partitions obtained from the same data set, using for example two clustering methods. This index is related to the misclassification error in supervised learning. Some properties of the ME and, especially, its distribution function for the case of two independent partitions are analyzed. Extensive simulations show the efficiency of the ME, and we propose a hypothesis test based on it.
The entropy of a partition @math is defined by @math where @math is the estimate of the probability that an element is in cluster @math . The mutual information can be used to measure the independence of two partitions @math and @math . It is given by: @math , where @math is the estimate of the probability that an element belongs to cluster @math of @math and @math of @math . Mutual information is a metric over the space of all clusterings, but its value is not bounded, which makes it difficult to interpret. As @math , other bounded indices have been proposed, such as the normalized mutual information ( @cite_2 , @cite_5 ), where @math is divided either by the arithmetic or the geometric mean of the clustering entropies. Meila ( @cite_7 ) has also proposed an index based on mutual information, the variation of information.
{ "cite_N": [ "@cite_5", "@cite_7", "@cite_2" ], "mid": [ "2103704311", "2129070834", "2097645701" ], "abstract": [ "We address the problem of robust clustering by combining data partitions (forming a clustering ensemble) produced by multiple clusterings. We formulate robust clustering under an information-theoretical framework; mutual information is the underlying concept used in the definition of quantitative measures of agreement or consistency between data partitions. Robustness is assessed by variance of the cluster membership, based on bootstrapping. We propose and analyze a voting mechanism on pairwise associations of patterns for combining data partitions. We show that the proposed technique attempts to optimize the mutual information based criteria, although the optimality is not ensured in all situations. This evidence accumulation method is demonstrated by combining the well-known K-means algorithm to produce clustering ensembles. Experimental results show the ability of the technique to identify clusters with arbitrary shapes and sizes.", "This paper proposes an information theoretic criterion for comparing two partitions, or clusterings, of the same data set. The criterion, called variation of information (VI), measures the amount of information lost and gained in changing from clustering ( C ) to clustering ( C ' ). The criterion makes no assumptions about how the clusterings were generated and applies to both soft and hard clusterings. The basic properties of VI are presented and discussed from the point of view of comparing clusterings. In particular, the VI is positive, symmetric and obeys the triangle inequality. Thus, surprisingly enough, it is a true metric on the space of clusterings.", "This paper introduces the problem of combining multiple partitionings of a set of objects into a single consolidated clustering without accessing the features or algorithms that determined these partitionings. We first identify several application scenarios for the resultant 'knowledge reuse' framework that we call cluster ensembles. The cluster ensemble problem is then formalized as a combinatorial optimization problem in terms of shared mutual information. In addition to a direct maximization approach, we propose three effective and efficient techniques for obtaining high-quality combiners (consensus functions). The first combiner induces a similarity measure from the partitionings and then reclusters the objects. The second combiner is based on hypergraph partitioning. The third one collapses groups of clusters into meta-clusters which then compete for each object to determine the combined clustering. Due to the low computational costs of our techniques, it is quite feasible to use a supra-consensus function that evaluates all three approaches against the objective function and picks the best solution for a given situation. We evaluate the effectiveness of cluster ensembles in three qualitatively different application scenarios: (i) where the original clusters were formed based on non-identical sets of features, (ii) where the original clustering algorithms worked on non-identical sets of objects, and (iii) where a common data-set is used and the main purpose of combining multiple clusterings is to improve the quality and robustness of the solution. Promising results are obtained in all three situations for synthetic as well as real data-sets." ] }
1907.12797
2966008881
With the aim to propose a non-parametric hypothesis test, this paper carries out a study on the Matching Error (ME), a comparison index of two partitions obtained from the same data set, using for example two clustering methods. This index is related to the misclassification error in supervised learning. Some properties of the ME and, especially, its distribution function for the case of two independent partitions are analyzed. Extensive simulations show the efficiency of the ME, and we propose a hypothesis test based on it.
In @cite_19 and @cite_13 several indices are compared on artificially simulated partitions with various configurations; the partitions are either balanced or unbalanced, dependent or independent, with varying numbers of clusters. They show that the indices based on set overlaps perform better than those based on counting pairs or on mutual information. Moreover, most indices are not reliable when the clusters in the partitions are imbalanced.
{ "cite_N": [ "@cite_19", "@cite_13" ], "mid": [ "2103894895", "2323180518" ], "abstract": [ "For highly imbalanced data sets, almost all the instances are labeled as one class, whereas far fewer examples are labeled as the other classes. In this paper, we present an empirical comparison of seven different clustering evaluation indices when used to assess partitions generated from highly imbalanced data sets. Some of the metrics are based on matching of sets (F-measure), information theory (normalized mutual information and adjusted mutual information), and pair of objects counting (Rand and adjusted Rand indices). We also investigate the BCubed metric, which takes into account the concepts of recall, precision, as well as counting pairs. Furthermore, in order to avoid the class size imbalance effect, we propose a modification to the Rand index, referred to as the normalized class size Rand (NCR) index. In terms of results, apart from NCR, our experiments indicate that all the other analyzed indices are not able to deal properly with the problem of class size imbalance.", "Comparing two clustering results of a data set is a challenging task in cluster analysis. Many external validity measures have been proposed in the literature. A good measure should be invariant to the changes of data size, cluster size, and number of clusters. We give an overview of existing set matching indexes and analyze their properties. Set matching measures are based on matching clusters from two clusterings. We analyze the measures in three parts: 1) cluster similarity, 2) matching, and 3) overall measurement. Correction for chance is also investigated and we prove that normalized mutual information and variation of information are intrinsically corrected. We propose a new scheme of experiments based on synthetic data for evaluation of an external validity index. Accordingly, popular external indexes are evaluated and compared when applied to clusterings of different data size, cluster size, and number of clusters. The experiments show that set matching measures are clearly better than the other tested. Based on the analytical comparisons, we introduce a new index called Pair Sets Index (PSI)." ] }
1907.12797
2966008881
With the aim to propose a non-parametric hypothesis test, this paper carries out a study on the Matching Error (ME), a comparison index of two partitions obtained from the same data set, using for example two clustering methods. This index is related to the misclassification error in supervised learning. Some properties of the ME and, especially, its distribution function for the case of two independent partitions are analyzed. Extensive simulations show the efficiency of the ME, and we propose a hypothesis test based on it.
@cite_6 study the behavior of the Rand, Adjusted Rand, Jaccard and Fowlkes-Mallows indices. They compare the partitions produced by hierarchical algorithms with the true partitions, varying the number of groups on a sample of 50 observations, and conclude that the adjusted Rand index seems to be more appropriate for clustering validation in this context. Similar simulations and results are given in @cite_3 and @cite_8 using @math -means.
{ "cite_N": [ "@cite_3", "@cite_6", "@cite_8" ], "mid": [ "2117059686", "2090634555", "2008981950" ], "abstract": [ "A cluster operator takes a set of data points and partitions the points into clusters (subsets). As with any scientific model, the scientific content of a cluster operator lies in its ability to predict results. This ability is measured by its error rate relative to cluster formation. To estimate the error of a cluster operator, a sample of point sets is generated, the algorithm is applied to each point set and the clusters evaluated relative to the known partition according to the distributions, and then the errors are averaged over the point sets composing the sample. Many validity measures have been proposed for evaluating clustering results based on a single realization of the random-point-set process. In this paper we consider a number of proposed validity measures and we examine how well they correlate with error rates across a number of clustering algorithms and random-point-set models. Validity measures fall broadly into three classes: internal validation is based on calculating properties of the resulting clusters; relative validation is based on comparisons of partitions generated by the same algorithm with different parameters or different subsets of the data; and external validation compares the partition generated by the clustering algorithm and a given partition of the data. To quantify the degree of similarity between the validation indices and the clustering errors, we use Kendall's rank correlation between their values. Our results indicate that, overall, the performance of validity indices is highly variable. For complex models or when a clustering algorithm yields complex clusters, both the internal and relative indices fail to predict the error of the algorithm. Some external indices appear to perform well, whereas others do not. We conclude that one should not put much faith in a validity score unless there is evidence, either in terms of sufficient data for model estimation or prior model knowledge, that a validity measure is well-correlated to the error rate of the clustering algorithm.", "Five external criteria were used to evaluate the extent of recovery of the true structure in a hierarchical clustering solution. This was accomplished by comparing the partitions produced by the clustering algorithm with the partition that indicates the true cluster structure known to exist in the data. The five criteria examined were the Rand, the Morey and Agresti adjusted Rand, the Hubert and Arabie adjusted Rand, the Jaccard, and the Fowlkes and Mallows measures. The results of the study indicated that the Hubert and Arabie adjusted Rank index was best suited to the task of comparison across hierarchy levels. Deficiencies with the other measures are noted.", "Cluster validation is an important part of any cluster analysis. External measures such as entropy, purity and mutual information are often used to evaluate K-means clustering. However, whether these measures are indeed suitable for K-means clustering remains unknown. Along this line, in this paper, we show that a data distribution view is of great use to selecting the right measures for K-means clustering. Specifically, we first introduce the data distribution view of K-means, and the resultant uniform effect on highly imbalanced data sets. Eight external measures widely used in recent data mining tasks are also collected as candidates for K-means evaluation. 
Then, we demonstrate that only three measures, namely the variation of information (VI), the van Dongen criterion (VD) and the Mirkin metric (M), can detect the negative uniform effect of K-means in the clustering results. We also provide new normalization schemes for these three measures, i.e., VI_norm', VD_norm' and M_norm', which enables the cross-data comparisons of clustering qualities. Finally, we explore some properties such as the consistency and sensitivity of the three measures, and give some advice on how to use them in K-means practice." ] }
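In the same spirit as the comparisons described above, the sketch below clusters a toy data set of 50 observations with an agglomerative (hierarchical) algorithm for several numbers of groups and scores the results against the true partition with the Rand, adjusted Rand and Fowlkes-Mallows indices. The data and settings are illustrative and not those of the cited study.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import rand_score, adjusted_rand_score, fowlkes_mallows_score

# Toy data with a known ground-truth partition into 3 groups.
X, y_true = make_blobs(n_samples=50, centers=3, random_state=0)

for k in (2, 3, 4):
    y_pred = AgglomerativeClustering(n_clusters=k).fit_predict(X)
    print(f"k={k}",
          "Rand:", round(rand_score(y_true, y_pred), 3),
          "ARI:", round(adjusted_rand_score(y_true, y_pred), 3),
          "FM:", round(fowlkes_mallows_score(y_true, y_pred), 3))
```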
1907.12868
2965035401
Collagen fiber orientations in bones, visible with Second Harmonic Generation (SHG) microscopy, represent the inner structure and its alteration due to influences like cancer. While analyses of these orientations are valuable for medical research, it is not feasible to analyze the needed large amounts of local orientations manually. Since we have uncertain borders for these local orientations, only rough regions can be segmented instead of a pixel-wise segmentation. We analyze the effect of these uncertain borders on human performance in a user study. Furthermore, we compare a variety of 2D and 3D methods, such as classical approaches like Fourier analysis, with state-of-the-art deep neural networks for the classification of local fiber orientations. We present a general way to use pretrained 2D weights in 3D neural networks, such as Inception-ResNet-3D, a 3D extension of Inception-ResNet-v2. In a 10-fold cross-validation, our two-stage segmentation based on Inception-ResNet-3D and transferred 2D ImageNet weights achieves a human-comparable accuracy.
Currently, neural networks are the state of the art in image classification (e.g. on ImageNet @cite_20 ). A variety of network architectures have emerged over the years @cite_3 @cite_0 @cite_23 @cite_11 @cite_15 @cite_14 . These networks started with simple architectures (e.g. VGG-16 @cite_0 ) and integrated new structural elements such as residual @cite_23 and inception blocks @cite_15 as they were developed and proved their superior performance. This development increased the top-1 accuracy on the ImageNet test set from 71.3%, while the depth and thereby the complexity grew from 23 to 572 layers (values are based on the reference implementations in Keras, https://keras.io/applications).
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_0", "@cite_23", "@cite_15", "@cite_20", "@cite_11" ], "mid": [ "2964350391", "2618530766", "2962835968", "2194775991", "2097117768", "2117539524", "2963446712" ], "abstract": [ "Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question: Are there any benefits to combining Inception architectures with residual connections? Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4 networks, we achieve 3.08 top-5 error on the test set of the ImageNet classification (CLS) challenge.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 , respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. 
We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.", "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. 
Whereas traditional convolutional networks with L layers have L connections (one between each layer and its subsequent layer), our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet." ] }
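The accuracy and depth values quoted in the related-work paragraph above refer to the reference implementations in keras.applications. A minimal sketch of loading two of these pretrained models and inspecting their depth is given below; the accuracy numbers come from the Keras documentation and are not recomputed here.

```python
# Minimal sketch: load pretrained reference implementations from keras.applications
# and compare their depth. Downloading the ImageNet weights requires internet access.
from tensorflow.keras.applications import VGG16, InceptionResNetV2

vgg = VGG16(weights="imagenet")
irv2 = InceptionResNetV2(weights="imagenet")

print("VGG-16 layers:", len(vgg.layers))
print("Inception-ResNet-v2 layers:", len(irv2.layers))
```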
1907.12868
2965035401
Collagen fiber orientations in bones, visible with Second Harmonic Generation (SHG) microscopy, represent the inner structure and its alteration due to influences like cancer. While analyses of these orientations are valuable for medical research, it is not feasible to analyze the needed large amounts of local orientations manually. Since we have uncertain borders for these local orientations, only rough regions can be segmented instead of a pixel-wise segmentation. We analyze the effect of these uncertain borders on human performance in a user study. Furthermore, we compare a variety of 2D and 3D methods, such as classical approaches like Fourier analysis, with state-of-the-art deep neural networks for the classification of local fiber orientations. We present a general way to use pretrained 2D weights in 3D neural networks, such as Inception-ResNet-3D, a 3D extension of Inception-ResNet-v2. In a 10-fold cross-validation, our two-stage segmentation based on Inception-ResNet-3D and transferred 2D ImageNet weights achieves a human-comparable accuracy.
Semantic segmentation assigns a class to every pixel in an image and is thus an extension of the classification problem. @cite_24 first proposed fully convolutional networks to solve semantic segmentation. U-Net @cite_25 is a network for semantic segmentation which was designed for medical images. Semantic segmentation networks often consist of a down- and an upsampling part @cite_24 @cite_25 .
{ "cite_N": [ "@cite_24", "@cite_25" ], "mid": [ "1903029394", "1901129140" ], "abstract": [ "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net ." ] }
1907.12868
2965035401
Collagen fiber orientations in bones, visible with Second Harmonic Generation (SHG) microscopy, represent the inner structure and its alteration due to influences like cancer. While analyses of these orientations are valuable for medical research, it is not feasible to analyze the needed large amounts of local orientations manually. Since we have uncertain borders for these local orientations, only rough regions can be segmented instead of a pixel-wise segmentation. We analyze the effect of these uncertain borders on human performance in a user study. Furthermore, we compare a variety of 2D and 3D methods, such as classical approaches like Fourier analysis, with state-of-the-art deep neural networks for the classification of local fiber orientations. We present a general way to use pretrained 2D weights in 3D neural networks, such as Inception-ResNet-3D, a 3D extension of Inception-ResNet-v2. In a 10-fold cross-validation, our two-stage segmentation based on Inception-ResNet-3D and transferred 2D ImageNet weights achieves a human-comparable accuracy.
However, the current state-of-the-art approaches for image classification and semantic segmentation have two major drawbacks in the context of classifying local fiber orientations with uncertain borders: we have 3D data, and the borders are highly uncertain. Most research focuses on 2D data, although @cite_12 showed that it is beneficial to use the 3D information for organ segmentation. Networks like PointNet @cite_8 can classify 3D point clouds, yet they do not handle dense 3D input such as ours. The network 3D-U-Net @cite_16 is an extension of U-Net to 3D data and is typically used to segment 3D objects like organs @cite_16 . This addresses the first drawback, while the second one remains: objects with uncertain borders, like our fiber orientations, are not well represented.
{ "cite_N": [ "@cite_16", "@cite_12", "@cite_8" ], "mid": [ "2464708700", "2790662084", "2560609797" ], "abstract": [ "This paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. We outline two attractive use cases of this method: (1) In a semi-automated setup, the user annotates some slices in the volume to be segmented. The network learns from these sparse annotations and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume that a representative, sparsely annotated training set exists. Trained on this data set, the network densely segments new volumetric images. The proposed network extends the previous u-net architecture from by replacing all 2D operations with their 3D counterparts. The implementation performs on-the-fly elastic deformations for efficient data augmentation during training. It is trained end-to-end from scratch, i.e., no pre-trained network is required. We test the performance of the proposed method on a complex, highly variable 3D structure, the Xenopus kidney, and achieve good results for both use cases.", "The purpose of this study is to evaluate and compare the performance of modern deep learning techniques for automatically recognizing and segmenting multiple organ regions on 3D CT images. CT image segmentation is one of the important task in medical image analysis and is still very challenging. Deep learning approaches have demonstrated the capability of scene recognition and semantic segmentation on nature images and have been used to address segmentation problems of medical images. Although several works showed promising results of CT image segmentation by using deep learning approaches, there is no comprehensive evaluation of segmentation performance of the deep learning on segmenting multiple organs on different portions of CT scans. In this paper, we evaluated and compared the segmentation performance of two different deep learning approaches that used 2D- and 3D deep convolutional neural networks (CNN) without- and with a pre-processing step. A conventional approach that presents the state-of-the-art performance of CT image segmentation without deep learning was also used for comparison. A dataset that includes 240 CT images scanned on different portions of human bodies was used for performance evaluation. The maximum number of 17 types of organ regions in each CT scan were segmented automatically and compared to the human annotations by using ratio of intersection over union (IU) as the criterion. The experimental results demonstrated the IUs of the segmentation results had a mean value of 79 and 67 by averaging 17 types of organs that segmented by a 3D- and 2D deep CNN, respectively. All the results of the deep learning approaches showed a better accuracy and robustness than the conventional segmentation method that used probabilistic atlas and graph-cut methods. The effectiveness and the usefulness of deep learning approaches were demonstrated for solving multiple organs segmentation problem on 3D CT images.", "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. 
Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption." ] }
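The 3D variants discussed above essentially replace every 2D operation by its 3D counterpart so that dense volumetric input can be processed. A minimal sketch of this substitution, with illustrative input shape and filter counts, could look as follows; it is not the 3D-U-Net architecture itself.

```python
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(32, 64, 64, 1))     # depth, height, width, channels
d1 = layers.Conv3D(16, 3, padding="same", activation="relu")(inp)
p1 = layers.MaxPooling3D(2)(d1)
bott = layers.Conv3D(32, 3, padding="same", activation="relu")(p1)
u1 = layers.UpSampling3D(2)(bott)
u1 = layers.Concatenate()([u1, d1])
out = layers.Conv3D(1, 1, activation="sigmoid")(u1)   # per-voxel class score

model = Model(inp, out)
print(model.output_shape)   # (None, 32, 64, 64, 1)
```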
1907.12868
2965035401
Collagen fiber orientations in bones, visible with Second Harmonic Generation (SHG) microscopy, represent the inner structure and its alteration due to influences like cancer. While analyses of these orientations are valuable for medical research, it is not feasible to analyze the needed large amounts of local orientations manually. Since we have uncertain borders for these local orientations, only rough regions can be segmented instead of a pixel-wise segmentation. We analyze the effect of these uncertain borders on human performance in a user study. Furthermore, we compare a variety of 2D and 3D methods, such as classical approaches like Fourier analysis, with state-of-the-art deep neural networks for the classification of local fiber orientations. We present a general way to use pretrained 2D weights in 3D neural networks, such as Inception-ResNet-3D, a 3D extension of Inception-ResNet-v2. In a 10-fold cross-validation, our two-stage segmentation based on Inception-ResNet-3D and transferred 2D ImageNet weights achieves a human-comparable accuracy.
While 3D extensions of Inception-ResNet-v2 have been presented in @cite_2 @cite_22 , the use of pretrained 2D weights is less common. In parallel to our research, a 2D-to-3D weight transfer strategy was proposed in @cite_17 , which is most similar to ours (see subsec:weight ).
{ "cite_N": [ "@cite_17", "@cite_22", "@cite_2" ], "mid": [ "2962818993", "2617750261", "2770853452" ], "abstract": [ "Low-dose computed tomography (LDCT) has attracted major attention in the medical imaging field, since CT-associated X-ray radiation carries health risks for patients. The reduction of the CT radiation dose, however, compromises the signal-to-noise ratio, which affects image quality and diagnostic performance. Recently, deep-learning-based algorithms have achieved promising results in LDCT denoising, especially convolutional neural network (CNN) and generative adversarial network (GAN) architectures. This paper introduces a conveying path-based convolutional encoder-decoder (CPCE) network in 2-D and 3-D configurations within the GAN framework for LDCT denoising. A novel feature of this approach is that an initial 3-D CPCE denoising model can be directly obtained by extending a trained 2-D CNN, which is then fine-tuned to incorporate 3-D spatial information from adjacent slices. Based on the transfer learning from 2-D to 3-D, the 3-D network converges faster and achieves a better denoising performance when compared with a training from scratch. By comparing the CPCE network with recently published work based on the simulated Mayo data set and the real MGH data set, we demonstrate that the 3-D CPCE denoising model has a better performance in that it suppresses image noise and preserves subtle structures.", "Deep Neural Networks (DNNs) have shown to outperform traditional methods in various visual recognition tasks including Facial Expression Recognition (FER). In spite of efforts made to improve the accuracy of FER systems using DNN, existing methods still are not generalizable enough in practical applications. This paper proposes a 3D Convolutional Neural Network method for FER in videos. This new network architecture consists of 3D Inception-ResNet layers followed by an LSTM unit that together extracts the spatial relations within facial images as well as the temporal relations between different frames in the video. Facial landmark points are also used as inputs to our network which emphasize on the importance of facial components rather than the facial regions that may not contribute significantly to generating facial expressions. Our proposed method is evaluated using four publicly available databases in subject-independent and cross-database tasks and outperforms state-of-the-art methods.", "The 3D convolutional neural network (CNN) is able to make full use of the spatial 3D context information of lung nodules, and the multi-view strategy has been shown to be useful for improving the performance of 2D CNN in classifying lung nodules. In this paper, we explore the classification of lung nodules using the 3D multi-view convolutional neural networks (MV-CNN) with both chain architecture and directed acyclic graph architecture, including 3D Inception and 3D Inception-ResNet. All networks employ the multi-view-one-network strategy. We conduct a binary classification (benign and malignant) and a ternary classification (benign, primary malignant and metastatic malignant) on Computed Tomography (CT) images from Lung Image Database Consortium and Image Database Resource Initiative database (LIDC-IDRI). All results are obtained via 10-fold cross validation. As regards the MV-CNN with chain architecture, results show that the performance of 3D MV-CNN surpasses that of 2D MV-CNN by a significant margin. 
Finally, a 3D Inception network achieved an error rate of 4.59% for the binary classification and 7.70% for the ternary classification, both of which represent superior results for the corresponding task. We compare the multi-view-one-network strategy with the one-view-one-network strategy. The results reveal that the multi-view-one-network strategy can achieve a lower error rate than the one-view-one-network strategy." ] }
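One common way to reuse pretrained 2D weights in a 3D network is to replicate each 2D convolution kernel along the new depth axis and rescale it; whether this matches the exact transfer strategy of the paper or of @cite_17 is not claimed here. A small numpy sketch of such an inflation step:

```python
import numpy as np

def inflate_2d_kernel(w2d, depth):
    """Turn a 2D conv kernel (kh, kw, c_in, c_out) into a 3D kernel
    (depth, kh, kw, c_in, c_out) by replication along the new axis.

    Dividing by `depth` keeps the filter response on an input that is
    constant along depth roughly equal to the original 2D response.
    """
    w3d = np.repeat(w2d[np.newaxis, ...], depth, axis=0)
    return w3d / depth

# Example: inflate a 3x3 kernel with 3 input and 16 output channels to 3x3x3.
w2d = np.random.randn(3, 3, 3, 16).astype(np.float32)
w3d = inflate_2d_kernel(w2d, depth=3)
print(w3d.shape)   # (3, 3, 3, 3, 16)
```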
1907.12868
2965035401
Collagen fiber orientations in bones, visible with Second Harmonic Generation (SHG) microscopy, represent the inner structure and its alteration due to influences like cancer. While analyses of these orientations are valuable for medical research, it is not feasible to analyze the needed large amounts of local orientations manually. Since we have uncertain borders for these local orientations, only rough regions can be segmented instead of a pixel-wise segmentation. We analyze the effect of these uncertain borders on human performance in a user study. Furthermore, we compare a variety of 2D and 3D methods, such as classical approaches like Fourier analysis, with state-of-the-art deep neural networks for the classification of local fiber orientations. We present a general way to use pretrained 2D weights in 3D neural networks, such as Inception-ResNet-3D, a 3D extension of Inception-ResNet-v2. In a 10-fold cross-validation, our two-stage segmentation based on Inception-ResNet-3D and transferred 2D ImageNet weights achieves a human-comparable accuracy.
Collagen structures in SHG images have been analyzed in several publications @cite_19 @cite_10 @cite_18 @cite_21 @cite_9 @cite_1 @cite_6 , both in tissue @cite_19 and in bones @cite_18 @cite_10 . @cite_1 presented how Fourier analysis can be used to investigate the orientation of collagen fibers. The Fourier analysis was extended from small regions to whole scans in @cite_21 @cite_9 @cite_6 . The analysis classifies small image parts as anisotropic, isotropic or dark, and these classifications were used to calculate the distribution of classes over an image. In @cite_9 these distributions were used to detect injured tendons. Moreover, @cite_21 showed that the change of this distribution due to aging can be used to determine the age of pigs. @cite_6 used the 3D information of SHG data and could show an increase in performance.
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_21", "@cite_1", "@cite_6", "@cite_19", "@cite_10" ], "mid": [ "2621852502", "2017323918", "1980068389", "2056254464", "2016266030", "2077572701", "2766462831" ], "abstract": [ "Interfaces provide the structural basis of essential bone functions. In the hierarchical structure of bone tissue, heterogeneities such as porosity or boundaries are found at scales ranging from nanometers to millimeters, all of which contributing to macroscopic properties. To date, however, the complexity or limitations of currently used imaging methods restrict our understanding of this functional integration. Here we address this issue using label-free third-harmonic generation (THG) microscopy. We find that the porous lacuno-canalicular network (LCN), revealing the geometry of osteocytes in the bone matrix, can be directly visualized in 3D with submicron precision over millimetric fields of view compatible with histology. THG also reveals interfaces delineating volumes formed at successive remodeling stages. Finally, we show that the structure of the LCN can be analyzed in relation with that of the extracellular matrix and larger-scale structures by simultaneously recording THG and second-harmonic generation (SHG) signals relating to the collagen organization.", "Fourier transform-second harmonic generation (FT-SHG) imaging is used as a technique for evaluating collagenase-induced injury in horse tendons. The differences in collagen fiber organization between normal and injured tendon are quantified. Results indicate that the organization of collagen fibers is regularly oriented in normal tendons and randomly organized in injured tendons. This is further supported through the use of additional metrics, in particular, the number of dark (no minimal signal) and isotropic (no preferred fiber orientation) regions in the images, and the ratio of forward-to-backward second-harmonic intensity. FT-SHG microscopy is also compared with the conventional polarized light microscopy and is shown to be more sensitive to assessing injured tendons than the latter. Moreover, sample preparation artifacts that affect the quantitative evaluation of collagen fiber organization can be circumvented by using FT-SHG microscopy. The technique has potential as an assessment tool for evaluating the impact of various injuries that affect collagen fiber organization.", "Abstract We propose the use of second-harmonic generation (SHG) microscopy for imaging collagen fibers in porcine femoral cortical bone. The technique is compared with scanning electron microscopy (SEM). SHG microscopy is shown to have excellent potential for bone imaging primarily due its intrinsic specificity to collagen fibers, which results in high contrast images without the need for specimen staining. Furthermore, this technique's ability to quantitatively assess collagen fiber organization is evaluated through an exploratory examination of bone structure as a function of age, from very young to mature bone. In particular, four different age groups: 1 month, 3.5 months, 6 months, and 30 months, were studied. Specifically, we employ the recently developed Fourier transform-second harmonic generation (FT-SHG) imaging technique for the quantification of the structural changes, and observe that as the bone develops, there is an overall reduction in porosity, the number of osteons increases, and the collagen fibers become comparatively more organized. 
It is also observed that the variations in structure across the whole cross-section of the bone increase with age. The results of this work show that quantitative SHG microscopy can serve as a valuable tool for evaluating the structural organization of collagen fibers in ex vivo bone studies.", "Fourier transform-second-harmonic generation imaging is employed to obtain quantitative metrics of collagen fibers in biological tissues. In particular, the preferred orientation and maximum spatial frequency of collagen fibers for selected regions of interest in porcine trachea, ear, and cornea are determined. These metrics remain consistent when applied to collagen fibers in the ear, which can be expected from observation. Collagen fibers in the trachea are more random with large standard deviations in orientation, and large variations in maximum spatial frequency. In addition, these metrics are used to investigate structural changes through a 3D stack of the cornea. This technique can be used as a quantitative marker to assess the structure of collagen fibers that may change due to damage from disease or physical injury.", "We present three-dimensional Fourier transform-second-harmonic generation (3D FT-SHG) imaging, a generalization of the previously reported two-dimensional FT-SHG, to quantify collagen fiber organization from 3D image stacks of biological tissues. The current implementation calculates 3D preferred orientation of a region of interest, and classifies regions of interest based on orientation anisotropy and average voxel intensity. Presented are some example applications of the technique which reveal the layered structure of collagen fibers in porcine sclera, and estimates the cut angle of porcine tendon tissues. This technique shows promising potential for studying biological tissues that contain fibrillar structures in 3D.", "Although the nonlinear optical effect known as second-harmonic generation (SHG) has been recognized since the earliest days of laser physics and was demonstrated through a microscope over 25 years ago, only in the past few years has it begun to emerge as a viable microscope imaging contrast mechanism for visualization of cell and tissue structure and function. Only small modifications are required to equip a standard laser-scanning two-photon microscope for second-harmonic imaging microscopy (SHIM). Recent studies of the three-dimensional in vivo structures of well-ordered protein assemblies, such as collagen, microtubules and muscle myosin, are beginning to establish SHIM as a nondestructive imaging modality that holds promise for both basic research and clinical pathology. Thus far the best signals have been obtained in a transmitted light geometry that precludes in vivo measurements on large living animals. This drawback may be addressed through improvements in the collection of SHG signals via an epi-illumination microscope configuration. In addition, SHG signals from certain membrane-bound dyes have been shown to be highly sensitive to membrane potential. Although this indicates that SHIM may become a valuable tool for probing cell physiology, the small signal size would limit the number of photons that could be collected during the course of a fast action potential. Better dyes and optimized microscope optics could ultimately lead to the imaging of neuronal electrical activity with SHIM.", "Abstract Second-harmonic generation imaging (SHG) captures triple helical collagen molecules near tissue surfaces. 
Biomedical research routinely utilizes various imaging software packages to quantify SHG signals for collagen content and distribution estimates in modern tissue samples including bone. For the first time using SHG, samples of modern, medieval, and ice age bones were imaged to test the applicability of SHG to ancient bone from a variety of ages, settings, and taxa. Four independent techniques including Raman spectroscopy, FTIR spectroscopy, radiocarbon dating protocols, and mass spectrometry-based protein sequencing, confirm the presence of protein, consistent with the hypothesis that SHG imaging detects ancient bone collagen. These results suggest that future studies have the potential to use SHG imaging to provide new insights into the composition of ancient bone, to characterize ancient bone disorders, to investigate collagen preservation within and between various taxa, and to monitor collagen decay regimes in different depositional environments." ] }
1907.12868
2965035401
Collagen fiber orientations in bones, visible with Second Harmonic Generation (SHG) microscopy, represent the inner structure and its alteration due to influences like cancer. While analyses of these orientations are valuable for medical research, it is not feasible to analyze the required large number of local orientations manually. Since the borders of these local orientations are uncertain, only rough regions can be segmented instead of a pixel-wise segmentation. We analyze the effect of these uncertain borders on human performance with a user study. Furthermore, we compare a variety of 2D and 3D methods, from classical approaches like Fourier analysis to state-of-the-art deep neural networks, for the classification of local fiber orientations. We present a general way to use pretrained 2D weights in 3D neural networks, such as Inception-ResNet-3D, a 3D extension of Inception-ResNet-v2. In a 10-fold cross-validation, our two-stage segmentation based on Inception-ResNet-3D and transferred 2D ImageNet weights achieves human-comparable accuracy.
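For illustration, a minimal sketch of one common way to reuse pretrained 2D weights in a 3D convolutional network is given below (the I3D-style "inflation" trick). The abstract does not specify the paper's actual transfer scheme, so this is an assumed technique, and the function name is ours.

```python
import torch
import torch.nn as nn

def inflate_conv2d_to_3d(conv2d: nn.Conv2d, depth: int) -> nn.Conv3d:
    """Inflate a pretrained 2D convolution into a 3D one by replicating each
    2D kernel along the new depth axis and rescaling by the depth, so that
    activations keep roughly the same magnitude. This is a generic sketch,
    not necessarily the transfer method used in the cited paper."""
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(depth, *conv2d.kernel_size),
                       stride=(1, *conv2d.stride),
                       padding=(depth // 2, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        # (out, in, kH, kW) -> (out, in, depth, kH, kW), divided by depth
        w3d = conv2d.weight.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth
        conv3d.weight.copy_(w3d)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d
```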
@cite_13 claim to be the first to analyze SHG images with neural networks. They estimated the elastic properties of collagenous tissue; a classification or segmentation of fibers was not part of their investigation.
{ "cite_N": [ "@cite_13" ], "mid": [ "2755176028" ], "abstract": [ "Abstract Biological collagenous tissues comprised of networks of collagen fibers are suitable for a broad spectrum of medical applications owing to their attractive mechanical properties. In this study, we developed a noninvasive approach to estimate collagenous tissue elastic properties directly from microscopy images using Machine Learning (ML) techniques. Glutaraldehyde-treated bovine pericardium (GLBP) tissue, widely used in the fabrication of bioprosthetic heart valves and vascular patches, was chosen to develop a representative application. A Deep Learning model was designed and trained to process second harmonic generation (SHG) images of collagen networks in GLBP tissue samples, and directly predict the tissue elastic mechanical properties. The trained model is capable of identifying the overall tissue stiffness with a classification accuracy of 84 , and predicting the nonlinear anisotropic stress-strain curves with average regression errors of 0.021 and 0.031. Thus, this study demonstrates the feasibility and great potential of using the Deep Learning approach for fast and noninvasive assessment of collagenous tissue elastic properties from microstructural images. Statement of Significance In this study, we developed, to our best knowledge, the first Deep Learning-based approach to estimate the elastic properties of collagenous tissues directly from noninvasive second harmonic generation images. The success of this study holds promise for the use of Machine Learning techniques to noninvasively and efficiently estimate the mechanical properties of many structure-based biological materials, and it also enables many potential applications such as serving as a quality control tool to select tissue for the manufacturing of medical devices (e.g. bioprosthetic heart valves)." ] }
1907.12400
2965074212
In light of the rising demand for biometric-authentication systems, preventing face spoofing attacks is a critical issue for the safe deployment of face recognition systems. Here, we propose an efficient liveness detection algorithm that requires minimal hardware and only a small database, making it suitable for resource-constrained devices such as mobile phones. Utilizing one monocular visible light camera, the proposed algorithm takes two facial photos, one taken with a flash, the other without a flash. The proposed @math descriptor is constructed by leveraging two types of reflection: (i) specular reflections from the iris region that have a specific intensity distribution depending on liveness, and (ii) diffuse reflections from the entire face region that represents the 3D structure of a subject's face. Classifiers trained with @math descriptor outperforms other flash-based liveness detection algorithms on both an in-house database and on publicly available NUAA and Replay-Attack databases. Moreover, the proposed algorithm achieves comparable accuracy to that of an end-to-end, deep neural network classifier, while being approximately ten-times faster execution speed.
Current liveness detection technologies against spoofing attacks are summarized below. Face spoofing attacks can be subdivided into two major categories: 2D attacks and 3D attacks. The former include print attacks and video-replay attacks, while the latter include 3D spoofing mask attacks. Several publicly available face liveness databases simulate these attacks. To name a few, the NUAA @cite_20 and Print-Attack @cite_11 databases simulate photo attacks. The Replay-Attack @cite_28 and CASIA Face Anti-Spoofing @cite_26 datasets simulate replay attacks in addition to photo attacks. The 3D Mask Attack Database @cite_29 and HKBU-Mask Attack with Real World Variations @cite_14 simulate 3D mask attacks. Example countermeasures for each attack type are summarized below.
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_28", "@cite_29", "@cite_20", "@cite_11" ], "mid": [ "1982209341", "2493468541", "2163487272", "2125320497", "1889383825", "1979339361" ], "abstract": [ "Face antispoofing has now attracted intensive attention, aiming to assure the reliability of face biometrics. We notice that currently most of face antispoofing databases focus on data with little variations, which may limit the generalization performance of trained models since potential attacks in real world are probably more complex. In this paper we release a face antispoofing database which covers a diverse range of potential attack variations. Specifically, the database contains 50 genuine subjects, and fake faces are made from the high quality records of the genuine faces. Three imaging qualities are considered, namely the low quality, normal quality and high quality. Three fake face attacks are implemented, which include warped photo attack, cut photo attack and video attack. Therefore each subject contains 12 videos (3 genuine and 9 fake), and the final database contains 600 video clips. Test protocol is provided, which consists of 7 scenarios for a thorough evaluation from all possible aspects. A baseline algorithm is also given for comparison, which explores the high frequency information in the facial region to determine the liveness. We hope such a database can serve as an evaluation platform for future researches in the literature.", "3D Mask face spoofing attack becomes new challenge and attracts more research interests in recent years. However, due to the deficiency number and limited variations of database, there are few methods be proposed to aim on it. Meanwhile, most of existing databases only concentrate on the anti-spoofing of different kinds of attacks and ignore the environmental changes in real world applications. In this paper, we build a new 3D mask anti-spoofing database with more variations to simulate the real world scenario. The proposed database contains 12 masks from two companies with different appearance quality. 7 Cameras from the stationary and mobile devices and 6 lighting settings that cover typical illumination conditions are also included. Therefore, each subject contains 42 (7 cameras * 6 lightings) genuine and 42 mask sequences and the total size is 1008 videos. Through the benchmark experiments, directions of the future study are pointed out. We plan to release the database as an platform to evaluate methods under different variations.", "Spoofing attacks are one of the security traits that biometric recognition systems are proven to be vulnerable to. When spoofed, a biometric recognition system is bypassed by presenting a copy of the biometric evidence of a valid user. Among all biometric modalities, spoofing a face recognition system is particularly easy to perform: all that is needed is a simple photograph of the user. In this paper, we address the problem of detecting face spoofing attacks. In particular, we inspect the potential of texture features based on Local Binary Patterns (LBP) and their variations on three types of attacks: printed photographs, and photos and videos displayed on electronic screens of different sizes. For this purpose, we introduce REPLAY-ATTACK, a novel publicly available face spoofing database which contains all the mentioned types of attacks. 
We conclude that LBP, with ∼15 Half Total Error Rate, show moderate discriminability when confronted with a wide set of attack types.", "The problem of detecting face spoofing attacks (presentation attacks) has recently gained a well-deserved popularity. Mainly focusing on 2D attacks forged by displaying printed photos or replaying recorded videos on mobile devices, a significant portion of these studies ground their arguments on the flatness of the spoofing material in front of the sensor. In this paper, we inspect the spoofing potential of subject-specific 3D facial masks for 2D face recognition. Additionally, we analyze Local Binary Patterns based coun-termeasures using both color and depth data, obtained by Kinect. For this purpose, we introduce the 3D Mask Attack Database (3DMAD), the first publicly available 3D spoofing database, recorded with a low-cost depth camera. Extensive experiments on 3DMAD show that easily attainable facial masks can pose a serious threat to 2D face recognition systems and LBP is a powerful weapon to eliminate it.", "Spoofing with photograph or video is one of the most common manner to circumvent a face recognition system. In this paper, we present a real-time and non-intrusive method to address this based on individual images from a generic webcamera. The task is formulated as a binary classification problem, in which, however, the distribution of positive and negative are largely overlapping in the input space, and a suitable representation space is hence of importance. Using the Lambertian model, we propose two strategies to extract the essential information about different surface properties of a live human face or a photograph, in terms of latent samples. Based on these, we develop two new extensions to the sparse logistic regression model which allow quick and accurate spoof detection. Primary experiments on a large photo imposter database show that the proposed method gives preferable detection performance compared to others.", "A common technique to by-pass 2-D face recognition systems is to use photographs of spoofed identities. Unfortunately, research in counter-measures to this type of attack have not kept-up - even if such threats have been known for nearly a decade, there seems to exist no consensus on best practices, techniques or protocols for developing and testing spoofing-detectors for face recognition. We attribute the reason for this delay, partly, to the unavailability of public databases and protocols to study solutions and compare results. To this purpose we introduce the publicly available PRINT-ATTACK database and exemplify how to use its companion protocol with a motion-based algorithm that detects correlations between the person's head movements and the scene context. The results are to be used as basis for comparison to other counter-measure techniques. The PRINT-ATTACK database contains 200 videos of real-accesses and 200 videos of spoof attempts using printed photographs of 50 different identities." ] }
1907.12400
2965074212
In light of the rising demand for biometric-authentication systems, preventing face spoofing attacks is a critical issue for the safe deployment of face recognition systems. Here, we propose an efficient liveness detection algorithm that requires minimal hardware and only a small database, making it suitable for resource-constrained devices such as mobile phones. Utilizing one monocular visible light camera, the proposed algorithm takes two facial photos, one taken with a flash, the other without a flash. The proposed @math descriptor is constructed by leveraging two types of reflection: (i) specular reflections from the iris region that have a specific intensity distribution depending on liveness, and (ii) diffuse reflections from the entire face region that represents the 3D structure of a subject's face. Classifiers trained with @math descriptor outperforms other flash-based liveness detection algorithms on both an in-house database and on publicly available NUAA and Replay-Attack databases. Moreover, the proposed algorithm achieves comparable accuracy to that of an end-to-end, deep neural network classifier, while being approximately ten-times faster execution speed.
Recent 3D reconstruction and printing technologies have given malicious users the ability to produce realistic spoofing masks @cite_9 . One example countermeasure against such 3D attacks is multispectral imaging: @cite_24 report the effectiveness of short-wave infrared (SWIR) imaging for detecting masks. Another approach is remote photoplethysmography (rPPG), which estimates pulse rhythms from periodic changes in face color @cite_10 . In this paper, however, we do not consider 3D attacks, because the high cost of producing 3D masks makes them less likely. Our work focuses on preventing photo attacks and replay attacks.
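As a rough illustration of the rPPG idea mentioned above (not the local rPPG correlation model of @cite_10 ), a toy sketch that recovers a pulse rate from the mean green-channel intensity of face crops might look as follows; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def estimate_pulse_rate(face_frames, fps=30.0):
    """Toy rPPG sketch: estimate a pulse rate (in bpm) from the mean
    green-channel intensity of cropped face frames over time.
    face_frames: array of shape (T, H, W, 3), RGB crops of the face."""
    signal = face_frames[..., 1].mean(axis=(1, 2))   # mean green value per frame
    signal = signal - signal.mean()                   # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)            # plausible heart rates: 42-240 bpm
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq
```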
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_10" ], "mid": [ "2511115389", "2907172176", "2518976115" ], "abstract": [ "Recent studies point out that spoofing attacks using facial masks still are a severe problem for current biometric face recognition (FR) systems. As such systems are becoming more frequently used, for example, for automated border crossing or access control to critical infrastructure, advanced anti-spoofing techniques are necessary to counter these attacks. This work presents a novel, cross-modal approach that enhances existing solutions for face verification and uses multispectral short wave infrared (SWIR) imaging to ensure the authenticity of a face even in the presence of partial disguises and masks. It is evaluated on a dataset containing 137 subjects and a variety of spoofing attacks. Using a commercial FR system, it successfully rejects all attempts to counterfeit a foreign face with a false acceptance rate FAR cf = 0 and most attempts to disguise the own identity with FAR dg = 1 at a false rejection rate of FRR < 5 using SWIR images for verification.", "With the advanced 3D reconstruction and printing technologies, creating a super-real 3D facial mask becomes feasible at an affordable cost. This brings a new challenge to face presentation attack detection (PAD) against 3D facial mask attack. As such, there is an urgent need to solve this problem as many face recognition systems have been deployed in real-world applications. Since this is a relatively new research problem, few studies has been conducted and reported. In order to attract more attentions on 3D mask face PAD, this book chapter summarizes the progress in the past few years, as well as publicly available datasets. Finally, some open problems in 3D mask attack are discussed.", "3D mask spoofing attack has been one of the main challenges in face recognition. Among existing methods, texture-based approaches show powerful abilities and achieve encouraging results on 3D mask face anti-spoofing. However, these approaches may not be robust enough in application scenarios and could fail to detect imposters with hyper-real masks. In this paper, we propose a novel approach to 3D mask face anti-spoofing from a new perspective, by analysing heartbeat signal through remote Photoplethysmography (rPPG). We develop a novel local rPPG correlation model to extract discriminative local heartbeat signal patterns so that an imposter can better be detected regardless of the material and quality of the mask. To further exploit the characteristic of rPPG distribution on real faces, we learn a confidence map through heartbeat signal strength to weight local rPPG correlation pattern for classification. Experiments on both public and self-collected datasets validate that the proposed method achieves promising results under intra and cross dataset scenario." ] }
1907.12508
2964542684
Many real-world datasets are labeled with natural orders, i.e., ordinal labels. Ordinal regression is a method to predict ordinal labels that finds a wide range of applications in data-rich science domains, such as medical, social and economic sciences. Most existing approaches work well for a single ordinal regression task. However, they ignore the task relatedness when there are multiple related tasks. Multi-task learning (MTL) provides a framework to encode task relatedness, to bridge data from all tasks, and to simultaneously learn multiple related tasks to improve the generalization performance. Even though MTL methods have been extensively studied, there is barely existing work investigating MTL for data with ordinal labels. We tackle multiple ordinal regression problems via sparse and deep multi-task approaches, i.e., two regularized multi-task ordinal regression (RMTOR) models for small datasets and two deep neural networks based multi-task ordinal regression (DMTOR) models for large-scale datasets. The performance of the proposed multi-task ordinal regression models (MTOR) is demonstrated on three real-world medical datasets for multi-stage disease diagnosis. Our experimental results indicate that our proposed MTOR models markedly improve the prediction performance comparing with single-task learning (STL) ordinal regression models.
Ordinal regression is an approach that aims to classify data with naturally ordered labels and plays an important role in many data-rich science domains. According to the commonly used taxonomy of ordinal regression @cite_62 , existing methods are categorized into naive approaches, ordinal binary decomposition approaches and threshold models. The naive approaches are the earliest approaches to ordinal regression; they convert the ordinal labels into numeric values and then apply standard regression or support vector regression @cite_8 @cite_16 . Since the distance between classes is unknown in this type of method, the real values used for the labels may undermine regression performance. Moreover, these regression learners are sensitive to the label representation rather than to the label order @cite_62 .
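A minimal sketch of such a naive approach, assuming integer labels 0..K-1 and using scikit-learn's linear regression as a stand-in regressor, could look like this; the function name is ours.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def naive_ordinal_regression(X_train, y_train, X_test, n_classes):
    """Naive approach sketch: treat ordinal labels 0..K-1 as real numbers,
    fit a standard regressor, and round predictions back into the label range.
    The arbitrary numeric spacing between labels is exactly the weakness
    discussed above."""
    reg = LinearRegression().fit(X_train, y_train.astype(float))
    y_pred = np.rint(reg.predict(X_test))
    return np.clip(y_pred, 0, n_classes - 1).astype(int)
```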
{ "cite_N": [ "@cite_62", "@cite_16", "@cite_8" ], "mid": [ "", "2153086202", "1570448133" ], "abstract": [ "", "When we have several related tasks, solving them simultaneously is shown to be more effective than solving them individually. This approach is called multi-task learning (MTL) and has been studied extensively. Existing approaches to MTL often treat all the tasks as uniformly related to each other and the relatedness of the tasks is controlled globally. For this reason, the existing methods can lead to undesired solutions when some tasks are not highly related to each other, and some pairs of related tasks can have significantly different solutions. In this paper, we propose a novel MTL algorithm that can overcome these problems. Our method makes use of a task network, which describes the relation structure among tasks. This allows us to deal with intricate relation structures in a systematic way. Furthermore, we control the relatedness of the tasks locally, so all pairs of related tasks are guaranteed to have similar solutions. We apply the above idea to support vector machines (SVMs) and show that the optimization problem can be cast as a second order cone program, which is convex and can be solved efficiently. The usefulness of our approach is demonstrated through simulations with protein super-family classification and ordinal regression problems.", "Data Mining: Practical Machine Learning Tools and Techniques offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining. Thorough updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including new material on Data Transformations, Ensemble Learning, Massive Data Sets, Multi-instance Learning, plus a new version of the popular Weka machine learning software developed by the authors. Witten, Frank, and Hall include both tried-and-true techniques of today as well as methods at the leading edge of contemporary research. *Provides a thorough grounding in machine learning concepts as well as practical advice on applying the tools and techniques to your data mining projects *Offers concrete tips and techniques for performance improvement that work by transforming the input or output in machine learning methods *Includes downloadable Weka software toolkit, a collection of machine learning algorithms for data mining tasks-in an updated, interactive interface. Algorithms in toolkit cover: data pre-processing, classification, regression, clustering, association rules, visualization" ] }
1907.12508
2964542684
Many real-world datasets are labeled with natural orders, i.e., ordinal labels. Ordinal regression is a method to predict ordinal labels that finds a wide range of applications in data-rich science domains, such as medical, social and economic sciences. Most existing approaches work well for a single ordinal regression task. However, they ignore the task relatedness when there are multiple related tasks. Multi-task learning (MTL) provides a framework to encode task relatedness, to bridge data from all tasks, and to simultaneously learn multiple related tasks to improve the generalization performance. Even though MTL methods have been extensively studied, there is barely existing work investigating MTL for data with ordinal labels. We tackle multiple ordinal regression problems via sparse and deep multi-task approaches, i.e., two regularized multi-task ordinal regression (RMTOR) models for small datasets and two deep neural networks based multi-task ordinal regression (DMTOR) models for large-scale datasets. The performance of the proposed multi-task ordinal regression models (MTOR) is demonstrated on three real-world medical datasets for multi-stage disease diagnosis. Our experimental results indicate that our proposed MTOR models markedly improve the prediction performance comparing with single-task learning (STL) ordinal regression models.
Ordinal binary decomposition approaches decompose the ordinal labels into several binary ones that are then estimated by multiple models @cite_50 @cite_54 . For example, @cite_50 transforms a @math -class ordinal problem into @math ordered binary classification problems, which are trained in conjunction with a decision tree learner so as to encode the ordering of the original ranks, i.e., @math binary classifiers are trained using the C4.5 algorithm. Threshold models are based on the idea of approximating a real-valued predictor and then partitioning the real line of ordinal values into segments. During the last decade, the two most popular families of threshold models have been support vector machine (SVM) models @cite_45 @cite_44 @cite_56 @cite_22 and generalized linear models for ordinal regression @cite_46 @cite_23 @cite_17 @cite_32 ; the former find the hyperplane that separates the segments by maximizing the margin, while the latter predict the ordinal labels by maximizing the likelihood given the training data.
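To make the decomposition concrete, a small sketch in the spirit of @cite_50 is given below; the class name is ours, and a scikit-learn decision tree stands in for the C4.5 learner used in the cited work.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

class OrdinalBinaryDecomposition:
    """Decompose a K-class ordinal problem (labels 0..K-1) into K-1 binary
    problems of the form "is y > k?", train one classifier per problem, and
    recombine the binary probabilities into class probabilities."""
    def __init__(self, n_classes, base=DecisionTreeClassifier()):
        self.n_classes = n_classes
        self.base = base

    def fit(self, X, y):
        # assumes both classes occur in each binary problem
        self.models_ = [clone(self.base).fit(X, (y > k).astype(int))
                        for k in range(self.n_classes - 1)]
        return self

    def predict(self, X):
        # P(y > k) from each of the K-1 binary models
        p_gt = np.column_stack([m.predict_proba(X)[:, 1] for m in self.models_])
        ones = np.ones((X.shape[0], 1))
        zeros = np.zeros((X.shape[0], 1))
        # P(y = k) = P(y > k-1) - P(y > k); slight inconsistencies are ignored by argmax
        p = np.hstack([ones, p_gt]) - np.hstack([p_gt, zeros])
        return p.argmax(axis=1)
```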
{ "cite_N": [ "@cite_22", "@cite_54", "@cite_32", "@cite_56", "@cite_44", "@cite_45", "@cite_50", "@cite_23", "@cite_46", "@cite_17" ], "mid": [ "2061879449", "2128186735", "2012838480", "1980896222", "2023508744", "2124105163", "1997855593", "2092408046", "2162086569", "2049329064" ], "abstract": [ "Support vector ordinal regression (SVOR) is a popular method to tackle ordinal regression problems. However, until now there were no effective algorithms proposed to address incremental SVOR learning due to the complicated formulations of SVOR. Recently, an interesting accurate on-line algorithm was proposed for training @math -support vector classification ( @math -SVC), which can handle a quadratic formulation with a pair of equality constraints. In this paper, we first present a modified SVOR formulation based on a sum-of-margins strategy. The formulation has multiple constraints, and each constraint includes a mixture of an equality and an inequality. Then, we extend the accurate on-line @math -SVC algorithm to the modified formulation, and propose an effective incremental SVOR algorithm. The algorithm can handle a quadratic formulation with multiple constraints, where each constraint is constituted of an equality and an inequality. More importantly, it tackles the conflicts between the equality and inequality constraints. We also provide the finite convergence analysis for the algorithm. Numerical experiments on the several benchmark and real-world data sets show that the incremental algorithm can converge to the optimal solution in a finite number of steps, and is faster than the existing batch and incremental SVOR algorithms. Meanwhile, the modified formulation has better accuracy than the existing incremental SVOR algorithm, and is as accurate as the sum-of-margins based formulation of Shashua and Levin.", "We present a reduction framework from ordinal regression to binary classification based on extended examples. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranking rule from the binary classifier. A weighted 0 1 loss of the binary classifier would then bound the mislabeling cost of the ranking rule. Our framework allows not only to design good ordinal regression algorithms based on well-tuned binary classification approaches, but also to derive new generalization bounds for ordinal regression from known bounds for binary classification. In addition, our framework unifies many existing ordinal regression algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms, which demonstrates the usefulness of our framework.", "There have been many studies that have documented the application of crash severity models to explore the relationship between accident severity and its contributing factors. Although a large amount of work has been done on different types of models, no research has been conducted about quantifying the sample size requirements for crash severity modeling. Similar to count data models, small data sets could significantly influence model performance. 
The objective of this study is therefore to examine the effects of sample size on the three most commonly used crash severity models: multinomial logit, ordered probit and mixed logit models. The study objective is accomplished via a Monte-Carlo approach using simulated and observed crash data. The results of this study are consistent with prior expectations in that small sample sizes significantly affect the development of crash severity models, no matter which type is used. Furthermore, among the three models, the mixed logit model requires the largest sample size, while the ordered probit model requires the lowest sample size. The sample size requirement for the multinomial logit model is located between these two models.", "In this letter, we propose two new support vector approaches for ordinal regression, which optimize multiple thresholds to define parallel discriminant hyperplanes for the ordinal scales. Both approaches guarantee that the thresholds are properly ordered at the optimal solution. The size of these optimization problems is linear in the number of training samples. The sequential minimal optimization algorithm is adapted for the resulting optimization problems; it is extremely easy to implement and scales efficiently as a quadratic function of the number of examples. The results of numerical experiments on some benchmark and real-world data sets, including applications of ordinal regression to information retrieval, verify the usefulness of these approaches.", "In this paper, we propose two new support vector approaches for ordinal regression, which optimize multiple thresholds to define parallel discriminant hyperplanes for the ordinal scales. Both approaches guarantee that the thresholds are properly ordered at the optimal solution. The size of these optimization problems is linear in the number of training samples. The SMO algorithm is adapted for the resulting optimization problems; it is extremely easy to implement and scales efficiently as a quadratic function of the number of examples. The results of numerical experiments on benchmark datasets verify the usefulness of these approaches.", "We discuss the problem of ranking k instances with the use of a \"large margin\" principle. We introduce two main approaches: the first is the \"fixed margin\" policy in which the margin of the closest neighboring classes is being maximized — which turns out to be a direct generalization of SVM to ranking learning. The second approach allows for k - 1 different margins where the sum of margins is maximized. This approach is shown to reduce to v-SVM when the number of classes k - 2. Both approaches are optimal in size of 2l where l is the total number of training examples. Experiments performed on visual classification and \"collaborative filtering\" show that both approaches outperform existing ordinal regression algorithms applied for ranking and multi-class SVM applied to general multi-class classification.", "Machine learning methods for classification problems commonly assume that the class values are unordered. However, in many practical applications the class values do exhibit a natural order--for example, when learning how to grade. The standard approach to ordinal classification converts the class value into a numeric quantity and applies a regression learner to the transformed data, translating the output back into a discrete class value in a post-processing step. A disadvantage of this method is that it can only be applied in conjunction with a regression scheme. 
In this paper we present a simple method that enables standard classification algorithms to make use of ordering information in class attributes. By applying it in conjunction with a decision tree learner we show that it outperforms the naive approach, which treats the class values as an unordered set. Compared to special-purpose algorithms for ordinal classification our method has the advantage that it can be applied without any modification to the underlying learning scheme.", "The paper re-examines existing estimators for the panel data fixed effects ordered logit model, proposes a new one, and studies the sampling properties of these estimators in a series of Monte Carlo simulations. There are two main findings. First, we show that some of the estimators used in the literature are inconsistent, and provide reasons for the inconsistency. Second, the new estimator is never outperformed by the others, seems to be substantially more immune to small sample bias than other consistent estimators, and is easy to implement. The empirical relevance is illustrated in an application to the effect of unemployment on life satisfaction.", "This article describes the gologit2 program for generalized ordered logit models. gologit2 is inspired by Vincent Fu’s gologit routine (Stata Technical Bulletin Reprints 8: 160–164) and is backward compatible with it but offers several additional powerful options. A major strength of gologit2 is that it can fit three special cases of the generalized model: the proportional odds parallel-lines model, the partial proportional odds model, and the logistic regression model. Hence, gologit2 can fit models that are less restrictive than the parallel-lines models fitted by ologit (whose assumptions are often violated) but more parsimonious and interpretable than those fitted by a nonordinal method, such as multinomial logistic regression (i.e., mlogit). Other key advantages of gologit2 include support for linear constraints, survey data estimation, and the computation of estimated probabilities via the predict command.", "This paper describes the use of ordered probit models to examine the risk of different injury levels sustained under all crash types, two-vehicle crashes, and single-vehicle crashes. The results suggest that pickups and sport utility vehicles are less safe than passenger cars under single-vehicle crash conditions. In two-vehicle crashes, however, these vehicle types are associated with less severe injuries for their drivers and more severe injuries for occupants of their collision partners. Other conclusions also are presented; for example, the results indicate that males and younger drivers in newer vehicles at lower speeds sustain less severe injuries." ] }
1907.12508
2964542684
Many real-world datasets are labeled with natural orders, i.e., ordinal labels. Ordinal regression is a method to predict ordinal labels that finds a wide range of applications in data-rich science domains, such as medical, social and economic sciences. Most existing approaches work well for a single ordinal regression task. However, they ignore the task relatedness when there are multiple related tasks. Multi-task learning (MTL) provides a framework to encode task relatedness, to bridge data from all tasks, and to simultaneously learn multiple related tasks to improve the generalization performance. Even though MTL methods have been extensively studied, there is barely existing work investigating MTL for data with ordinal labels. We tackle multiple ordinal regression problems via sparse and deep multi-task approaches, i.e., two regularized multi-task ordinal regression (RMTOR) models for small datasets and two deep neural networks based multi-task ordinal regression (DMTOR) models for large-scale datasets. The performance of the proposed multi-task ordinal regression models (MTOR) is demonstrated on three real-world medical datasets for multi-stage disease diagnosis. Our experimental results indicate that our proposed MTOR models markedly improve the prediction performance comparing with single-task learning (STL) ordinal regression models.
In @cite_45 , support vector ordinal regression (SVOR) is achieved by finding multiple thresholds that partition the real line of ordinal values into several consecutive intervals representing the ordered segments; however, it does not consider the ordinal inequalities on the thresholds. In @cite_44 @cite_56 , the authors take the ordinal inequalities on the thresholds into account and propose two approaches, using two types of thresholds for SVOR, by introducing explicit constraints. To deal with incremental SVOR learning, which is hampered by the complicated formulations of SVOR, @cite_22 proposes a modified SVOR formulation based on a sum-of-margins strategy that addresses the computational scalability issue of SVOR.
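The decision rule shared by these threshold models can be sketched as follows; this is a simplified illustration of the prediction step only, not the SVOR training procedure, and the function name is ours.

```python
import numpy as np

def threshold_predict(scores, thresholds):
    """Map real-valued scores f(x) to ordinal labels: the label is the number
    of (ordered) thresholds that the score exceeds."""
    thresholds = np.sort(thresholds)  # explicit ordering of the thresholds
    return np.searchsorted(thresholds, scores, side='right')

# e.g. thresholds [-1.0, 0.5] split the real line into labels {0, 1, 2}
labels = threshold_predict(np.array([-2.0, 0.0, 3.0]), np.array([-1.0, 0.5]))
# -> array([0, 1, 2])
```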
{ "cite_N": [ "@cite_44", "@cite_45", "@cite_22", "@cite_56" ], "mid": [ "2023508744", "2124105163", "2061879449", "1980896222" ], "abstract": [ "In this paper, we propose two new support vector approaches for ordinal regression, which optimize multiple thresholds to define parallel discriminant hyperplanes for the ordinal scales. Both approaches guarantee that the thresholds are properly ordered at the optimal solution. The size of these optimization problems is linear in the number of training samples. The SMO algorithm is adapted for the resulting optimization problems; it is extremely easy to implement and scales efficiently as a quadratic function of the number of examples. The results of numerical experiments on benchmark datasets verify the usefulness of these approaches.", "We discuss the problem of ranking k instances with the use of a \"large margin\" principle. We introduce two main approaches: the first is the \"fixed margin\" policy in which the margin of the closest neighboring classes is being maximized — which turns out to be a direct generalization of SVM to ranking learning. The second approach allows for k - 1 different margins where the sum of margins is maximized. This approach is shown to reduce to v-SVM when the number of classes k - 2. Both approaches are optimal in size of 2l where l is the total number of training examples. Experiments performed on visual classification and \"collaborative filtering\" show that both approaches outperform existing ordinal regression algorithms applied for ranking and multi-class SVM applied to general multi-class classification.", "Support vector ordinal regression (SVOR) is a popular method to tackle ordinal regression problems. However, until now there were no effective algorithms proposed to address incremental SVOR learning due to the complicated formulations of SVOR. Recently, an interesting accurate on-line algorithm was proposed for training @math -support vector classification ( @math -SVC), which can handle a quadratic formulation with a pair of equality constraints. In this paper, we first present a modified SVOR formulation based on a sum-of-margins strategy. The formulation has multiple constraints, and each constraint includes a mixture of an equality and an inequality. Then, we extend the accurate on-line @math -SVC algorithm to the modified formulation, and propose an effective incremental SVOR algorithm. The algorithm can handle a quadratic formulation with multiple constraints, where each constraint is constituted of an equality and an inequality. More importantly, it tackles the conflicts between the equality and inequality constraints. We also provide the finite convergence analysis for the algorithm. Numerical experiments on the several benchmark and real-world data sets show that the incremental algorithm can converge to the optimal solution in a finite number of steps, and is faster than the existing batch and incremental SVOR algorithms. Meanwhile, the modified formulation has better accuracy than the existing incremental SVOR algorithm, and is as accurate as the sum-of-margins based formulation of Shashua and Levin.", "In this letter, we propose two new support vector approaches for ordinal regression, which optimize multiple thresholds to define parallel discriminant hyperplanes for the ordinal scales. Both approaches guarantee that the thresholds are properly ordered at the optimal solution. The size of these optimization problems is linear in the number of training samples. 
The sequential minimal optimization algorithm is adapted for the resulting optimization problems; it is extremely easy to implement and scales efficiently as a quadratic function of the number of examples. The results of numerical experiments on some benchmark and real-world data sets, including applications of ordinal regression to information retrieval, verify the usefulness of these approaches." ] }
1907.12508
2964542684
Many real-world datasets are labeled with natural orders, i.e., ordinal labels. Ordinal regression is a method to predict ordinal labels that finds a wide range of applications in data-rich science domains, such as medical, social and economic sciences. Most existing approaches work well for a single ordinal regression task. However, they ignore the task relatedness when there are multiple related tasks. Multi-task learning (MTL) provides a framework to encode task relatedness, to bridge data from all tasks, and to simultaneously learn multiple related tasks to improve the generalization performance. Even though MTL methods have been extensively studied, there is barely existing work investigating MTL for data with ordinal labels. We tackle multiple ordinal regression problems via sparse and deep multi-task approaches, i.e., two regularized multi-task ordinal regression (RMTOR) models for small datasets and two deep neural networks based multi-task ordinal regression (DMTOR) models for large-scale datasets. The performance of the proposed multi-task ordinal regression models (MTOR) is demonstrated on three real-world medical datasets for multi-stage disease diagnosis. Our experimental results indicate that our proposed MTOR models markedly improve the prediction performance comparing with single-task learning (STL) ordinal regression models.
Generalized linear models perform ordinal regression by fitting a coefficient vector and a set of thresholds, e.g., the ordered logit @cite_46 @cite_23 and the ordered probit @cite_17 @cite_32 . The margin functions are defined based on the cumulative probabilities of the training instances' ordinal labels. Different link functions are then chosen for different models, i.e., the logistic cumulative distribution function (CDF) for the ordered logit and the standard normal CDF for the ordered probit. Finally, the maximum likelihood principle is used for training.
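A sketch of the corresponding negative log-likelihood, assuming integer labels 0..K-1 and using SciPy's CDFs, is shown below; minimizing it with a generic optimizer (e.g. scipy.optimize.minimize) would give the maximum-likelihood fit. The function name and parameter packing are illustrative.

```python
import numpy as np
from scipy.stats import logistic, norm

def cumulative_link_nll(params, X, y, n_classes, link='logit'):
    """Negative log-likelihood of a cumulative link model:
    P(y <= k | x) = F(theta_k - w.x), with F the logistic CDF (ordered logit)
    or the standard normal CDF (ordered probit). `params` packs the weight
    vector w followed by the K-1 thresholds theta; y holds integers 0..K-1."""
    d = X.shape[1]
    w = params[:d]
    theta = np.sort(params[d:d + n_classes - 1])  # keep thresholds ordered
    F = logistic.cdf if link == 'logit' else norm.cdf
    eta = X @ w
    # cumulative probabilities with boundary terms P(y <= -1) = 0, P(y <= K-1) = 1
    cum = np.column_stack([np.zeros(len(X))] +
                          [F(t - eta) for t in theta] +
                          [np.ones(len(X))])
    prob_y = cum[np.arange(len(y)), y + 1] - cum[np.arange(len(y)), y]
    return -np.sum(np.log(np.clip(prob_y, 1e-12, None)))
```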
{ "cite_N": [ "@cite_46", "@cite_32", "@cite_23", "@cite_17" ], "mid": [ "2162086569", "2012838480", "2092408046", "2049329064" ], "abstract": [ "This article describes the gologit2 program for generalized ordered logit models. gologit2 is inspired by Vincent Fu’s gologit routine (Stata Technical Bulletin Reprints 8: 160–164) and is backward compatible with it but offers several additional powerful options. A major strength of gologit2 is that it can fit three special cases of the generalized model: the proportional odds parallel-lines model, the partial proportional odds model, and the logistic regression model. Hence, gologit2 can fit models that are less restrictive than the parallel-lines models fitted by ologit (whose assumptions are often violated) but more parsimonious and interpretable than those fitted by a nonordinal method, such as multinomial logistic regression (i.e., mlogit). Other key advantages of gologit2 include support for linear constraints, survey data estimation, and the computation of estimated probabilities via the predict command.", "There have been many studies that have documented the application of crash severity models to explore the relationship between accident severity and its contributing factors. Although a large amount of work has been done on different types of models, no research has been conducted about quantifying the sample size requirements for crash severity modeling. Similar to count data models, small data sets could significantly influence model performance. The objective of this study is therefore to examine the effects of sample size on the three most commonly used crash severity models: multinomial logit, ordered probit and mixed logit models. The study objective is accomplished via a Monte-Carlo approach using simulated and observed crash data. The results of this study are consistent with prior expectations in that small sample sizes significantly affect the development of crash severity models, no matter which type is used. Furthermore, among the three models, the mixed logit model requires the largest sample size, while the ordered probit model requires the lowest sample size. The sample size requirement for the multinomial logit model is located between these two models.", "The paper re-examines existing estimators for the panel data fixed effects ordered logit model, proposes a new one, and studies the sampling properties of these estimators in a series of Monte Carlo simulations. There are two main findings. First, we show that some of the estimators used in the literature are inconsistent, and provide reasons for the inconsistency. Second, the new estimator is never outperformed by the others, seems to be substantially more immune to small sample bias than other consistent estimators, and is easy to implement. The empirical relevance is illustrated in an application to the effect of unemployment on life satisfaction.", "This paper describes the use of ordered probit models to examine the risk of different injury levels sustained under all crash types, two-vehicle crashes, and single-vehicle crashes. The results suggest that pickups and sport utility vehicles are less safe than passenger cars under single-vehicle crash conditions. In two-vehicle crashes, however, these vehicle types are associated with less severe injuries for their drivers and more severe injuries for occupants of their collision partners. 
Other conclusions also are presented; for example, the results indicate that males and younger drivers in newer vehicles at lower speeds sustain less severe injuries." ] }
1907.12508
2964542684
Many real-world datasets are labeled with natural orders, i.e., ordinal labels. Ordinal regression is a method to predict ordinal labels that finds a wide range of applications in data-rich science domains, such as medical, social and economic sciences. Most existing approaches work well for a single ordinal regression task. However, they ignore the task relatedness when there are multiple related tasks. Multi-task learning (MTL) provides a framework to encode task relatedness, to bridge data from all tasks, and to simultaneously learn multiple related tasks to improve the generalization performance. Even though MTL methods have been extensively studied, there is barely existing work investigating MTL for data with ordinal labels. We tackle multiple ordinal regression problems via sparse and deep multi-task approaches, i.e., two regularized multi-task ordinal regression (RMTOR) models for small datasets and two deep neural networks based multi-task ordinal regression (DMTOR) models for large-scale datasets. The performance of the proposed multi-task ordinal regression models (MTOR) is demonstrated on three real-world medical datasets for multi-stage disease diagnosis. Our experimental results indicate that our proposed MTOR models markedly improve the prediction performance comparing with single-task learning (STL) ordinal regression models.
Recently, MTL has been combined with many deep learning approaches @cite_10 . MTL can be implemented in DNN-based approaches in two ways, i.e., soft and hard parameter sharing of hidden layers. In soft parameter sharing, the tasks do not share representation layers; instead, the distances between their task-specific representation layers are constrained to encourage the parameters to be similar @cite_10 , e.g., @cite_20 and @cite_21 use the @math -norm and the trace norm, respectively.
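A minimal PyTorch sketch of soft parameter sharing with a squared Euclidean (norm-based) penalty between corresponding parameters, in the spirit of the regularization of @cite_20 , might look as follows; the class and method names are illustrative, and the trace-norm variant of @cite_21 would replace the penalty.

```python
import torch
import torch.nn as nn

class SoftSharedNets(nn.Module):
    """Soft parameter sharing sketch: each task keeps its own tower, and a
    distance penalty between corresponding weights keeps them similar."""
    def __init__(self, in_dim, hidden, n_tasks):
        super().__init__()
        self.towers = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_tasks)
        ])

    def forward(self, x, task):
        return self.towers[task](x)

    def sharing_penalty(self):
        # squared L2 distance between each tower's parameters and tower 0's
        ref = list(self.towers[0].parameters())
        penalty = torch.tensor(0.0)
        for tower in self.towers[1:]:
            for p, q in zip(tower.parameters(), ref):
                penalty = penalty + torch.sum((p - q) ** 2)
        return penalty

# per-batch loss for task t: task_loss + lam * model.sharing_penalty()
```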
{ "cite_N": [ "@cite_21", "@cite_10", "@cite_20" ], "mid": [ "2432541215", "2624871570", "2251324968" ], "abstract": [ "We propose a framework for training multiple neural networks simultaneously. The parameters from all models are regularised by the tensor trace norm, so that each neural network is encouraged to reuse others' parameters if possible -- this is the main motivation behind multi-task learning. In contrast to many deep multi-task learning models, we do not predefine a parameter sharing strategy by specifying which layers have tied parameters. Instead, our framework considers sharing for all shareable layers, and the sharing strategy is learned in a data-driven way.", "Multi-task learning (MTL) has led to successes in many applications of machine learning, from natural language processing and speech recognition to computer vision and drug discovery. This article aims to give a general overview of MTL, particularly in deep neural networks. It introduces the two most common methods for MTL in Deep Learning, gives an overview of the literature, and discusses recent advances. In particular, it seeks to help ML practitioners apply MTL by shedding light on how MTL works and providing guidelines for choosing appropriate auxiliary tasks.", "Training a high-accuracy dependency parser requires a large treebank. However, these are costly and time-consuming to build. We propose a learning method that needs less data, based on the observation that there are underlying shared structures across languages. We exploit cues from a different source language in order to guide the learning process. Our model saves at least half of the annotation effort to reach the same accuracy compared with using the purely supervised method." ] }
1907.12508
2964542684
Many real-world datasets are labeled with natural orders, i.e., ordinal labels. Ordinal regression is a method to predict ordinal labels that finds a wide range of applications in data-rich science domains, such as medical, social and economic sciences. Most existing approaches work well for a single ordinal regression task. However, they ignore the task relatedness when there are multiple related tasks. Multi-task learning (MTL) provides a framework to encode task relatedness, to bridge data from all tasks, and to simultaneously learn multiple related tasks to improve the generalization performance. Even though MTL methods have been extensively studied, there is barely existing work investigating MTL for data with ordinal labels. We tackle multiple ordinal regression problems via sparse and deep multi-task approaches, i.e., two regularized multi-task ordinal regression (RMTOR) models for small datasets and two deep neural networks based multi-task ordinal regression (DMTOR) models for large-scale datasets. The performance of the proposed multi-task ordinal regression models (MTOR) is demonstrated on three real-world medical datasets for multi-stage disease diagnosis. Our experimental results indicate that our proposed MTOR models markedly improve the prediction performance comparing with single-task learning (STL) ordinal regression models.
Hard parameter sharing is the most commonly used approach in DNN-based MTL @cite_10 . In hard parameter sharing, all tasks share the representation layers to reduce the risk of overfitting @cite_24 , while keeping some task-specific layers to preserve the characteristics of each task @cite_40 . In this paper, we use hard parameter sharing for DMTOR. In all the aforementioned methods and other related works, the learning tasks are either classification or standard regression tasks. Here, in this paper, the learning tasks are multiple ordinal regression problems. We propose a set of novel MTOR models in Section and Section to solve multiple multi-ordered classification problems simultaneously. Moreover, in Section , multi-stage disease diagnosis is handled in the experiments using the proposed MTOR models, i.e., the RMTOR and DMTOR models.
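For comparison, a minimal PyTorch sketch of hard parameter sharing (a shared trunk plus task-specific heads) is given below; it only illustrates the layout, not the actual DMTOR architecture or its ordinal output layer, and the class name is ours.

```python
import torch.nn as nn

class HardSharedNet(nn.Module):
    """Hard parameter sharing sketch: all tasks pass through the same
    representation layers, followed by small task-specific heads."""
    def __init__(self, in_dim, hidden, n_tasks, out_dim=1):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList([nn.Linear(hidden, out_dim) for _ in range(n_tasks)])

    def forward(self, x, task):
        # shared representation, then the head belonging to the requested task
        return self.heads[task](self.shared(x))
```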
{ "cite_N": [ "@cite_24", "@cite_40", "@cite_10" ], "mid": [ "2143419558", "2951657494", "2624871570" ], "abstract": [ "A Bayesian model of learning to learn by sampling from multiple tasks is presented. The multiple tasks are themselves generated by sampling from a distribution over an environment of related tasks. Such an environment is shown to be naturally modelled within a Bayesian context by the concept of an objective prior distribution. It is argued that for many common machine learning problems, although in general we do not know the true (objective) prior for the problem, we do have some idea of a set of possible priors to which the true prior belongs. It is shown that under these circumstances a learner can use Bayesian inference to learn the true prior by learning sufficiently many tasks from the environment. In addition, bounds are given on the amount of information required to learn a task when it is simultaneously learnt with several other tasks. The bounds show that if the learner has little knowledge of the true prior, but the dimensionality of the true prior is small, then sampling multiple tasks is highly advantageous. The theory is applied to the problem of learning a common feature set or equivalently a low-dimensional-representation (LDR) for an environment of related tasks.", "Multi-task learning aims to improve generalization performance of multiple prediction tasks by appropriately sharing relevant information across them. In the context of deep neural networks, this idea is often realized by hand-designed network architectures with layers that are shared across tasks and branches that encode task-specific features. However, the space of possible multi-task deep architectures is combinatorially large and often the final architecture is arrived at by manual exploration of this space subject to designer's bias, which can be both error-prone and tedious. In this work, we propose a principled approach for designing compact multi-task deep learning architectures. Our approach starts with a thin network and dynamically widens it in a greedy manner during training using a novel criterion that promotes grouping of similar tasks together. Our Extensive evaluation on person attributes classification tasks involving facial and clothing attributes suggests that the models produced by the proposed method are fast, compact and can closely match or exceed the state-of-the-art accuracy from strong baselines by much more expensive models.", "Multi-task learning (MTL) has led to successes in many applications of machine learning, from natural language processing and speech recognition to computer vision and drug discovery. This article aims to give a general overview of MTL, particularly in deep neural networks. It introduces the two most common methods for MTL in Deep Learning, gives an overview of the literature, and discusses recent advances. In particular, it seeks to help ML practitioners apply MTL by shedding light on how MTL works and providing guidelines for choosing appropriate auxiliary tasks." ] }
1907.12363
2966207284
Training models on highly unbalanced data is admitted to be a challenging task for machine learning algorithms. Current studies on deep learning mainly focus on data sets with balanced class labels, or unbalanced data but with massive amount of samples available, like in speech recognition. However, the capacities of deep learning on imbalanced data with little samples is not deeply investigated in literature, while it is a very common application context, in numerous industries. To contribute to fill this gap, this paper compares the performances of several popular machine learning algorithms previously applied with success to unbalanced data set with deep learning algorithms. We conduct those experiments on an highly unbalanced data set, used for credit scoring. We evaluate various configuration including neural network optimisation techniques and try to determine their capacities when they operate with imbalanced corpora.
In their review @cite_2 of methods and applications for learning on unbalanced data, the authors cite taxonomy, chemical engineering, financial management, information technology, energy management and security management as domains in which the topic has been extensively explored. Most of the related experiments have shown, for years, that decision trees are the best-performing ML algorithms on unbalanced data sets.
{ "cite_N": [ "@cite_2" ], "mid": [ "2562319768" ], "abstract": [ "527 articles related to imbalanced data and rare events are reviewed.Viewing reviewed papers from both technical and practical perspectives.Summarizing existing methods and corresponding statistics by a new taxonomy idea.Categorizing 162 application papers into 13 domains and giving introduction.Some opening questions are discussed at the end of this manuscript. Rare events, especially those that could potentially negatively impact society, often require humans decision-making responses. Detecting rare events can be viewed as a prediction task in data mining and machine learning communities. As these events are rarely observed in daily life, the prediction task suffers from a lack of balanced data. In this paper, we provide an in depth review of rare event detection from an imbalanced learning perspective. Five hundred and seventeen related papers that have been published in the past decade were collected for the study. The initial statistics suggested that rare events detection and imbalanced learning are concerned across a wide range of research areas from management science to engineering. We reviewed all collected papers from both a technical and a practical point of view. Modeling methods discussed include techniques such as data preprocessing, classification algorithms and model evaluation. For applications, we first provide a comprehensive taxonomy of the existing application domains of imbalanced learning, and then we detail the applications for each category. Finally, some suggestions from the reviewed papers are incorporated with our experiences and judgments to offer further research directions for the imbalanced learning and rare event detection fields." ] }
1907.12363
2966207284
Training models on highly unbalanced data is acknowledged to be a challenging task for machine learning algorithms. Current studies on deep learning mainly focus on data sets with balanced class labels, or on unbalanced data with a massive amount of samples available, as in speech recognition. However, the capacity of deep learning on imbalanced data with few samples has not been deeply investigated in the literature, although it is a very common application context in numerous industries. To contribute to filling this gap, this paper compares the performance of several popular machine learning algorithms, previously applied with success to unbalanced data sets, with deep learning algorithms. We conduct these experiments on a highly unbalanced data set used for credit scoring. We evaluate various configurations, including neural network optimisation techniques, and try to determine their capacities when they operate on imbalanced corpora.
In @cite_5, the authors examine how several popular decision tree splitting criteria – information gain, the Gini measure, and DKM – perform when used to build decision trees, and how the choice of criterion improves the performance of tree construction methods applied to unbalanced data.
{ "cite_N": [ "@cite_5" ], "mid": [ "1831729862" ], "abstract": [ "Learning from unbalanced datasets presents a convoluted problem in which traditional learning algorithms may perform poorly. The objective functions used for learning the classifiers typically tend to favor the larger, less important classes in such problems. This paper compares the performance of several popular decision tree splitting criteria --- information gain, Gini measure, and DKM --- and identifies a new skew insensitive measure in Hellinger distance. We outline the strengths of Hellinger distance in class imbalance, proposes its application in forming decision trees, and performs a comprehensive comparative analysis between each decision tree construction method. In addition, we consider the performance of each tree within a powerful sampling wrapper framework to capture the interaction of the splitting metric and sampling. We evaluate over this wide range of datasets and determine which operate best under class imbalance." ] }
1907.12363
2966207284
Training models on highly unbalanced data is acknowledged to be a challenging task for machine learning algorithms. Current studies on deep learning mainly focus on data sets with balanced class labels, or on unbalanced data with a massive amount of samples available, as in speech recognition. However, the capacity of deep learning on imbalanced data with few samples has not been deeply investigated in the literature, although it is a very common application context in numerous industries. To contribute to filling this gap, this paper compares the performance of several popular machine learning algorithms, previously applied with success to unbalanced data sets, with deep learning algorithms. We conduct these experiments on a highly unbalanced data set used for credit scoring. We evaluate various configurations, including neural network optimisation techniques, and try to determine their capacities when they operate on imbalanced corpora.
The authors of @cite_6 discuss the use of 108 different classification models to determine the most suitable model for dealing with the imbalanced data issue arising from a biomedical literature corpus; they achieve the most satisfactory results with an LMT decision tree classifier, previously used with success by @cite_1 in an unbalanced multi-label classification task related to taxonomy.
{ "cite_N": [ "@cite_1", "@cite_6" ], "mid": [ "2071049913", "2062707524" ], "abstract": [ "Numerous initiatives have allowed users to share knowledge or opinions using collaborative platforms. In most cases, the users provide a textual description of their knowledge, following very limited or no constraints. Here, we tackle the classification of documents written in such an environment. As a use case, our study is made in the context of text mining evaluation campaign material, related to the classification of cooking recipes tagged by users from a collaborative website. This context makes some of the corpus specificities difficult to model for machine-learning-based systems and keyword or lexical-based systems. In particular, different authors might have different opinions on how to classify a given document. The systems presented hereafter were submitted to the D´Efi Fouille de Textes 2013 evaluation campaign, where they obtained the best overall results, ranking first on task 1 and second on task 2. In this paper, we explain our approach for building relevant and effective systems dealing with such a corpus.", "This paper presents a machine learning system for supporting the first task of the biological literature manual curation process, called triage. We compare the performance of various classification models, by experimenting with dataset sampling factors and a set of features, as well as three different machine learning algorithms (Naive Bayes, Support Vector Machine and Logistic Model Trees). The results show that the most fitting model to handle the imbalanced datasets of the triage classification task is obtained by using domain relevant features, an under-sampling technique, and the Logistic Model Trees algorithm." ] }
1907.12363
2966207284
Training models on highly unbalanced data is acknowledged to be a challenging task for machine learning algorithms. Current studies on deep learning mainly focus on data sets with balanced class labels, or on unbalanced data with a massive amount of samples available, as in speech recognition. However, the capacity of deep learning on imbalanced data with few samples has not been deeply investigated in the literature, although it is a very common application context in numerous industries. To contribute to filling this gap, this paper compares the performance of several popular machine learning algorithms, previously applied with success to unbalanced data sets, with deep learning algorithms. We conduct these experiments on a highly unbalanced data set used for credit scoring. We evaluate various configurations, including neural network optimisation techniques, and try to determine their capacities when they operate on imbalanced corpora.
In the domain of credit applications, the authors of @cite_11 compare several techniques that can be used in the analysis of imbalanced credit scoring data sets. The results of this empirical study indicate that random forest and gradient boosting classifiers perform very well in a credit scoring context and are able to cope comparatively well with the pronounced class imbalances in these data sets.
{ "cite_N": [ "@cite_11" ], "mid": [ "2052611008" ], "abstract": [ "In this paper, we set out to compare several techniques that can be used in the analysis of imbalanced credit scoring data sets. In a credit scoring context, imbalanced data sets frequently occur as the number of defaulting loans in a portfolio is usually much lower than the number of observations that do not default. As well as using traditional classification techniques such as logistic regression, neural networks and decision trees, this paper will also explore the suitability of gradient boosting, least square support vector machines and random forests for loan default prediction.Five real-world credit scoring data sets are used to build classifiers and test their performance. In our experiments, we progressively increase class imbalance in each of these data sets by randomly under-sampling the minority class of defaulters, so as to identify to what extent the predictive power of the respective techniques is adversely affected. The performance criterion chosen to measure this effect is the area under the receiver operating characteristic curve (AUC); Friedman's statistic and Nemenyi post hoc tests are used to test for significance of AUC differences between techniques.The results from this empirical study indicate that the random forest and gradient boosting classifiers perform very well in a credit scoring context and are able to cope comparatively well with pronounced class imbalances in these data sets. We also found that, when faced with a large class imbalance, the C4.5 decision tree algorithm, quadratic discriminant analysis and k-nearest neighbours perform significantly worse than the best performing classifiers." ] }