Fields: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
1908.08705
2969664989
In this paper we propose a novel, easily reproducible technique to attack the best publicly available Face ID system, ArcFace, in different shooting conditions. To create an attack, we print a rectangular paper sticker on a common color printer and put it on a hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image, which imitates the sticker's location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2, and is transferable to other Face ID models.
Another interesting approach @cite_14 uses the concept of nested adversarial examples, in which separate, non-overlapping adversarial perturbations are generated for close and far viewing distances. This attack targets Faster R-CNN and YOLOv3 @cite_38 .
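To make the nested idea concrete, here is a minimal PyTorch-style sketch (illustrative only, not the code of @cite_14): a single patch is split into a non-overlapping inner region and outer ring, and an expectation-over-scale loop optimizes the inner region against small (far) renderings and the outer ring against large (close) ones. The `detection_score` stub is an assumption standing in for a real detector's confidence.

```python
import torch
import torch.nn.functional as F

def detection_score(image: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for a detector's confidence on the patched
    # scene (e.g., a YOLOv3 objectness/class score); assumed, not real.
    return image.mean()

patch = torch.rand(3, 100, 100, requires_grad=True)            # full patch
inner = torch.zeros(1, 100, 100); inner[:, 30:70, 30:70] = 1.0  # far-range core
outer = 1.0 - inner                                             # close-range ring
opt = torch.optim.Adam([patch], lr=0.01)

for step in range(200):
    loss = 0.0
    # Expectation over scale: small renderings train the inner region,
    # large renderings the outer ring (non-overlapping by construction).
    for scale, mask in [(0.2, inner), (1.0, outer)]:
        size = max(8, int(100 * scale))
        view = F.interpolate((patch * mask).unsqueeze(0), size=(size, size),
                             mode="bilinear", align_corners=False)
        loss = loss + detection_score(view)    # hiding attack: minimize score
    opt.zero_grad()
    loss.backward()
    opt.step()
    patch.data.clamp_(0, 1)                    # keep pixels printable
```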
{ "cite_N": [ "@cite_38", "@cite_14" ], "mid": [ "2783784437", "2963297642", "2951954400", "2906586812" ], "abstract": [ "Recent studies show that widely used deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the @math distance for penalizing perturbations. Researchers have explored different defense methods to defend against such adversarial attacks. While the effectiveness of @math distance as a metric of perceptual quality remains an active research area, in this paper we will instead focus on a different type of perturbation, namely spatial transformation, as opposed to manipulating the pixel values directly as in prior works. Perturbations generated through spatial transformation could result in large @math distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems. This potentially provides a new direction in adversarial example generation and the design of corresponding defenses. We visualize the spatial transformation based perturbation for different examples and show that our technique can produce realistic adversarial examples with smooth image deformation. Finally, we visualize the attention of deep networks with different types of adversarial examples to better understand how these examples are interpreted.", "Recent studies show that widely used Deep neural networks (DNNs) are vulnerable to the carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the L_p distance for penalizing perturbations. Different defense methods have also been explored to defend against such adversarial attacks. While the effectiveness of L_p distance as a metric of perceptual quality remains an active research area, in this paper we will instead focus on a different type of perturbation, namely spatial transformation, as opposed to manipulating the pixel values directly as in prior works. Perturbations generated through spatial transformation could result in large L_p distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems. This potentially provides a new direction in adversarial example generation and the design of corresponding defenses. We visualize the spatial transformation based perturbation for different examples and show that our technique can produce realistic adversarial examples with smooth image deformation. Finally, we visualize the attention of deep networks with different types of adversarial examples to better understand how these examples are interpreted.", "When generating adversarial examples to attack deep neural networks (DNNs), Lp norm of the added perturbation is usually used to measure the similarity between original image and adversarial example. However, such adversarial attacks perturbing the raw input spaces may fail to capture structural information hidden in the input. This work develops a more general attack model, i.e., the structured attack (StrAttack), which explores group sparsity in adversarial perturbations by sliding a mask through images aiming for extracting key spatial structures. 
An ADMM (alternating direction method of multipliers)-based framework is proposed that can split the original problem into a sequence of analytically solvable subproblems and can be generalized to implement other attacking methods. Strong group sparsity is achieved in adversarial perturbations even with the same level of Lp norm distortion as the state-of-the-art attacks. We demonstrate the effectiveness of StrAttack by extensive experimental results on MNIST, CIFAR-10, and ImageNet. We also show that StrAttack provides better interpretability (i.e., better correspondence with discriminative image regions) through adversarial saliency map (, 2016b) and class activation map (, 2016).", "In this paper, we proposed the first practical adversarial attacks against object detectors in realistic situations: the adversarial examples are placed at different angles and distances, especially at long distances (over 20m) and wide angles (120 degrees). To improve the robustness of adversarial examples, we proposed the nested adversarial examples and introduced image transformation techniques. Transformation methods aim to simulate the variance factors such as distances, angles, illuminations, etc., in the physical world. Two kinds of attacks were implemented on YOLO V3, a state-of-the-art real-time object detector: a hiding attack that fools the detector into failing to recognize the object, and an appearing attack that fools the detector into recognizing a non-existent object. The adversarial examples are evaluated in three environments: indoor lab, outdoor environment, and the real road, and are demonstrated to achieve a success rate of up to 92.4% over the distance range from 1m to 25m. In particular, real-road testing of the hiding attack on a straight road and a crossing road produced success rates of 75% and 64% respectively, and the appearing attack obtained success rates of 63% and 81% respectively, which, we believe, should catch the attention of the autonomous driving community." ] }
A few works are devoted to more complex approaches. One of them @cite_29 combines EOT, the non-printability score (NPS), and a total-variation (TV) loss to fool a YOLOv2-based person detector. Another @cite_2 fools a Face ID system with adversarial generative nets (a variant of GANs @cite_18 ), in which the generator produces a perturbation of the eyeglasses frame.
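For readers unfamiliar with those loss terms, the sketch below shows one plausible way they combine (a hedged illustration, not the implementation of @cite_29): the patch is optimized against the expected detector score over random transformations (EOT), plus a non-printability score (NPS) that pulls pixels toward a printer palette and a total-variation (TV) term that keeps the patch smooth. The `person_score` callable and the palette are assumptions.

```python
import torch

def total_variation(patch):
    # Penalizes abrupt pixel changes so the printed patch stays smooth.
    dh = (patch[:, 1:, :] - patch[:, :-1, :]).abs().mean()
    dw = (patch[:, :, 1:] - patch[:, :, :-1]).abs().mean()
    return dh + dw

def non_printability_score(patch, palette):
    # Mean distance of each patch pixel to its nearest printable color.
    pixels = patch.reshape(3, -1).t()          # (N, 3)
    dists = torch.cdist(pixels, palette)       # (N, |palette|)
    return dists.min(dim=1).values.mean()

def eot_detector_loss(patch, transforms, person_score):
    # Expectation over transformation: average the detector's person score
    # across random placements/lightings so the attack survives reality.
    return torch.stack([person_score(t(patch)) for t in transforms]).mean()

# Total objective, minimized w.r.t. the patch pixels (weights are assumed):
# loss = eot_detector_loss(...) + alpha * non_printability_score(...)
#        + beta * total_variation(patch)
```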
{ "cite_N": [ "@cite_29", "@cite_18", "@cite_2" ], "mid": [ "2797328537", "2884519271", "2890883923", "2782017896" ], "abstract": [ "Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we tackle the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. Our approach can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.", "Deep neural networks (DNNs) are vulnerable to adversarial examples-maliciously crafted inputs that cause DNNs to make incorrect predictions. Recent work has shown that these attacks generalize to the physical domain, to create perturbations on physical objects that fool image classifiers under a variety of real-world conditions. Such attacks pose a risk to deep learning models used in safety-critical cyber-physical systems. In this work, we extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene. Improving upon a previous physical attack on image classifiers, we create perturbed physical objects that are either ignored or mislabeled by object detection models. We implement a Disappearance Attack, in which we cause a Stop sign to \"disappear\" according to the detector-either by covering thesign with an adversarial Stop sign poster, or by adding adversarial stickers onto the sign. In a video recorded in a controlled lab environment, the state-of-the-art YOLOv2 detector failed to recognize these adversarial Stop signs in over 85 of the video frames. In an outdoor experiment, YOLO was fooled by the poster and sticker attacks in 72.5 and 63.5 of the video frames respectively. We also use Faster R-CNN, a different object detection model, to demonstrate the transferability of our adversarial perturbations. The created poster perturbation is able to fool Faster R-CNN in 85.9 of the video frames in a controlled lab environment, and 40.2 of the video frames in an outdoor environment. Finally, we present preliminary results with a new Creation Attack, where in innocuous physical stickers fool a model into detecting nonexistent objects.", "Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. 
In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems. Code related to this paper is available at: https://github.com/shangtse/robust-physical-attack.", "In this paper we show that misclassification attacks against face-recognition systems based on deep neural networks (DNNs) are more dangerous than previously demonstrated, even in contexts where the adversary can manipulate only her physical appearance (versus directly manipulating the image input to the DNN). Specifically, we show how to create eyeglasses that, when worn, can succeed in targeted (impersonation) or untargeted (dodging) attacks while improving on previous work in one or more of three facets: (i) inconspicuousness to onlooking observers, which we test through a user study; (ii) robustness of the attack against proposed defenses; and (iii) scalability in the sense of decoupling eyeglass creation from the subject who will wear them, i.e., by creating "universal" sets of eyeglasses that facilitate misclassification. Central to these improvements are adversarial generative nets, a method we propose to generate physically realizable attack artifacts (here, eyeglasses) automatically." ] }
1908.08654
2969398112
For decades, the join operator over fast data streams has drawn much attention from the database community, due to its wide spectrum of real-world applications, such as online clustering, intrusion detection, sensor data monitoring, and so on. Existing works usually assume that the underlying streams to be joined are complete (without any missing values). However, this assumption may not always hold, since objects from streams may contain missing attributes, due to various reasons such as packet losses, network congestion/failure, and so on. In this paper, we formalize an important problem, namely join over incomplete data streams (Join-iDS), which retrieves joining object pairs from incomplete data streams with high confidence. We tackle the Join-iDS problem in the style of "data imputation and query processing at the same time". To enable this style, we design an effective and efficient cost-model-based imputation method via differential dependency (DD), devise effective pruning strategies to reduce the Join-iDS search space, and propose efficient algorithms via our proposed cost-model-based data synopsis indexes. Extensive experiments have been conducted to verify the efficiency and effectiveness of our proposed Join-iDS approach on both real and synthetic data sets.
Stream Processing. Many important problems have been studied over data streams, including event detection @cite_35 , outlier detection @cite_16 , top- @math query @cite_37 , join @cite_38 @cite_22 , skyline query @cite_12 , nearest neighbor query @cite_33 , aggregate query @cite_17 , and so on. These works usually assume that stream data are either certain or uncertain. To the best of our knowledge, they cannot be directly applied to our Join-iDS problem under the semantics of incomplete data streams.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_38", "@cite_22", "@cite_33", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "2005044405", "2129553531", "2049040861", "2425269155" ], "abstract": [ "Join processing in the streaming environment has many practical applications such as data cleaning and outlier detection. Due to the inherent uncertainty in the real-world data, it has become an increasingly important problem to consider the join processing on uncertain data streams, where the incoming data at each timestamp are uncertain and imprecise. Different from the static databases, processing uncertain data streams has its own requirements such as the limited memory, small response time, and so on. To tackle the challenges with respect to efficiency and effectiveness, in this paper, we formalize the problem of join on uncertain data streams (USJ), which can guarantee the accuracy of USJ answers over uncertain data, and propose effective pruning methods to filter out false alarms. We integrate the pruning methods into an efficient query procedure for incrementally maintaining USJ answers. Extensive experiments have been conducted to demonstrate the efficiency and effectiveness of our approaches.", "Similarity join processing in the streaming environment has many practical applications such as sensor networks, object tracking and monitoring, and so on. Previous works usually assume that stream processing is conducted over precise data. In this paper, we study an important problem of similarity join processing on stream data that inherently contain uncertainty (or called uncertain data streams), where the incoming data at each time stamp are uncertain and imprecise. Specifically, we formalize this problem as join on uncertain data streams (USJ), which can guarantee the accuracy of USJ answers over uncertain data. To tackle the challenges with respect to efficiency and effectiveness such as limited memory and small response time, we propose effective pruning methods on both object and sample levels to filter out false alarms. We integrate the proposed pruning methods into an efficient query procedure that can incrementally maintain the USJ answers. Most importantly, we further design a novel strategy, namely, adaptive superset prejoin (ASP), to maintain a superset of USJ candidate pairs. ASP is in light of our proposed formal cost model such that the average USJ processing cost is minimized. We have conducted extensive experiments to demonstrate the efficiency and effectiveness of our proposed approaches.", "Efficiently processing continuous k-nearest neighbor queries on data streams is important in many application domains, e. g. for network intrusion detection. Usually not all valid data objects from the stream can be kept in main memory. Therefore, most existing solutions are approximative. In this paper, we propose an efficient method for exact k-NN monitoring. Our method is based on three ideas, (1) selecting exactly those objects from the stream which are able to become the nearest neighbor of one or more continuous queries and storing them in a skyline data structure, (2) delaying to process those objects which are not immediately nearest neighbors of any query, and (3) indexing the queries rather than the streaming objects. 
In an extensive experimental evaluation we demonstrate that our method is applicable on high throughput data streams requiring only very limited storage.", "Real-time analytics of anomalous phenomena on streaming data typically relies on processing a large variety of continuous outlier detection requests, each configured with different parameter settings. The processing of such complex outlier analytics workloads is resource consuming due to the algorithmic complexity of the outlier mining process. In this work we propose a sharing-aware multi-query execution strategy for outlier detection on data streams called SOP. A key insight of SOP is to transform the problem of handling a multi-query outlier analytics workload into a single-query skyline computation problem. We prove that the output of the skyline computation process corresponds to the minimal information needed for determining the outlier status of any point in the stream. Based on this new formulation, we design a customized skyline algorithm called K-SKY that leverages the domination relationships among the streaming data points to minimize the number of data points that must be evaluated for supporting multi-query outlier detection. Based on this K-SKY algorithm, our SOP solution achieves minimal utilization of both computational and memory resources for the processing of these complex outlier analytics workload. Our experimental study demonstrates that SOP consistently outperforms the state-of-art solutions by three orders of magnitude in CPU time, while only consuming 5 of their memory footprint - a clear win-win. Furthermore, SOP is shown to scale to large workloads composed of thousands of parameterized queries." ] }
Differential Dependency. Differential dependency (DD) @cite_18 is a valuable tool for data imputation @cite_2 , data cleaning @cite_14 , data repairing @cite_39 , and so on. @cite_2 used DDs to fill in the missing attributes of incomplete objects in a static data set, via detected neighbors satisfying the distance constraints on determinant attributes. @cite_39 @cite_23 also explored repairing the labels of graph nodes. @cite_14 cleaned databases by removing inconsistent records that violate DDs. Unlike these works, which target static databases, we apply DD-based imputation in the streaming environment, which makes our Join-iDS problem more challenging.
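As a toy illustration of the idea (hypothetical helper names, not this paper's algorithm): a DD with distance thresholds on determinant attributes X states that tuples close on X should also be close on the dependent attribute Y, so complete tuples satisfying the X-constraints can act as neighbors and supply a missing Y value.

```python
def impute_with_dd(incomplete, repository, X, Y, eps_X):
    # Neighbors: complete tuples within the DD's distance thresholds on X.
    candidates = [r[Y] for r in repository
                  if all(abs(incomplete[a] - r[a]) <= eps_X[a] for a in X)]
    if not candidates:
        return None                        # imputation failure on sparse data
    return sum(candidates) / len(candidates)   # e.g., average over neighbors

# Example: impute 'price' from tuples whose 'date' differs by at most 7 days.
row = {"date": 105, "price": None}
repo = [{"date": 100, "price": 9.0}, {"date": 110, "price": 11.0},
        {"date": 300, "price": 50.0}]
print(impute_with_dd(row, repo, X=["date"], Y="price", eps_X={"date": 7}))  # 10.0
```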
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_39", "@cite_23", "@cite_2" ], "mid": [ "2108132403", "2282784388", "2047745978", "2017509273" ], "abstract": [ "The importance of difference semantics (e.g., “similar” or “dissimilar”) has been recently recognized for declaring dependencies among various types of data, such as numerical values or text values. We propose a novel form of Differential Dependencies (dds), which specifies constraints on difference, called differential functions, instead of identification functions in traditional dependency notations like functional dependencies. Informally, a differential dependency states that if two tuples have distances on attributes X agreeing with a certain differential function, then their distances on attributes Y should also agree with the corresponding differential function on Y. For example, [date(l 7)]→[price( In this article, we first address several theoretical issues of differential dependencies, including formal definitions of dds and differential keys, subsumption order relation of differential functions, implication of dds, closure of a differential function, a sound and complete inference system, and minimal cover for dds. Then, we investigate a practical problem, that is, how to discover dds and differential keys from a given dataset. Due to the intrinsic hardness, we develop several pruning methods to improve the discovery efficiency in practice. Finally, through an extensive experimental evaluation on real datasets, we demonstrate the discovery performance and the effectiveness of dds in several real applications.", "Incomplete information often occur along with many database applications, e.g., in data integration, data cleaning or data exchange. The idea of data imputation is to fill the missing data with the values of its neighbors who share the same information. Such neighbors could either be identified certainly by editing rules or statistically by relational dependency networks. Unfortunately, owing to data sparsity, the number of neighbors (identified w.r.t. value equality) is rather limited, especially in the presence of data values with variances. In this paper, we argue to extensively enrich similarity neighbors by similarity rules with tolerance to small variations. More fillings can thus be acquired that the aforesaid equality neighbors fail to reveal. To fill the missing values more, we study the problem of maximizing the missing data imputation. Our major contributions include (1) the np-hardness analysis on solving and approximating the problem, (2) exact algorithms for tackling the problem, and (3) efficient approximation with performance guarantees. Experiments on real and synthetic data sets demonstrate that the filling accuracy can be improved.", "We study the problem of repairing an inconsistent database that violates a set of functional dependencies by making the smallest possible value modifications. For an inconsistent database, we define an optimum repair as a database that satisfies the functional dependencies, and minimizes, among all repairs, a distance measure that depends on the number of corrections made in the database and the weights of tuples modified. We show that like other versions of the repair problem, checking the existence of a repair within a certain distance of a database is NP-complete. We also show that finding a constant-factor approximation for the optimum repair for any set of functional dependencies is NP-hard. 
Furthermore, there is a small constant and a set of functional dependencies, for which finding an approximate solution for the optimum repair within the factor of that constant is also NP-hard. Then we present an approximation algorithm that for a fixed set of functional dependencies and an arbitrary input inconsistent database, produces a repair whose distance to the database is within a constant factor of the optimum repair distance. We finally show how the approximation algorithm can be used in data cleaning using a recent extension to functional dependencies, called conditional functional dependencies.", "Existing differential privacy (DP) studies mainly consider aggregation on data sets where each entry corresponds to a particular participant to be protected. In many situations, a user may pose a relational algebra query on a database with sensitive data, and desire differentially private aggregation on the result of the query. However, no existing work is able to release such aggregation when the query contains unrestricted join operations. This severely limits the applications of existing DP techniques because many data analysis tasks require unrestricted joins. One example is subgraph counting on a graph. Furthermore, existing methods for differentially private subgraph counting support only edge DP and are subject to very simple subgraphs. Until recent, whether any nontrivial graph statistics can be released with reasonable accuracy for arbitrary kind of input graphs under node DP was still an open problem. In this paper, we propose a novel differentially private mechanism that supports unrestricted joins, to release an approximation of a linear statistic of the result of some positive relational algebra calculation over a sensitive database. The error bound of the approximate answer is roughly proportional to the empirical sensitivity of the query --- a new notion that measures the maximum possible change to the query answer when a participant withdraws its data from the sensitive database. For subgraph counting, our mechanism provides a solution to achieve node DP, for any kind of subgraphs." ] }
Join Over Certain/Uncertain Databases. The join operator was traditionally used in relational databases @cite_15 or over data streams @cite_38 . The join predicate may follow equality semantics between attributes of tuples or data objects. According to the predicate constraints, joins over uncertain databases @cite_11 @cite_22 can be classified into two categories, the probabilistic join query (PJQ) and the probabilistic similarity join (PSJ), which return, with high confidence, pairs of joining objects that are identical or similar (e.g., within @math -distance of each other), respectively.
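For intuition, here is a hedged sketch of PSJ semantics over sample-based uncertain objects (illustrative, not any cited system): each uncertain object is a set of equally likely sample points, and a pair joins if the estimated probability of being within the distance threshold meets a confidence threshold.

```python
import math

def psj(objects_a, objects_b, eps, threshold):
    # Each uncertain object is a list of equally likely sample points.
    result = []
    for i, a in enumerate(objects_a):
        for j, b in enumerate(objects_b):
            hits = sum(1 for p in a for q in b if math.dist(p, q) <= eps)
            if hits / (len(a) * len(b)) >= threshold:   # P[dist <= eps]
                result.append((i, j))
    return result

A = [[(0.0, 0.0), (0.1, 0.0)]]            # one uncertain object, two samples
B = [[(0.05, 0.0), (5.0, 5.0)]]
print(psj(A, B, eps=0.2, threshold=0.5))  # [(0, 0)]
```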
{ "cite_N": [ "@cite_38", "@cite_15", "@cite_22", "@cite_11" ], "mid": [ "2005044405", "2129553531", "1561514023", "2015372451" ], "abstract": [ "Join processing in the streaming environment has many practical applications such as data cleaning and outlier detection. Due to the inherent uncertainty in the real-world data, it has become an increasingly important problem to consider the join processing on uncertain data streams, where the incoming data at each timestamp are uncertain and imprecise. Different from the static databases, processing uncertain data streams has its own requirements such as the limited memory, small response time, and so on. To tackle the challenges with respect to efficiency and effectiveness, in this paper, we formalize the problem of join on uncertain data streams (USJ), which can guarantee the accuracy of USJ answers over uncertain data, and propose effective pruning methods to filter out false alarms. We integrate the pruning methods into an efficient query procedure for incrementally maintaining USJ answers. Extensive experiments have been conducted to demonstrate the efficiency and effectiveness of our approaches.", "Similarity join processing in the streaming environment has many practical applications such as sensor networks, object tracking and monitoring, and so on. Previous works usually assume that stream processing is conducted over precise data. In this paper, we study an important problem of similarity join processing on stream data that inherently contain uncertainty (or called uncertain data streams), where the incoming data at each time stamp are uncertain and imprecise. Specifically, we formalize this problem as join on uncertain data streams (USJ), which can guarantee the accuracy of USJ answers over uncertain data. To tackle the challenges with respect to efficiency and effectiveness such as limited memory and small response time, we propose effective pruning methods on both object and sample levels to filter out false alarms. We integrate the proposed pruning methods into an efficient query procedure that can incrementally maintain the USJ answers. Most importantly, we further design a novel strategy, namely, adaptive superset prejoin (ASP), to maintain a superset of USJ candidate pairs. ASP is in light of our proposed formal cost model such that the average USJ processing cost is minimized. We have conducted extensive experiments to demonstrate the efficiency and effectiveness of our proposed approaches.", "An important database primitive for commonly used feature databases is the similarity join. It combines two datasets based on some similarity predicate into one set such that the new set contains pairs of objects of the two original sets. In many different application areas, e.g. sensor databases, location based services or face recognition systems, distances between objects have to be computed based on vague and uncertain data. In this paper, we propose to express the similarity between two uncertain objects by probability density functions which assign a probability value to each possible distance value. By integrating these probabilistic distance functions directly into the join algorithms the full information provided by these functions is exploited. The resulting probabilistic similarity join assigns to each object pair a probability value indicating the likelihood that the object pair belongs to the result set. 
As the computation of these probability values is very expensive, we introduce an efficient join processing strategy exemplarily for the distance-range join. In a detailed experimental evaluation, we demonstrate the benefits of our probabilistic similarity join. The experiments show that we can achieve high quality join results with rather low computational cost.", "The join operation is one of the fundamental relational database query operations. It facilitates the retrieval of information from two different relations based on a Cartesian product of the two relations. The join is one of the most diffidult operations to implement efficiently, as no predefined links between relations are required to exist (as they are with network and hierarchical systems). The join is the only relational algebra operation that allows the combining of related tuples from relations on different attribute schemes. Since it is executed frequently and is expensive, much research effort has been applied to the optimization of join processing. In this paper, the different kinds of joins and the various implementation techniques are surveyed. These different methods are classified based on how they partition tuples from different relations. Some require that all tuples from one be compared to all tuples from another; other algorithms only compare some tuples from each. In addition, some techniques perform an explicit partitioning, whereas others are implicit." ] }
PSJ has received much attention in many domains. @cite_19 applied PSJ to integrate heterogeneous RDF graphs by introducing an equivalence semantics for RDF graphs. @cite_31 proposed an effective filter-based method for high-dimensional vector similarity joins. @cite_30 explored how to leverage relations between sets to process exact set similarity joins. @cite_5 proposed a prefix-tree index for joining multi-attribute data. @cite_40 applied PSJ to trajectory similarity joins in spatial networks via search-space pruning techniques. @cite_9 proposed a join approach for massive high-dimensional data, based on a particular ordering of data points induced by a grid. Different from @cite_9 , which uses grid cells to order data points, we design a grid variant, the @math -grid, which stores additional information (e.g., queues of imputed objects) specific to incomplete data streams and supports dynamic maintenance of candidate join answers.
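The pruning role a grid plays here can be shown with a minimal 2D sketch (a toy version, not the @math -grid itself, which additionally keeps per-cell queues of imputed objects): with cell width equal to the join radius, only the same and adjacent cells can hold join partners, so candidate generation never scans the whole window.

```python
from collections import defaultdict

class Grid:
    def __init__(self, eps):
        self.eps = eps
        self.cells = defaultdict(list)          # cell id -> buffered points

    def _cell(self, p):
        return tuple(int(c // self.eps) for c in p)

    def insert(self, p):
        # Only the same or adjacent cells can contain points within eps.
        cx, cy = self._cell(p)
        cands = [q for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 for q in self.cells[(cx + dx, cy + dy)]]
        self.cells[(cx, cy)].append(p)
        return cands                            # candidate join partners

g = Grid(eps=1.0)
g.insert((0.2, 0.2))
print(g.insert((0.8, 0.4)))   # [(0.2, 0.2)] -- same cell, so a candidate
```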
{ "cite_N": [ "@cite_30", "@cite_9", "@cite_19", "@cite_40", "@cite_5", "@cite_31" ], "mid": [ "2138646997", "2129553531", "2200550062", "2116440837" ], "abstract": [ "Similarity join (SJ) in time-series databases has a wide spectrum of applications such as data cleaning and mining. Specifically, an SJ query retrieves all pairs of (sub)sequences from two time-series databases that epsiv-match with each other, where epsiv is the matching threshold. Previous work on this problem usually considers static time-series databases, where queries are performed either on disk-based multidimensional indexes built on static data or by nested loop join (NLJ) without indexes. SJ over multiple stream time series, which continuously outputs pairs of similar subsequences from stream time series, strongly requires low memory consumption, low processing cost, and query procedures that are themselves adaptive to time-varying stream data. These requirements invalidate the existing approaches in static databases. In this paper, we propose an efficient and effective approach to perform SJ among multiple stream time series incrementally. In particular, we present a novel method, Adaptive Radius-based Search (ARES), which can answer the similarity search without false dismissals and is seamlessly integrated into SJ processing. Most importantly, we provide a formal cost model for ARES, based on which ARES can be adaptive to data characteristics, achieving the minimum number of refined candidate pairs, and thus, suitable for stream processing. Furthermore, in light of the cost model, we utilize space-efficient synopses that are constructed for stream time series to further reduce the candidate set. Extensive experiments demonstrate the efficiency and effectiveness of our proposed approach.", "Similarity join processing in the streaming environment has many practical applications such as sensor networks, object tracking and monitoring, and so on. Previous works usually assume that stream processing is conducted over precise data. In this paper, we study an important problem of similarity join processing on stream data that inherently contain uncertainty (or called uncertain data streams), where the incoming data at each time stamp are uncertain and imprecise. Specifically, we formalize this problem as join on uncertain data streams (USJ), which can guarantee the accuracy of USJ answers over uncertain data. To tackle the challenges with respect to efficiency and effectiveness such as limited memory and small response time, we propose effective pruning methods on both object and sample levels to filter out false alarms. We integrate the proposed pruning methods into an efficient query procedure that can incrementally maintain the USJ answers. Most importantly, we further design a novel strategy, namely, adaptive superset prejoin (ASP), to maintain a superset of USJ candidate pairs. ASP is in light of our proposed formal cost model such that the average USJ processing cost is minimized. We have conducted extensive experiments to demonstrate the efficiency and effectiveness of our proposed approaches.", "There has been an increased growth in a number of applications that naturally generate large volumes of uncertain data. By the advent of such applications, the support of advanced analysis queries such as the skyline and its variant operators for big uncertain data has become important. 
In this paper, we propose the effective parallel algorithms using MapReduce to process the probabilistic skyline queries for uncertain data modeled by both discrete and continuous models. We present three filtering methods to identify probabilistic non-skyline objects in advance. We next develop a single MapReduce phase algorithm PS-QP-MR by utilizing space partitioning based on a variant of quadtrees to distribute the instances of objects effectively and the enhanced algorithm PS-QPF-MR by applying the three filtering methods additionally. We also propose the workload balancing technique to balance the workload of reduce functions based on the number of machines available. Finally, we present the brute-force algorithms PS-BR-MR and PS-BRF-MR with partitioning randomly and applying the filtering methods. In our experiments, we demonstrate the efficiency and scalability of PS-QPF-MR compared to the other algorithms.", "Probabilistic data have recently become popular in applications such as scientific and geospatial databases. For images and other spatial datasets, probabilistic values can capture the uncertainty in extent and class of the objects in the images. Relating one such dataset to another by spatial joins is an important operation for data management systems. We consider probabilistic spatial join (PSJ) queries, which rank the results according to a score that incorporates both the uncertainties associated with the objects and the distances between them. We present algorithms for two kinds of PSJ queries: Threshold PSJ queries, which return all pairs that score above a given threshold, and top-k PSJ queries, which return the k top-scoring pairs. For threshold PSJ queries, we propose a plane sweep algorithm that, because it exploits the special structure of the problem, runs in 0(n (log n + k)) time, where n is the number of points and k is the number of results. We extend the algorithms to 2-D data and to top-k PSJ queries. To further speed up top-k PSJ queries, we develop a scheduling technique that estimates the scores at the level of blocks, then hands the blocks to the plane sweep algorithm. By finding high-scoring pairs early, the scheduling allows a large portion of the datasets to be pruned. Experiments demonstrate speed-ups of two orders of magnitude." ] }
Incomplete Databases. In the literature on incomplete databases, the most commonly used imputation methods include rule-based @cite_1 , statistics-based @cite_36 , pattern-based @cite_20 , and constraint-based @cite_42 imputation. These existing works may suffer from accuracy problems on sparse data sets: they may fail to find samples with which to impute the missing attributes, which can lead to imputation failure or even wrong imputation results @cite_18 . To avoid or alleviate this problem, in this paper we use DDs to impute missing attributes based on a historical (complete) data repository @math . We leave regression-based imputation approaches (e.g., @cite_32 ) as future work.
{ "cite_N": [ "@cite_18", "@cite_36", "@cite_42", "@cite_1", "@cite_32", "@cite_20" ], "mid": [ "1990086013", "2282784388", "1551374365", "2130836428" ], "abstract": [ "Missing values are a common problem in many real world databases. Inadequate handing of missing data can lead to serious problems in data analysis. A common way to cope with this problem is to use imputation methods to fill missing values with plausible values. This paper proposes GPMI, a multiple imputation method that uses genetic programming as a regression method to estimate missing values. Experiments on eight datasets with six levels of missing values compare GPMI with seven other popular and advanced imputation methods on two measures: the prediction accuracy and the classification accuracy. The results show that, in most cases, GPMI not only achieves better prediction accuracy, but also better classification accuracy than the other imputation methods.", "Incomplete information often occur along with many database applications, e.g., in data integration, data cleaning or data exchange. The idea of data imputation is to fill the missing data with the values of its neighbors who share the same information. Such neighbors could either be identified certainly by editing rules or statistically by relational dependency networks. Unfortunately, owing to data sparsity, the number of neighbors (identified w.r.t. value equality) is rather limited, especially in the presence of data values with variances. In this paper, we argue to extensively enrich similarity neighbors by similarity rules with tolerance to small variations. More fillings can thus be acquired that the aforesaid equality neighbors fail to reveal. To fill the missing values more, we study the problem of maximizing the missing data imputation. Our major contributions include (1) the np-hardness analysis on solving and approximating the problem, (2) exact algorithms for tackling the problem, and (3) efficient approximation with performance guarantees. Experiments on real and synthetic data sets demonstrate that the filling accuracy can be improved.", "We consider the problem of answering queries from databases that may be incomplete. A database is incomplete if some tuples may be missing from some relations, and only a part of each relation is known to be complete. This problem arises in several contexts. For example, systems that provide access to multiple heterogeneous information sources often encounter incomplete sources. The question we address is to determine whether the answer to a specific given query is complete even when the database is incomplete. We present a novel sound and complete algorithm for the answer-completeness problem by relating it to the problem of independence of queries from updates. We also show an important case of the independence problem (and therefore ofthe answer-completeness problem) that can be decided in polynomial time, whereas the best known algorithm for this case is exponential. This case involves updates that are described using a conjunction of comparison predicates. We also describe an algorithm that determines whether the answer to the query is complete in the current state of the database. Finally, we show that our ‘treatment extends naturally to partiallyincorrect databases. 
", "Missing data analyses have received considerable recent attention in the methodological literature, and two "modern" methods, multiple imputation and maximum likelihood estimation, are recommended. The goals of this article are to (a) provide an overview of missing-data theory, maximum likelihood estimation, and multiple imputation; (b) conduct a methodological review of missing-data reporting practices in 23 applied research journals; and (c) provide a demonstration of multiple imputation and maximum likelihood estimation using the Longitudinal Study of American Youth data. The results indicated that explicit discussions of missing data increased substantially between 1999 and 2003, but the use of maximum likelihood estimation or multiple imputation was rare; the studies relied almost exclusively on listwise and pairwise deletion." ] }
1908.08118
2969661104
Neural plasticity is an important functionality of the human brain, in which the numbers of neurons and synapses can shrink or expand in response to stimuli throughout the span of life. We model this dynamic learning process as an @math -norm regularized binary optimization problem, in which each unit of a neural network (e.g., weight, neuron, or channel) is attached with a stochastic binary gate, whose parameters determine the level of activity of a unit in the network. At the beginning, only a small portion of binary gates (and therefore the corresponding neurons) are activated, while the remaining neurons are in a hibernation mode. As the learning proceeds, some neurons may be activated or deactivated if doing so can be justified by the cost-benefit tradeoff measured by the @math -norm regularized objective. As training matures, the probability of transition between activation and deactivation diminishes until a final hardening stage. We demonstrate that all of these learning dynamics can be modulated seamlessly by a single parameter @math . Our neural plasticity network (NPN) can prune or expand a network depending on the initial capacity of the network provided by the user; it also unifies dropout (when @math ) and traditional training of DNNs (when @math ), and interpolates between these two. To the best of our knowledge, this is the first learning framework that unifies network sparsification and network expansion in an end-to-end training pipeline. Extensive experiments on a synthetic dataset and multiple image classification benchmarks demonstrate the superior performance of NPN. We show that both network sparsification and network expansion can yield compact models of similar architectures and of similar predictive accuracies that are close to, or sometimes even higher than, those of baseline networks. We plan to release our code to facilitate research in this area.
Another closely related area is neural architecture search @cite_24 @cite_29 @cite_6 , which searches for an optimal network architecture for a given learning task: the number of layers, the types of layers, layer configurations, activation functions, etc. Given the extremely large search space, reinforcement learning algorithms are typically utilized for efficient exploration. Our NPN can be categorized as a restricted form of neural architecture search in the sense that we start with a fixed architecture and aim to determine an optimal capacity (e.g., the number of weights, neurons, or channels) of the network.
{ "cite_N": [ "@cite_24", "@cite_29", "@cite_6" ], "mid": [ "2773706593", "2957020430", "2888429796", "2905692112" ], "abstract": [ "Techniques for automatically designing deep neural network architectures such as reinforcement learning based approaches have recently shown promising results. However, their success is based on vast computational resources (e.g. hundreds of GPUs), making them difficult to be widely used. A noticeable limitation is that they still design and train each network from scratch during the exploration of the architecture space, which is highly inefficient. In this paper, we propose a new framework toward efficient architecture search by exploring the architecture space based on the current network and reusing its weights. We employ a reinforcement learning agent as the meta-controller, whose action is to grow the network depth or layer width with function-preserving transformations. As such, the previously validated networks can be reused for further exploration, thus saves a large amount of computational cost. We apply our method to explore the architecture space of the plain convolutional neural networks (no skip-connections, branching etc.) on image benchmark datasets (CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method can design highly competitive networks that outperform existing networks using the same design scheme. On CIFAR-10, our model without skip-connections achieves 4.23 test error rate, exceeding a vast majority of modern architectures and approaching DenseNet. Furthermore, by applying our method to explore the DenseNet architecture space, we are able to achieve more accurate networks with fewer parameters.", "We propose a novel hardware and software co-exploration framework for efficient neural architecture search (NAS). Different from existing hardware-aware NAS which assumes a fixed hardware design and explores the neural architecture search space only, our framework simultaneously explores both the architecture search space and the hardware design space to identify the best neural architecture and hardware pairs that maximize both test accuracy and hardware efficiency. Such a practice greatly opens up the design freedom and pushes forward the Pareto frontier between hardware efficiency and test accuracy for better design tradeoffs. The framework iteratively performs a two-level (fast and slow) exploration. Without lengthy training, the fast exploration can effectively fine-tune hyperparameters and prune inferior architectures in terms of hardware specifications, which significantly accelerates the NAS process. Then, the slow exploration trains candidates on a validation set and updates a controller using the reinforcement learning to maximize the expected accuracy together with the hardware efficiency. Experiments on ImageNet show that our co-exploration NAS can find the neural architectures and associated hardware design with the same accuracy, 35.24 higher throughput, 54.05 higher energy efficiency and 136x reduced search time, compared with the state-of-the-art hardware-aware NAS.", "Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, no matter based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method to automatic neural architecture design based on continuous optimization. 
We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) An encoder embeds maps neural network architectures into a continuous space. (2) A predictor takes the continuous representation of a network as input and predicts its accuracy. (3) A decoder maps a continuous representation of a network back to its architecture. The performance predictor and the encoder enable us to perform optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy. Such a better embedding is then decoded to a network by the decoder. Experiments show that the architecture discovered by our method is very competitive for image classification task on CIFAR-10 and language modeling task on PTB, outperforming or on par with the best results of previous architecture search methods. Furthermore, the computational resource is 10 times fewer than typical methods based on RL and EA.", "This paper proposes an efficient neural network (NN) architecture design methodology called Chameleon that honors given resource constraints. Instead of developing new building blocks or using computationally-intensive reinforcement learning algorithms, our approach leverages existing efficient network building blocks and focuses on exploiting hardware traits and adapting computation resources to fit target latency and or energy constraints. We formulate platform-aware NN architecture search in an optimization framework and propose a novel algorithm to search for optimal architectures aided by efficient accuracy and resource (latency and or energy) predictors. At the core of our algorithm lies an accuracy predictor built atop Gaussian Process with Bayesian optimization for iterative sampling. With a one-time building cost for the predictors, our algorithm produces state-of-the-art model architectures on different platforms under given constraints in just minutes. Our results show that adapting computation resources to building blocks is critical to model performance. Without the addition of any bells and whistles, our models achieve significant accuracy improvements against state-of-the-art hand-crafted and automatically designed architectures. We achieve 73.8 and 75.3 top-1 accuracy on ImageNet at 20ms latency on a mobile CPU and DSP. At reduced latency, our models achieve up to 8.5 (4.8 ) and 6.6 (9.3 ) absolute top-1 accuracy improvements compared to MobileNetV2 and MnasNet, respectively, on a mobile CPU (DSP), and 2.7 (4.6 ) and 5.6 (2.6 ) accuracy gains over ResNet-101 and ResNet-152, respectively, on an Nvidia GPU (Intel CPU)." ] }
1908.08118
2969661104
Neural plasticity is an important functionality of the human brain, in which the number of neurons and synapses can shrink or expand in response to stimuli throughout the span of life. We model this dynamic learning process as an @math -norm regularized binary optimization problem, in which each unit of a neural network (e.g., weight, neuron or channel, etc.) is attached to a stochastic binary gate, whose parameters determine the level of activity of a unit in the network. At the beginning, only a small portion of binary gates (and therefore the corresponding neurons) are activated, while the remaining neurons are in a hibernation mode. As the learning proceeds, some neurons might be activated or deactivated if doing so can be justified by the cost-benefit tradeoff measured by the @math -norm regularized objective. As the training matures, the probability of transition between activation and deactivation will diminish until a final hardening stage. We demonstrate that all of these learning dynamics can be modulated seamlessly by a single parameter @math . Our neural plasticity network (NPN) can prune or expand a network depending on the initial network capacity provided by the user; it also unifies dropout (when @math ), traditional training of DNNs (when @math ), and interpolates between these two. To the best of our knowledge, this is the first learning framework that unifies network sparsification and network expansion in an end-to-end training pipeline. Extensive experiments on a synthetic dataset and multiple image classification benchmarks demonstrate the superior performance of NPN. We show that both network sparsification and network expansion can yield compact models of similar architectures and of similar predictive accuracies that are close to or sometimes even higher than those of baseline networks. We plan to release our code to facilitate the research in this area.
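To make the gating mechanism described in this abstract concrete, here is a minimal numpy sketch of stochastic binary gates attached to the hidden units of a toy layer. The Bernoulli parameterization, the gate logits, and the expected-L0 penalty below are illustrative assumptions, not the paper's exact estimator or training procedure.

```python
import numpy as np

# Sketch only: each unit u carries a gate probability p_u; the unit is active
# when its sampled gate z_u = 1, and the L0 penalty is approximated by the
# expected number of active gates, sum(p_u). (Assumed parameterization.)

rng = np.random.default_rng(0)

def sample_gates(logits):
    """Sample binary gates z ~ Bernoulli(sigmoid(logits)); also return probs."""
    p = 1.0 / (1.0 + np.exp(-logits))
    z = (rng.random(p.shape) < p).astype(float)
    return z, p

logits = np.full(8, -2.0)          # sigmoid(-2) ~ 0.12: most units hibernate
z, p = sample_gates(logits)
hidden = rng.normal(size=8) * z    # deactivated units contribute nothing

l0_penalty = p.sum()               # expected number of active units
print("active units:", int(z.sum()), "| expected L0:", round(float(l0_penalty), 2))
```

Training would then trade off the task loss against this expected-L0 term, so gates open (expansion) or close (sparsification) only when the cost-benefit tradeoff justifies it.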
Compared to network sparsification, network expansion is a relatively less explored area. Only a few existing works can dynamically increase the capacity of a network during training. For example, DNC @cite_16 sequentially adds neurons one at a time to the hidden layers of the network until the desired approximation accuracy is achieved. @cite_17 proposes to train a denoising autoencoder (DAE) by adding in new neurons and later merging them with other neurons to prevent redundancy. For convolutional networks, @cite_18 proposes to widen or deepen a pretrained network for better knowledge transfer. Recently, a boosting-style method named AdaNet @cite_25 has been used to adaptively grow the structure while learning the weights. However, all these approaches either only add neurons or add/remove neurons manually. In contrast, our NPN can add or remove (deactivate) neurons during training as needed without human intervention, and is an end-to-end unified framework for network sparsification and expansion.
{ "cite_N": [ "@cite_18", "@cite_16", "@cite_25", "@cite_17" ], "mid": [ "2962939807", "1883420340", "2613498939", "2753410109" ], "abstract": [ "Abstract Although deep neural networks (DNNs) are being a revolutionary power to open up the AI era, the notoriously huge hardware overhead has challenged their applications. Recently, several binary and ternary networks, in which the costly multiply-accumulate operations can be replaced by accumulations or even binary logic operations, make the on-chip training of DNNs quite promising. Therefore there is a pressing need to build an architecture that could subsume these networks under a unified framework that achieves both higher performance and less overhead. To this end, two fundamental issues are yet to be addressed. The first one is how to implement the back propagation when neuronal activations are discrete. The second one is how to remove the full-precision hidden weights in the training phase to break the bottlenecks of memory computation consumption. To address the first issue, we present a multi-step neuronal activation discretization method and a derivative approximation technique that enable the implementing the back propagation algorithm on discrete DNNs. While for the second issue, we propose a discrete state transition (DST) methodology to constrain the weights in a discrete space without saving the hidden weights. Through this way, we build a unified framework that subsumes the binary or ternary networks as its special cases, and under which a heuristic algorithm is provided at the website https: github.com AcrossV Gated-XNOR . More particularly, we find that when both the weights and activations become ternary values, the DNNs can be reduced to sparse binary networks, termed as gated XNOR networks (GXNOR-Nets) since only the event of non-zero weight and non-zero activation enables the control gate to start the XNOR logic operations in the original binary networks. This promises the event-driven hardware design for efficient mobile intelligence. We achieve advanced performance compared with state-of-the-art algorithms. Furthermore, the computational sparsity and the number of states in the discrete space can be flexibly modified to make it suitable for various hardware platforms.", "Recent work has shown deep neural networks (DNNs) to be highly susceptible to well-designed, small perturbations at the input layer, or so-called adversarial examples. Taking images as an example, such distortions are often imperceptible, but can result in 100 mis-classification for a state of the art DNN. We study the structure of adversarial examples and explore network topology, pre-processing and training strategies to improve the robustness of DNNs. We perform various experiments to assess the removability of adversarial examples by corrupting with additional noise and pre-processing with denoising autoencoders (DAEs). We find that DAEs can remove substantial amounts of the adversarial noise. How- ever, when stacking the DAE with the original DNN, the resulting network can again be attacked by new adversarial examples with even smaller distortion. As a solution, we propose Deep Contractive Network, a model with a new end-to-end training procedure that includes a smoothness penalty inspired by the contractive autoencoder (CAE). 
This increases the network robustness to adversarial examples, without a significant performance penalty.", "Given an existing trained neural network, it is often desirable to learn new capabilities without hindering performance of those already learned. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added domain, typically as many as the original network. We propose a method called (DAN) that constrains newly learned filters to be linear combinations of existing ones. DANs precisely preserve performance on the original domain, require a fraction (typically 13%, dependent on network architecture) of the number of parameters compared to standard fine-tuning procedures and converge in fewer cycles of training to a comparable or better level of performance. When coupled with standard network quantization techniques, we further reduce the parameter cost to around 3% of the original with negligible or no loss in accuracy. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method on a range of image classification tasks and explore different aspects of its behavior.", "Due to deep cascades of nonlinear units, deep neural networks (DNNs) can automatically learn non-local generalization priors from data and have achieved high performance in various applications. However, such properties have also opened a door for adversaries to generate the so-called adversarial examples to fool DNNs. Specifically, adversaries can inject small perturbations to the input data and therefore decrease the performance of deep neural networks significantly. Even worse, these adversarial examples have the transferability to attack a black-box model based on finite queries without knowledge of the target model. Therefore, we aim to empirically compare different defensive strategies against various adversary models and analyze the cross-model efficiency for these robust learners. We conclude that the adversarial retraining framework also has the transferability, which can defend adversarial examples without requiring prior knowledge of the adversary models. We compare the general adversarial retraining framework with the state-of-the-art robust deep neural networks, such as distillation, autoencoder stacked with classifier (AEC), and our improved version, IAEC, to evaluate their robustness as well as the vulnerability in terms of the distortion required to mislead the learner. Our experimental results show that the adversarial retraining framework can defend most of the adversarial examples notably and consistently without adding additional vulnerabilities or performance penalty to the original model." ] }
1908.08338
2969823110
Dynamic network slicing has emerged as a promising and fundamental framework for meeting 5G's diverse use cases. As machine learning (ML) is expected to play a pivotal role in the efficient control and management of these networks, in this work we examine the ML-based Quality-of-Transmission (QoT) estimation problem under the dynamic network slicing context, where each slice has to meet a different QoT requirement. We examine ML-based QoT frameworks with the aim of finding QoT models that are fine-tuned according to the diverse QoT requirements. Centralized and distributed frameworks are examined and compared according to their accuracy and training time. We show that the distributed QoT models outperform the centralized QoT model, especially as the number of diverse QoT requirements increases.
Several ML applications have already been developed and explored for optical network planning purposes @cite_9 @cite_8 . In general, the state-of-the-art assumes that the optical network is centrally controlled by an SDN-based optical network controller @cite_11 @cite_20 , equipped with storage, processing, and monitoring capabilities. The main responsibility of the SDN-based controller is to efficiently manage the network resources in such a way that the diverse QoS requirements of the different use cases are met, as closely as possible, by the virtual network topology (VNT). The VNT can be viewed as a single slice (virtual network) that has to best fit all the diverse use cases.
{ "cite_N": [ "@cite_9", "@cite_11", "@cite_20", "@cite_8" ], "mid": [ "2211508709", "2763269482", "2504195863", "1971678403" ], "abstract": [ "Software-defined networking (SDN) has been proposed as a next-generation control and management framework, facilitating network programmability to address emerging dynamic application requirements. The separation of control and data planes in SDN demands the synergistic operation of the two entities for globally optimized performance. We identify the collaboration of the control plane and the data plane in software-defined optical transmission systems as a cyber-physical interdependency where the \"physical\" fiber network provides the “cyber” control network with means to distribute control and signaling messages and in turn is itself operated by these \"cyber\" control messages. We focus on the cyber-physical interdependency in SDN optical transmission from a network robustness perspective and characterize cascading failure behaviors. Our analysis suggests that topological properties pose a significant impact on failure extensibility. We further evaluate the effectiveness of optical layer reconfigurability in improving the resilience of SDN controlled optical transmission systems.", "Modern planetary-scale online services have massive data to transfer over the wide area network (WAN). Due to the tremendous cost of building WANs and the stringent timing requirement of distributed applications, it is critical for network operators to make efficient use of network resources to optimize data transfers. By leveraging software-defined networking (SDN) and reconfigurable optical devices, recent solutions design centralized systems to jointly control the network layer and the optical layer. While these solutions show it is promising to significantly reduce data transfer times by centralized cross-layer control, they do not have any theoretical guarantees on the proposed algorithms. This paper presents approximation algorithms and theoretical analysis for the online transfer scheduling problem over optical WANs. The goal of the scheduling problem is to minimize the makespan (the time to finish all transfers) or the total sum of completion times. We design and analyze various greedy, online scheduling algorithms that can achieve 3-competitive ratio for makespan, 2-competitive ratio for minimum sum completion time for jobs of unit size, and 3α-competitive ratio for jobs of arbitrary transfer size and each node having degree constraint d, where α = 1 when d = 1 and α = 1.86 when d ≥ 2. We also evaluated the performance of these algorithms and compared the performance with prior heuristics.", "Bulk transfer on the wide-area network (WAN) is a fundamental service to many globally-distributed applications. It is challenging to efficiently utilize expensive WAN bandwidth to achieve short transfer completion time and meet mission-critical deadlines. Advancements in software-defined networking (SDN) and optical hardware make it feasible and beneficial to quickly reconfigure optical devices in the optical layer, which brings a new opportunity for traffic management on the WAN. We present Owan, a novel traffic management system that optimizes wide-area bulk transfers with centralized joint control of the optical and network layers. can dynamically change the network-layer topology by reconfiguring the optical devices. 
We develop efficient algorithms to jointly optimize optical circuit setup, routing and rate allocation, and dynamically adapt them to traffic demand changes. We have built a prototype of Owan with commodity optical and electrical hardware. Testbed experiments and large-scale simulations on two ISP topologies and one inter-DC topology show that Owan completes transfers up to 4.45x faster on average, and up to 1.36x more transfers meet their deadlines, as compared to prior methods that only control the network layer.", "This paper presents a new Software-Defined Networking (SDN) control framework for Quality of Service (QoS) provisioning. The proposed SDN controller automatically and flexibly programs network devices to provide the required QoS level for multimedia applications. Centralized control monitors the state of the network resources and performs smart traffic management according to the collected information. Besides QoS provisioning for priority flows, the proposed solution aims at minimizing degradation of best-effort traffic. The experimental results show significant performance improvement under high traffic load compared to traditional best-effort and IntServ service models." ] }
1908.08289
2969676305
Existing deep learning approaches on 3d human pose estimation for videos are either based on Recurrent or Convolutional Neural Networks (RNNs or CNNs). However, RNN-based frameworks can only tackle sequences with limited frames because sequential models are sensitive to bad frames and tend to drift over long sequences. Although existing CNN-based temporal frameworks attempt to address the sensitivity and drift problems by concurrently processing all input frames in the sequence, the existing state-of-the-art CNN-based framework is limited to 3d pose estimation of a single frame from a sequential input. In this paper, we propose a deep learning-based framework that utilizes matrix factorization for sequential 3d human pose estimation. Our approach processes all input frames concurrently to avoid the sensitivity and drift problems, and yet outputs the 3d pose estimates for every frame in the input sequence. More specifically, the 3d poses in all frames are represented as a motion matrix factorized into a trajectory bases matrix and a trajectory coefficient matrix. The trajectory bases matrix is precomputed from matrix factorization approaches such as Singular Value Decomposition (SVD) or Discrete Cosine Transform (DCT), and the problem of sequential 3d pose estimation is reduced to training a deep network to regress the trajectory coefficient matrix. We demonstrate the effectiveness of our framework on long sequences by achieving state-of-the-art performances on multiple benchmark datasets. Our source code is available at: this https URL.
The inherent depth ambiguity in 3d pose estimation from monocular images limits the estimation accuracy. Extensive research has been done to exploit extra information contained in temporal sequences. Zhou et al. @cite_35 formulate an optimization problem to search for the 3d configuration with the highest probability given 2d confidence maps and solve the problem using Expectation-Maximization. Tekin et al. @cite_5 use a CNN to align bounding boxes of consecutive frames and then generate a spatial-temporal volume, based on which they extract 3d HOG features and regress the 3d pose for the central frame. Mehta et al. @cite_4 propose a real-time system for 3d pose estimation and apply temporal filtering to yield temporally consistent 3d poses.
{ "cite_N": [ "@cite_35", "@cite_5", "@cite_4" ], "mid": [ "2963688992", "2785641712", "2285449971", "2769237672" ], "abstract": [ "This paper addresses the challenge of 3D full-body human pose estimation from a monocular image sequence. Here, two cases are considered: (i) the image locations of the human joints are provided and (ii) the image locations of joints are unknown. In the former case, a novel approach is introduced that integrates a sparsity-driven 3D geometric prior and temporal smoothness. In the latter case, the former case is extended by treating the image locations of the joints as latent variables to take into account considerable uncertainties in 2D joint locations. A deep fully convolutional network is trained to predict the uncertainty maps of the 2D joint locations. The 3D pose estimates are realized via an Expectation-Maximization algorithm over the entire sequence, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Empirical evaluation on the Human3.6M dataset shows that the proposed approaches achieve greater 3D pose estimation accuracy over state-of-the-art baselines. Further, the proposed approach outperforms a publicly available 2D pose estimation baseline on the challenging PennAction dataset.", "Most recent approaches to monocular 3D pose estimation rely on Deep Learning. They either train a Convolutional Neural Network to directly regress from an image to a 3D pose, which ignores the dependencies between human joints, or model these dependencies via a max-margin structured learning framework, which involves a high computational cost at inference time. In this paper, we introduce a Deep Learning regression architecture for structured prediction of 3D human pose from monocular images or 2D joint location heatmaps that relies on an overcomplete autoencoder to learn a high-dimensional latent pose representation and accounts for joint dependencies. We further propose an efficient Long Short-Term Memory network to enforce temporal consistency on 3D pose predictions. We demonstrate that our approach achieves state-of-the-art performance both in terms of structure preservation and prediction accuracy on standard 3D human pose estimation benchmarks.", "This paper addresses the challenge of 3D full-body human pose estimation from a monocular image sequence. Here, two cases are considered: (i) the image locations of the human joints are provided and (ii) the image locations of joints are unknown. In the former case, a novel approach is introduced that integrates a sparsity-driven 3D geometric prior and temporal smoothness. In the latter case, the former case is extended by treating the image locations of the joints as latent variables. A deep fully convolutional network is trained to predict the uncertainty maps of the 2D joint locations. The 3D pose estimates are realized via an Expectation-Maximization algorithm over the entire sequence, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Empirical evaluation on the Human3.6M dataset shows that the proposed approaches achieve greater 3D pose estimation accuracy over state-of-the-art baselines. Further, the proposed approach outperforms a publicly available 2D pose estimation baseline on the challenging PennAction dataset.", "In this work, we address the problem of 3D human pose estimation from a sequence of 2D human poses. 
Although the recent success of deep networks has led many state-of-the-art methods for 3D pose estimation to train deep networks end-to-end to predict from images directly, the top-performing approaches have shown the effectiveness of dividing the task of 3D pose estimation into two steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from images and then mapping them into 3D space. They also showed that a low-dimensional representation like 2D locations of a set of joints can be discriminative enough to estimate 3D pose with high accuracy. However, estimation of 3D pose for individual frames leads to temporally incoherent estimates due to independent error in each frame causing jitter. Therefore, in this work we utilize the temporal information across a sequence of 2D joint locations to estimate a sequence of 3D poses. We designed a sequence-to-sequence network composed of layer-normalized LSTM units with shortcut connections connecting the input to the output on the decoder side and imposed a temporal smoothness constraint during training. We found that the knowledge of temporal consistency improves the best reported result on the Human3.6M dataset by approximately 12.2% and helps our network to recover temporally consistent 3D poses over a sequence of images even when the 2D pose detector fails." ] }
1908.08289
2969676305
Existing deep learning approaches on 3d human pose estimation for videos are either based on Recurrent or Convolutional Neural Networks (RNNs or CNNs). However, RNN-based frameworks can only tackle sequences with limited frames because sequential models are sensitive to bad frames and tend to drift over long sequences. Although existing CNN-based temporal frameworks attempt to address the sensitivity and drift problems by concurrently processing all input frames in the sequence, the existing state-of-the-art CNN-based framework is limited to 3d pose estimation of a single frame from a sequential input. In this paper, we propose a deep learning-based framework that utilizes matrix factorization for sequential 3d human pose estimation. Our approach processes all input frames concurrently to avoid the sensitivity and drift problems, and yet outputs the 3d pose estimates for every frame in the input sequence. More specifically, the 3d poses in all frames are represented as a motion matrix factorized into a trajectory bases matrix and a trajectory coefficient matrix. The trajectory bases matrix is precomputed from matrix factorization approaches such as Singular Value Decomposition (SVD) or Discrete Cosine Transform (DCT), and the problem of sequential 3d pose estimation is reduced to training a deep network to regress the trajectory coefficient matrix. We demonstrate the effectiveness of our framework on long sequences by achieving state-of-the-art performances on multiple benchmark datasets. Our source code is available at: this https URL.
Recently, RNN-based frameworks have been used to deal with sequential input data. Lin et al. @cite_1 use a multi-stage framework based on Long Short-term Memory (LSTM) units to estimate the 3d pose from the extracted 2d features and the estimated 3d pose in the previous stage. Coskun et al. @cite_32 propose to learn a human motion model using a Kalman filter and implement it with LSTMs. Hossain et al. @cite_36 design a sequence-to-sequence network with LSTM units to first encode a sequence of motions in the form of 2d joint locations and then decode the 3d poses of the sequence. However, RNNs are sensitive to erroneous inputs and tend to drift over long sequences. To overcome the shortcomings of RNNs, a CNN-based framework is proposed by Pavllo et al. @cite_26 to aggregate temporal information using dilated convolutions. Despite being successful at regressing a single frame from a sequence of inputs, it cannot concurrently output the 3d pose estimations for all frames in the sequence.
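As a rough illustration of the dilated-convolution idea mentioned above (a numpy sketch under simplified assumptions, not the actual framework of Pavllo et al.), a dilated 1d filter widens its temporal receptive field without adding parameters or recurrence:

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Valid 1d convolution of signal x (length T) with kernel w at a dilation."""
    k = len(w)
    span = (k - 1) * dilation                 # temporal extent of the kernel
    out_len = len(x) - span
    return np.array([np.dot(w, x[t : t + span + 1 : dilation])
                     for t in range(out_len)])

x = np.sin(np.linspace(0.0, 6.0, 40))         # toy 1d joint trajectory
h = dilated_conv1d(x, np.array([0.25, 0.5, 0.25]), dilation=3)
print(h.shape)   # a 3-tap kernel at dilation 3 sees 7 frames of context
```

Stacking such layers with growing dilations lets a CNN aggregate long temporal context while processing all frames in parallel, which is what makes it less prone to the drift that affects RNNs.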
{ "cite_N": [ "@cite_36", "@cite_26", "@cite_1", "@cite_32" ], "mid": [ "2963447094", "2962849564", "2746131160", "2769237672" ], "abstract": [ "Human actions captured in video sequences are threedimensional signals characterizing visual appearance and motion dynamics. To learn action patterns, existing methods adopt Convolutional and or Recurrent Neural Networks (CNNs and RNNs). CNN based methods are effective in learning spatial appearances, but are limited in modeling long-term motion dynamics. RNNs, especially Long Short- Term Memory (LSTM), are able to learn temporal motion dynamics. However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long.,,In this work, we propose Lattice-LSTM (L2STM), which extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This method effectively enhances the ability to model dynamics across time and addresses the non-stationary issue of long-term motion dynamics without significantly increasing the model complexity. Additionally, we introduce a novel multi-modal training procedure for training our network. Unlike traditional two-stream architectures which use RGB and optical flow information as input, our two-stream model leverages both modalities to jointly train both input gates and both forget gates in the network rather than treating the two streams as separate entities with no information about the other. We apply this end-to-end system to benchmark datasets (UCF-101 and HMDB-51) of human action recognition. Experiments show that on both datasets, our proposed method outperforms all existing ones that are based on LSTM and or CNNs of similar model complexities.", "We observed that recent state-of-the-art results on single image human pose estimation were achieved by multistage Convolution Neural Networks (CNN). Notwithstanding the superior performance on static images, the application of these models on videos is not only computationally intensive, it also suffers from performance degeneration and flicking. Such suboptimal results are mainly attributed to the inability of imposing sequential geometric consistency, handling severe image quality degradation (e.g. motion blur and occlusion) as well as the inability of capturing the temporal correlation among video frames. In this paper, we proposed a novel recurrent network to tackle these problems. We showed that if we were to impose the weight sharing scheme to the multi-stage CNN, it could be re-written as a Recurrent Neural Network (RNN). This property decouples the relationship among multiple network stages and results in significantly faster speed in invoking the network for videos. It also enables the adoption of Long Short-Term Memory (LSTM) units between video frames. We found such memory augmented RNN is very effective in imposing geometric consistency among frames. It also well handles input quality degradation in videos while successfully stabilizes the sequential outputs. The experiments showed that our approach significantly outperformed current state-of-the-art methods on two large-scale video pose estimation benchmarks. 
We also explored the memory cells inside the LSTM and provided insights on why such a mechanism would benefit the prediction for video-based pose estimations.", "Human actions captured in video sequences are three-dimensional signals characterizing visual appearance and motion dynamics. To learn action patterns, existing methods adopt Convolutional and/or Recurrent Neural Networks (CNNs and RNNs). CNN based methods are effective in learning spatial appearances, but are limited in modeling long-term motion dynamics. RNNs, especially Long Short-Term Memory (LSTM), are able to learn temporal motion dynamics. However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long. In this work, we propose Lattice-LSTM (L2STM), which extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This method effectively enhances the ability to model dynamics across time and addresses the non-stationary issue of long-term motion dynamics without significantly increasing the model complexity. Additionally, we introduce a novel multi-modal training procedure for training our network. Unlike traditional two-stream architectures which use RGB and optical flow information as input, our two-stream model leverages both modalities to jointly train both input gates and both forget gates in the network rather than treating the two streams as separate entities with no information about the other. We apply this end-to-end system to benchmark datasets (UCF-101 and HMDB-51) of human action recognition. Experiments show that on both datasets, our proposed method outperforms all existing ones that are based on LSTM and/or CNNs of similar model complexities.", "In this work, we address the problem of 3D human pose estimation from a sequence of 2D human poses. Although the recent success of deep networks has led many state-of-the-art methods for 3D pose estimation to train deep networks end-to-end to predict from images directly, the top-performing approaches have shown the effectiveness of dividing the task of 3D pose estimation into two steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from images and then mapping them into 3D space. They also showed that a low-dimensional representation like 2D locations of a set of joints can be discriminative enough to estimate 3D pose with high accuracy. However, estimation of 3D pose for individual frames leads to temporally incoherent estimates due to independent error in each frame causing jitter. Therefore, in this work we utilize the temporal information across a sequence of 2D joint locations to estimate a sequence of 3D poses. We designed a sequence-to-sequence network composed of layer-normalized LSTM units with shortcut connections connecting the input to the output on the decoder side and imposed a temporal smoothness constraint during training. We found that the knowledge of temporal consistency improves the best reported result on the Human3.6M dataset by approximately 12.2% and helps our network to recover temporally consistent 3D poses over a sequence of images even when the 2D pose detector fails." ] }
1908.08289
2969676305
Existing deep learning approaches on 3d human pose estimation for videos are either based on Recurrent or Convolutional Neural Networks (RNNs or CNNs). However, RNN-based frameworks can only tackle sequences with limited frames because sequential models are sensitive to bad frames and tend to drift over long sequences. Although existing CNN-based temporal frameworks attempt to address the sensitivity and drift problems by concurrently processing all input frames in the sequence, the existing state-of-the-art CNN-based framework is limited to 3d pose estimation of a single frame from a sequential input. In this paper, we propose a deep learning-based framework that utilizes matrix factorization for sequential 3d human pose estimation. Our approach processes all input frames concurrently to avoid the sensitivity and drift problems, and yet outputs the 3d pose estimates for every frame in the input sequence. More specifically, the 3d poses in all frames are represented as a motion matrix factorized into a trajectory bases matrix and a trajectory coefficient matrix. The trajectory bases matrix is precomputed from matrix factorization approaches such as Singular Value Decomposition (SVD) or Discrete Cosine Transform (DCT), and the problem of sequential 3d pose estimation is reduced to training a deep network to regress the trajectory coefficient matrix. We demonstrate the effectiveness of our framework on long sequences by achieving state-of-the-art performances on multiple benchmark datasets. Our source code is available at: this https URL.
Inspired by matrix factorization methods commonly used in Structure-from-Motion (SfM) @cite_23 and non-rigid SfM @cite_33 , several works @cite_3 @cite_45 @cite_21 on 3d human pose estimation factorize the sequence of 3d human poses into a linear combination of shape bases. Akhter et al. @cite_42 suggest a duality of the factorization in the trajectory space. We extend the idea of matrix factorization to learning a deep network that estimates the coefficients of the trajectory bases from a sequence of 2d poses as inputs. The 3d poses of all frames are recovered concurrently as the linear combinations of the trajectory bases with the estimated coefficients.
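The factorization this paragraph describes can be sketched in a few lines of numpy. Here the DCT-style trajectory bases and a least-squares fit stand in for the deep network that regresses the coefficient matrix; the sizes and the toy motion below are illustrative assumptions:

```python
import numpy as np

# Motion matrix M (F frames x 3J coordinates) approximated as B @ C, where
# B holds K precomputed trajectory bases and C is the coefficient matrix
# (regressed by a network in the paper; fit by least squares here).

F, K, J = 50, 8, 17
t = np.arange(F)
# DCT-II style bases, one column per frequency (illustrative choice of bases)
B = np.stack([np.cos(np.pi * (t + 0.5) * k / F) for k in range(K)], axis=1)

rng = np.random.default_rng(0)
M = np.cumsum(rng.normal(scale=0.1, size=(F, 3 * J)), axis=0)  # toy smooth motion
C, *_ = np.linalg.lstsq(B, M, rcond=None)                      # (K x 3J) coefficients
M_hat = B @ C                                                  # all frames recovered at once
print("relative reconstruction error:",
      np.linalg.norm(M - M_hat) / np.linalg.norm(M))
```

Because the bases span smooth trajectories over the whole window, estimating the small coefficient matrix yields the 3d poses of every frame concurrently, which is the property the framework exploits.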
{ "cite_N": [ "@cite_33", "@cite_21", "@cite_42", "@cite_3", "@cite_45", "@cite_23" ], "mid": [ "2792747672", "2769237672", "2557698284", "2131417778" ], "abstract": [ "We propose an end-to-end architecture for joint 2D and 3D human pose estimation in natural images. Key to our approach is the generation and scoring of a number of pose proposals per image, which allows us to predict 2D and 3D poses of multiple people simultaneously. Hence, our approach does not require an approximate localization of the humans for initialization. Our Localization-Classification-Regression architecture, named LCR-Net, contains 3 main components: 1) the pose proposal generator that suggests candidate poses at different locations in the image; 2) a classifier that scores the different pose proposals; and 3) a regressor that refines pose proposals both in 2D and 3D. All three stages share the convolutional feature layers and are trained jointly. The final pose estimation is obtained by integrating over neighboring pose hypotheses, which is shown to improve over a standard non maximum suppression algorithm. Our method recovers full-body 2D and 3D poses, hallucinating plausible body parts when the persons are partially occluded or truncated by the image boundary. Our approach significantly outperforms the state of the art in 3D pose estimation on Human3.6M, a controlled environment. Moreover, it shows promising results on real images for both single and multi-person subsets of the MPII 2D pose benchmark.", "In this work, we address the problem of 3D human pose estimation from a sequence of 2D human poses. Although the recent success of deep networks has led many state-of-the-art methods for 3D pose estimation to train deep networks end-to-end to predict from images directly, the top-performing approaches have shown the effectiveness of dividing the task of 3D pose estimation into two steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from images and then mapping them into 3D space. They also showed that a low-dimensional representation like 2D locations of a set of joints can be discriminative enough to estimate 3D pose with high accuracy. However, estimation of 3D pose for individual frames leads to temporally incoherent estimates due to independent error in each frame causing jitter. Therefore, in this work we utilize the temporal information across a sequence of 2D joint locations to estimate a sequence of 3D poses. We designed a sequence-to-sequence network composed of layer-normalized LSTM units with shortcut connections connecting the input to the output on the decoder side and imposed temporal smoothness constraint during training. We found that the knowledge of temporal consistency improves the best reported result on Human3.6M dataset by approximately (12.2 ) and helps our network to recover temporally consistent 3D poses over a sequence of images even when the 2D pose detector fails.", "This paper addresses the problem of 3D human pose estimation from a single image. We follow a standard two-step pipeline by first detecting the 2D position of the N body joints, and then using these observations to infer 3D pose. For the first step, we use a recent CNN-based detector. For the second step, most existing approaches perform 2N-to-3N regression of the Cartesian joint coordinates. We show that more precise pose estimates can be obtained by representing both the 2D and 3D human poses using NxN distance matrices, and formulating the problem as a 2D-to-3D distance matrix regression. 
For learning such a regressor we leverage on simple Neural Network architectures, which by construction, enforce positivity and symmetry of the predicted matrices. The approach has also the advantage to naturally handle missing observations and allowing to hypothesize the position of non-observed joints. Quantitative results on Humaneva and Human3.6M datasets demonstrate consistent performance gains over state-of-the-art. Qualitative evaluation on the images in-the-wild of the LSP dataset, using the regressor learned on Human3.6M, reveals very promising generalization results.", "We introduce a framework for unconstrained 3D human upper body pose estimation from multiple camera views in complex environment. Its main novelty lies in the integration of three components: single-frame pose recovery, temporal integration and model texture adaptation. Single-frame pose recovery consists of a hypothesis generation stage, in which candidate 3D poses are generated, based on probabilistic hierarchical shape matching in each camera view. In the subsequent hypothesis verification stage, the candidate 3D poses are re-projected into the other camera views and ranked according to a multi-view likelihood measure. Temporal integration consists of computing K-best trajectories combining a motion model and observations in a Viterbi-style maximum-likelihood approach. Poses that lie on the best trajectories are used to generate and adapt a texture model, which in turn enriches the shape likelihood measure used for pose recovery. The multiple trajectory hypotheses are used to generate pose predictions, augmenting the 3D pose candidates generated at the next time step. We demonstrate that our approach outperforms the state-of-the-art in experiments with large and challenging real-world data from an outdoor setting." ] }
1908.08015
2969678456
Among various optimization algorithms, ADAM can achieve outstanding performance and has been widely used in model learning. ADAM has the advantages of fast convergence with both momentum and an adaptive learning rate. For deep neural network learning problems, since their objective functions are nonconvex, ADAM can also get stuck in local optima easily. To resolve such a problem, the genetic evolutionary ADAM (GADAM) algorithm, which combines ADAM with the genetic algorithm, was introduced in recent years. To further maximize the advantages of the GADAM model, we propose to implement the boosting strategy for unit model training in GADAM. In this paper, we introduce a novel optimization algorithm, namely Boosting-based GADAM (BGADAM). We will show that adding the boosting strategy to the GADAM model can help unit models jump out of local optima and converge to better solutions.
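As a loose illustration of the interplay this abstract describes (local Adam training of a population of unit models followed by a genetic step), here is a self-contained numpy sketch on a toy nonconvex objective. The objective, hyperparameters, and crossover scheme are made-up assumptions for illustration, not the BGADAM algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):                          # toy nonconvex objective with many local optima
    return float(np.sum(np.sin(3.0 * w) + 0.1 * w ** 2))

def grad(w):
    return 3.0 * np.cos(3.0 * w) + 0.2 * w

def adam_steps(w, steps=50, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """Standard Adam updates on the toy objective (local training of one unit model)."""
    m = np.zeros_like(w); v = np.zeros_like(w)
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        w = w - lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    return w

pop = [rng.normal(size=4) for _ in range(6)]       # population of unit models
for generation in range(5):
    pop = [adam_steps(w) for w in pop]             # local Adam training
    pop.sort(key=loss)                             # fitness = (lower) loss
    child = np.where(rng.random(4) < 0.5, pop[0], pop[1])  # uniform crossover of the two fittest
    child = child + rng.normal(scale=0.1, size=4)          # mutation
    pop[-1] = child                                # replace the worst model
print("best loss found:", round(loss(pop[0]), 4))
```

The genetic recombination is what gives models stuck in a poor basin a chance to restart near a better one, which plain Adam alone cannot do.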
: Deep learning models have achieved great success in recent years; representative examples include the convolutional neural network (CNN) @cite_12 @cite_1 . CNN has mainly been used to deal with image data and shows outstanding performance on various computer vision tasks. Besides CNN, there also exist many other types of deep learning models, e.g., the recurrent neural network @cite_10 @cite_9 , deep autoencoder @cite_21 , deep Boltzmann machine @cite_13 , and GAN @cite_5 , etc.
{ "cite_N": [ "@cite_13", "@cite_9", "@cite_21", "@cite_1", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "2747898905", "1929903369", "2953264111", "2606006859" ], "abstract": [ "Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https: github.com tyshiwo DRRN_CVPR17.", "Deep convolutional neural networks (CNN) have seen tremendous success in large-scale generic object recognition. In comparison with generic object recognition, fine-grained image classification (FGIC) is much more challenging because (i) fine-grained labeled data is much more expensive to acquire (usually requiring domain expertise); (ii) there exists large intra-class and small inter-class variance. Most recent work exploiting deep CNN for image recognition with small training data adopts a simple strategy: pre-train a deep CNN on a large-scale external dataset (e.g., ImageNet) and fine-tune on the small-scale target data to fit the specific classification task. In this paper, beyond the fine-tuning strategy, we propose a systematic framework of learning a deep CNN that addresses the challenges from two new perspectives: (i) identifying easily annotated hyper-classes inherent in the fine-grained data and acquiring a large number of hyper-class-labeled images from readily available external sources (e.g., image search engines), and formulating the problem into multitask learning; (ii) a novel learning model by exploiting a regularization between the fine-grained recognition model and the hyper-class recognition model. We demonstrate the success of the proposed framework on two small-scale fine-grained datasets (Stanford Dogs and Stanford Cars) and on a large-scale car dataset that we collected.", "Deep convolutional neural networks (CNNs) have been immensely successful in many high-level computer vision tasks given large labeled datasets. However, for video semantic object segmentation, a domain where labels are scarce, effectively exploiting the representation power of CNN with limited training data remains a challenge. Simply borrowing the existing pretrained CNN image recognition model for video segmentation task can severely hurt performance. We propose a semi-supervised approach to adapting CNN image recognition model trained from labeled image data to the target domain exploiting both semantic evidence learned from CNN, and the intrinsic structures of video data. By explicitly modeling and compensating for the domain shift from the source domain to the target domain, this proposed approach underpins a robust semantic object segmentation method against the changes in appearance, shape and occlusion in natural videos. 
We present extensive experiments on challenging datasets that demonstrate the superior performance of our approach compared with the state-of-the-art methods.", "The convolutional neural network (CNN), which is one of the deep learning models, has seen much success in a variety of computer vision tasks. However, designing CNN architectures still requires expert knowledge and a lot of trial and error. In this paper, we attempt to automatically construct CNN architectures for an image classification task based on Cartesian genetic programming (CGP). In our method, we adopt highly functional modules, such as convolutional blocks and tensor concatenation, as the node functions in CGP. The CNN structure and connectivity represented by the CGP encoding method are optimized to maximize the validation accuracy. To evaluate the proposed method, we constructed a CNN architecture for the image classification task with the CIFAR-10 dataset. The experimental result shows that the proposed method can be used to automatically find the competitive CNN architecture compared with state-of-the-art models." ] }
1908.08015
2969678456
Among various optimization algorithms, ADAM can achieve outstanding performance and has been widely used in model learning. ADAM has the advantages of fast convergence with both momentum and an adaptive learning rate. For deep neural network learning problems, since their objective functions are nonconvex, ADAM can also get stuck in local optima easily. To resolve such a problem, the genetic evolutionary ADAM (GADAM) algorithm, which combines ADAM with the genetic algorithm, was introduced in recent years. To further maximize the advantages of the GADAM model, we propose to implement the boosting strategy for unit model training in GADAM. In this paper, we introduce a novel optimization algorithm, namely Boosting-based GADAM (BGADAM). We will show that adding the boosting strategy to the GADAM model can help unit models jump out of local optima and converge to better solutions.
: Boosting was proposed by @cite_8 . Given a training dataset containing @math training examples, a batch of @math training examples is generated by random sampling with replacement. We can generate @math training sets from the same original training data by applying the sampling @math times @cite_20 ; these sets are then used to train @math different base models in boosting.
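In code, the sampling scheme described above reduces to drawing m index sets with replacement from one dataset (a minimal sketch; the names and sizes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_sets(n, n_prime, m):
    """Return m index arrays, each with n' indices drawn with replacement from [0, n)."""
    return [rng.integers(0, n, size=n_prime) for _ in range(m)]

X = rng.normal(size=(100, 5))               # toy dataset: n = 100 examples, 5 features
index_sets = bootstrap_sets(n=100, n_prime=80, m=4)
batches = [X[idx] for idx in index_sets]    # one training set per base model
print([b.shape for b in batches])           # [(80, 5), (80, 5), (80, 5), (80, 5)]
```

Because sampling is with replacement, the m sets overlap but differ, which is what gives the base models the diversity that the ensemble relies on.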
{ "cite_N": [ "@cite_20", "@cite_8" ], "mid": [ "2024046085", "2103504567", "1975846642", "1605688901" ], "abstract": [ "Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data and then taking a weighted majority vote of the sequence of classifiers thus produced. For many classification algorithms, this simple strategy results in dramatic improvements in performance. We show that this seemingly mysterious phenomenon can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multiclass generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multiclass generalizations of boosting in most situations, and far superior in some. We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally, making it more suitable to large-scale data mining applications.", "Boosting algorithms and successful applications thereof abound for classification and regression learning problems, but not for unsupervised learning. We propose a sequential approach to adding features to a random field model by training them to improve classification performance between the data and an equal-sized sample of \"negative examples\" generated from the model's current estimate of the data density. Training in each boosting round proceeds in three stages: first we sample negative examples from the model's current Boltzmann distribution. Next, a feature is trained to improve classification performance between data and negative examples. Finally, a coefficient is learned which determines the importance of this feature relative to ones already in the pool. Negative examples only need to be generated once to learn each new feature. The validity of the approach is demonstrated on binary digits and continuous synthetic data.", "One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. 
Finally, we compare our explanation to those based on the bias-variance decomposition.", "Bagging and boosting are methods that generate a diverse ensemble of classifiers by manipulating the training data given to a “base” learning algorithm. Breiman has pointed out that they rely for their effectiveness on the instability of the base learning algorithm. An alternative approach to generating an ensemble is to randomize the internal decisions made by the base algorithm. This general approach has been studied previously by Ali and Pazzani and by Dietterich and Kong. This paper compares the effectiveness of randomization, bagging, and boosting for improving the performance of the decision-tree algorithm C4.5. The experiments show that in situations with little or no classification noise, randomization is competitive with (and perhaps slightly superior to) bagging but not as accurate as boosting. In situations with substantial classification noise, bagging is much better than boosting, and sometimes better than randomization." ] }
1908.07836
2969860478
Recognizing the layout of unstructured digital documents is an important step when parsing the documents into a structured, machine-readable format for downstream applications. Deep neural networks that are developed for computer vision have been proven to be an effective method to analyze the layout of document images. However, document layout datasets that are currently publicly available are several orders of magnitude smaller than established computer vision datasets. Models have to be trained by transfer learning from a base model that is pre-trained on a traditional computer vision dataset. In this paper, we develop the PubLayNet dataset for document layout analysis by automatically matching the XML representations and the content of over 1 million PDF articles that are publicly available on PubMed Central. The size of the dataset is comparable to established computer vision datasets, containing over 360 thousand document images, where typical document layout elements are annotated. The experiments demonstrate that deep neural networks trained on PubLayNet accurately recognize the layout of scientific articles. The pre-trained models are also a more effective base model for transfer learning on a different document domain. We release the dataset (https://github.com/ibm-aur-nlp/PubLayNet) to support development and evaluation of more advanced models for document layout analysis.
Existing datasets for document layout analysis rely on manual annotation. Some of these datasets are used in document processing challenges. Examples of these efforts are available in several ICDAR challenges @cite_19 , which also cover complex layouts @cite_18 @cite_7 . The US NIH National Library of Medicine has provided the Medical Article Records Groundtruth (MARG) https://ceb.nlm.nih.gov/inactive-communications-engineering-branch-projects/medical-article-records-groundtruth-marg , which is obtained from scanned article pages.
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_7" ], "mid": [ "2114388055", "2151765755", "1585998064", "2097553584" ], "abstract": [ "Objectives: We examine recent published research on the extraction of information from textual documents in the Electronic Health Record (EHR). Methods: Literature review of the research published after 1995, based on PubMed, conference proceedings, and the ACM Digital Library, as well as on relevant publications referenced in papers already included. Results: 174 publications were selected and are discussed in this review in terms of methods used, pre-processing of textual documents, contextual features detection and analysis, extraction of information in general, extraction of codes and of information for decision-support and enrichment of the EHR, information extraction for surveillance, research, automated terminology management, and data mining, and de-identification of clinical text. Conclusions: Performance of information extraction systems with clinical text has improved since the last systematic review in 1995, but they are still rarely applied outside of the laboratory they have been developed in. Competitive challenges for information extraction from clinical text, along with the availability of annotated clinical text corpora, and further improvements in system performance are important factors to stimulate advances in this field and to increase the acceptance and usage of these systems in concrete clinical and biomedical research contexts. Cli", "There is a significant need for a realistic dataset on which to evaluate layout analysis methods and examine their performance in detail. This paper presents a new dataset (and the methodology used to create it) based on a wide range of contemporary documents. Strong emphasis is placed on comprehensive and detailed representation of both complex and simple layouts, and on colour originals. In-depth information is recorded both at the page and region level. Ground truth is efficiently created using a new semi-automated tool and stored in a new comprehensive XML representation, the PAGE format. The dataset can be browsed and searched via a web-based front end to the underlying database and suitable subsets (relevant to specific evaluation goals) can be selected and downloaded.", "In document image understanding, public datasets with ground-truth are an important part of scientific work. They are not only helpful for developing new methods, but also provide a way of comparing performance. Generating these datasets, however, is time consuming and cost-intensive work, requiring a lot of manual effort. In this paper we both propose a way to semi-automatically generate ground-truthed datasets for newspapers and provide a comprehensive dataset. The focus of this paper is layout analysis ground truth. The proposed two step approach consists of a module which automatically creates layouts and an image matching module which allows to map the ground truth information from the synthetic layout to the scanned version. In the first step, layouts are generated automatically from a news corpus. The output consists of a digital newspaper (PDF file) and an XML file containing geometric and logical layout information. In the second step, the PDF files are printed, scanned and aligned with the synthetic image obtained by rendering the PDF. Finally, the geometric and logical layout ground truth is mapped onto the scanned image.", "The volume of biomedical literature has experienced explosive growth in recent years. 
This is reflected in the corresponding increase in the size of MEDLINE®, the largest bibliographic database of biomedical citations. Indexers at the US National Library of Medicine (NLM) need efficient tools to help them accommodate the ensuing workload. After reviewing issues in the automatic assignment of Medical Subject Headings (MeSH® terms) to biomedical text, we focus more specifically on the new subheading attachment feature for NLM's Medical Text Indexer (MTI). Natural Language Processing, statistical, and machine learning methods of producing automatic MeSH main heading/subheading pair recommendations were assessed independently and combined. The best combination achieves 48% precision and 30% recall. After validation by NLM indexers, a suitable combination of the methods presented in this paper was integrated into MTI as a subheading attachment feature producing MeSH indexing recommendations compliant with current state-of-the-art indexing practice.
1908.07836
2969860478
Recognizing the layout of unstructured digital documents is an important step when parsing the documents into a structured, machine-readable format for downstream applications. Deep neural networks developed for computer vision have proven to be an effective method for analyzing the layout of document images. However, document layout datasets that are currently publicly available are several orders of magnitude smaller than established computer vision datasets. Models have to be trained by transfer learning from a base model that is pre-trained on a traditional computer vision dataset. In this paper, we develop the PubLayNet dataset for document layout analysis by automatically matching the XML representations and the content of over 1 million PDF articles that are publicly available on PubMed Central. The size of the dataset is comparable to established computer vision datasets, containing over 360 thousand document images, where typical document layout elements are annotated. The experiments demonstrate that deep neural networks trained on PubLayNet accurately recognize the layout of scientific articles. The pre-trained models are also a more effective base model for transfer learning on a different document domain. We release the dataset (https://github.com/ibm-aur-nlp/PubLayNet) to support development and evaluation of more advanced models for document layout analysis.
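The automatic matching step described above can be illustrated with a minimal sketch; all helper names and the similarity threshold are assumptions for illustration, not the released pipeline. The idea is to fuzzy-match paragraph strings from the article XML against text boxes extracted from the rendered PDF and keep confident matches as layout annotations.

```python
# Hypothetical sketch of XML-to-PDF matching for layout annotation.
from difflib import SequenceMatcher

def match_xml_to_pdf(xml_paragraphs, pdf_boxes, threshold=0.9):
    """xml_paragraphs: list of (category, text) pairs from the article XML.
    pdf_boxes: list of (bbox, text) pairs extracted from the rendered PDF.
    Returns one bounding-box annotation per confidently matched paragraph."""
    annotations = []
    for category, para_text in xml_paragraphs:
        best_box, best_score = None, 0.0
        for bbox, box_text in pdf_boxes:
            score = SequenceMatcher(None, para_text, box_text).ratio()
            if score > best_score:
                best_box, best_score = bbox, score
        if best_score >= threshold:  # keep only confident matches
            annotations.append({"category": category, "bbox": best_box})
    return annotations
```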
In addition to document layout, further understanding of document content has been studied, for example in the evaluation of table detection methods @cite_13 @cite_21 . Examples include table detection from document images using heuristics @cite_11 , the vertical arrangement of text blocks @cite_3 , and deep learning methods @cite_9 @cite_2 @cite_0 @cite_12 .
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_3", "@cite_0", "@cite_2", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2321821989", "2787523828", "2150673968", "2444353601" ], "abstract": [ "Table detection is a challenging problem and plays an important role in document layout analysis. In this paper, we propose an effective method to identify the table region from document images. First, the regions of interest (ROIs) are recognized as the table candidates. In each ROI, we locate text components and extract text blocks. After that, we check all text blocks to determine if they are arranged horizontally or vertically and compare the height of each text block with the average height. If the text blocks satisfy a series of rules, the ROI is regarded as a table. Experiments on the ICDAR 2013 dataset show that the results obtained are very encouraging. This proves the effectiveness and superiority of our proposed method.", "Table detection is a crucial step in many document analysis applications as tables are used for presenting essential information to the reader in a structured manner. It is a hard problem due to varying layouts and encodings of the tables. Researchers have proposed numerous techniques for table detection based on layout analysis of documents. Most of these techniques fail to generalize because they rely on hand engineered features which are not robust to layout variations. In this paper, we have presented a deep learning based method for table detection. In the proposed method, document images are first pre-processed. These images are then fed to a Region Proposal Network followed by a fully connected neural network for table detection. The proposed method works with high precision on document images with varying layouts that include documents, research papers, and magazines. We have done our evaluations on publicly available UNLV dataset where it beats Tesseract's state of the art table detection system by a significant margin.", "Table detection is an important task in the field of document analysis. It has been extensively studied since a couple of decades. Various kinds of document mediums are involved, from scanned images to web pages, from plain texts to PDF files. Numerous algorithms published bring up a challenging issue: how to evaluate algorithms in different context. Currently, most work on table detection conducts experiments on their in-house dataset. Even the few sources of online datasets are targeted at image documents only. Moreover, Precision and recall measurement are usual practice in order to account performance based on human evaluation. In this paper, we provide a dataset that is representative, large and most importantly, publicly available. The compatible format of the ground truth makes evaluation independent of document medium. We also propose a set of new measures, implement them, and open the source code. Finally, three existing table detection algorithms are evaluated to demonstrate the reliability of the dataset and metrics.", "Because of the better performance of deep learning on many computer vision tasks, researchers in the area of document analysis and recognition begin to adopt this technique into their work. In this paper, we propose a novel method for table detection in PDF documents based on convolutional neutral networks, one of the most popular deep learning models. 
In the proposed method, some table-like areas are selected first by some loose rules, and then the convolutional networks are built and refined to determine whether the selected areas are tables or not. Besides, the visual features of table areas are directly extracted and utilized through the convolutional networks, while the non-visual information (e.g. characters, rendering instructions) contained in original PDF documents is also taken into consideration to help achieve better recognition results. The primary experimental results show that the approach is effective in table detection." ] }
1908.07801
2969390690
Instance segmentation requires a large number of training samples to achieve satisfactory performance and benefits from proper data augmentation. To enlarge the training set and increase its diversity, previous methods have investigated using data annotations from other domains (e.g. bounding boxes, points) in a weakly supervised mechanism. In this paper, we present a simple, efficient and effective method to augment the training set using the existing instance mask annotations. Exploiting the pixel redundancy of the background, we are able to improve the performance of Mask R-CNN by 1.7 mAP on the COCO dataset and 3.3 mAP on the Pascal VOC dataset by simply introducing random jittering to objects. Furthermore, we propose a location-probability-map-based approach to explore the feasible locations where objects can be placed based on local appearance similarity. With the guidance of such a map, we boost the performance of R101-Mask R-CNN on instance segmentation from 35.7 mAP to 37.9 mAP without modifying the backbone or network structure. Our method is simple to implement and does not increase the computational complexity. It can be integrated into the training pipeline of any instance segmentation model without affecting training and inference efficiency. Our code and models have been released at this https URL
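The random-jittering augmentation can be sketched in a few lines. This is a simplified illustration under assumed conventions (HxWx3 image, binary HxW mask), not the authors' released implementation, and it ignores the paper's location-probability-map refinement and proper background inpainting.

```python
# Simplified sketch of instance-mask jitter augmentation.
import numpy as np

def jitter_instance(image, mask, max_shift=20, rng=np.random):
    """Re-paste one annotated instance at a small random offset."""
    dy, dx = rng.randint(-max_shift, max_shift + 1, size=2)
    shifted_mask = np.roll(mask, (dy, dx), axis=(0, 1))
    shifted_obj = np.roll(image * mask[..., None], (dy, dx), axis=(0, 1))
    out = image.copy()
    out[mask.astype(bool)] = 0  # crude fill at the old location (no inpainting)
    out[shifted_mask.astype(bool)] = shifted_obj[shifted_mask.astype(bool)]
    return out, shifted_mask
```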
Combining instance detection and semantic segmentation, instance segmentation @cite_45 @cite_19 @cite_4 @cite_29 @cite_39 @cite_5 @cite_11 @cite_27 is a much harder problem. Earlier methods either propose segmentation candidates followed by classification @cite_40 , or associate pixels of the semantic segmentation map with different instances @cite_6 . Recently, FCIS @cite_10 proposed the first fully convolutional end-to-end solution to instance segmentation, which predicts position-sensitive channels @cite_41 for instance segmentation. This idea is further developed by @cite_20 , which outperforms competing methods on the COCO dataset @cite_9 . With the help of FPN @cite_24 and a precise pooling scheme named RoIAlign, He et al. @cite_33 proposed a two-step model, Mask R-CNN, that extends the Faster R-CNN framework with a mask head and achieves state-of-the-art results on instance segmentation @cite_42 and pose estimation @cite_12 tasks. Although these methods have reached impressive performance on public datasets, these heavy deep models are hungry for an extremely large amount of training data, which is usually not available in real-world applications. Furthermore, the potential of large datasets is not fully exploited by existing training methods.
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_41", "@cite_29", "@cite_9", "@cite_42", "@cite_6", "@cite_39", "@cite_24", "@cite_19", "@cite_27", "@cite_45", "@cite_40", "@cite_5", "@cite_10", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "2317851288", "2951120635", "2949259132", "2963727650" ], "abstract": [ "Fully convolutional networks (FCNs) have been proven very successful for semantic segmentation, but the FCN outputs are unaware of object instances. In this paper, we develop FCNs that are capable of proposing instance-level segment candidates. In contrast to the previous FCN that generates one score map, our FCN is designed to compute a small set of instance-sensitive score maps, each of which is the outcome of a pixel-wise classifier of a relative position to instances. On top of these instance-sensitive score maps, a simple assembling module is able to output instance candidate at each position. In contrast to the recent DeepMask method for segmenting instances, our method does not have any high-dimensional layer related to the mask resolution, but instead exploits image local coherence for estimating instances. We present competitive results of instance segment proposal on both PASCAL VOC and MS COCO.", "Fully convolutional networks (FCNs) have been proven very successful for semantic segmentation, but the FCN outputs are unaware of object instances. In this paper, we develop FCNs that are capable of proposing instance-level segment candidates. In contrast to the previous FCN that generates one score map, our FCN is designed to compute a small set of instance-sensitive score maps, each of which is the outcome of a pixel-wise classifier of a relative position to instances. On top of these instance-sensitive score maps, a simple assembling module is able to output instance candidate at each position. In contrast to the recent DeepMask method for segmenting instances, our method does not have any high-dimensional layer related to the mask resolution, but instead exploits image local coherence for estimating instances. We present competitive results of instance segment proposal on both PASCAL VOC and MS COCO.", "We present the first fully convolutional end-to-end solution for instance-aware semantic segmentation task. It inherits all the merits of FCNs for semantic segmentation and instance mask proposal. It performs instance mask prediction and classification jointly. The underlying convolutional representation is fully shared between the two sub-tasks, as well as between all regions of interest. The proposed network is highly integrated and achieves state-of-the-art performance in both accuracy and efficiency. It wins the COCO 2016 segmentation competition by a large margin. Code would be released at this https URL .", "Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7 mIoU on PASCAL-Context, 85.9 mIoU on PASCAL VOC 2012. 
Our single model achieves a final score of 0.5567 on the ADE20K test set, which surpasses the winning entry of the COCO-Place Challenge 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for image classification on the CIFAR-10 dataset. Our 14-layer network has achieved an error rate of 3.45%, which is comparable with state-of-the-art approaches with over 10× more layers. The source code for the complete system is publicly available." ] }
1908.07820
2969722339
Multi-Task Learning (MTL) aims at boosting the overall performance of each individual task by leveraging useful information contained in multiple related tasks. It has shown great success in natural language processing (NLP). Currently, a number of MTL architectures and learning mechanisms have been proposed for various NLP tasks. However, there has been no systematic, in-depth exploration and comparison of different MTL architectures and learning mechanisms with respect to their performance. In this paper, we conduct a thorough examination of typical MTL methods on a broad range of representative NLP tasks. Our primary goal is to understand the merits and demerits of existing MTL methods in NLP tasks, and thus to devise new hybrid architectures that combine their strengths.
Multi-task learning with deep neural networks has gained increasing attention within the NLP community over the past decade. @cite_3 and @cite_21 describe most of the existing techniques for multi-task learning in deep neural networks. Generally, existing MTL methods can be categorised into two groups: @cite_29 @cite_10 and @cite_20 @cite_32 .
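The two elided category names presumably correspond to the standard split into hard and soft parameter sharing described in @cite_21. As a hedged illustration of the first category, here is a minimal PyTorch sketch of hard parameter sharing; all sizes and task names are invented.

```python
# Hard parameter sharing: one shared encoder, one output head per task.
import torch.nn as nn

class HardSharingMTL(nn.Module):
    def __init__(self, vocab_size=10000, hidden=128, task_classes=(2, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)  # shared
        self.heads = nn.ModuleList([nn.Linear(hidden, c) for c in task_classes])

    def forward(self, tokens, task_id):
        x = self.embed(tokens)
        _, (h, _) = self.encoder(x)        # h: (1, batch, hidden)
        return self.heads[task_id](h[-1])  # task-specific logits
```

Soft parameter sharing would instead keep one model per task and couple them through a regularizer on their parameters.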
{ "cite_N": [ "@cite_29", "@cite_21", "@cite_32", "@cite_3", "@cite_10", "@cite_20" ], "mid": [ "2951720331", "2624871570", "2900964459", "2966182616" ], "abstract": [ "Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing interest due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data. While it has brought significant improvements in a number of NLP tasks, mixed results have been reported, and little is known about the conditions under which MTL leads to gains in NLP. This paper sheds light on the specific task relations that can lead to gains from MTL models over single-task setups.", "Multi-task learning (MTL) has led to successes in many applications of machine learning, from natural language processing and speech recognition to computer vision and drug discovery. This article aims to give a general overview of MTL, particularly in deep neural networks. It introduces the two most common methods for MTL in Deep Learning, gives an overview of the literature, and discusses recent advances. In particular, it seeks to help ML practitioners apply MTL by shedding light on how MTL works and providing guidelines for choosing appropriate auxiliary tasks.", "Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)--(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15 average error reductions over common approaches to MTL.", "Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)–(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15 average error reductions over common approaches to MTL." ] }
1908.07752
2969591045
Clinicians and other analysts working with healthcare data are in need of better support to cope with large and complex data. While an increasing number of visual analytics environments integrate explicit domain knowledge as a means to deliver a precise representation of the available data, theoretical work so far has focused on the role of knowledge in the visual analytics process. There has been little discussion about how such explicit domain knowledge can be structured in a generalized framework. This paper collects desiderata for such a structural framework, proposes how to address these desiderata based on the model of linked data, and demonstrates the applicability in a visual analytics environment for physiotherapy.
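As a minimal sketch of the linked-data idea, assuming rdflib and an invented physiotherapy namespace (the paper's actual vocabulary is not reproduced here), one unit of explicit domain knowledge can be stored and queried as RDF triples:

```python
# Explicit domain knowledge as linked data; namespace and properties invented.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/physio#")
g = Graph()
g.add((EX.SquatExercise, EX.targets, EX.KneeExtension))
g.add((EX.SquatExercise, EX.recommendedRepetitions, Literal(10)))

# A VA environment could later retrieve such knowledge, e.g. via SPARQL:
for row in g.query(
    "SELECT ?f WHERE { ?e <http://example.org/physio#targets> ?f }"
):
    print(row.f)
```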
Since ``Illuminating the Path'' [p. 35] thomas_2005_illuminating , incorporating ``prior domain knowledge'' and ``build[ing] knowledge structures'' has been on VA's agenda. This is underscored by the pivotal position of knowledge in the VA process model by @cite_40 @cite_36 and in further process models such as the knowledge generation model by @cite_47 and the visualization model by van Wijk @cite_50 . However, these process models do not differentiate between knowledge in the human space and in the machine space. Based on @cite_17 , Federico et al. @cite_4 delineate tacit knowledge, which is exclusively available to human reasoning, from explicit knowledge, which can be leveraged by the VA environment. How explicit knowledge is integrated into the VA process is formalized in several recent models by @cite_17 , @cite_20 , and Federico et al. @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_36", "@cite_40", "@cite_50", "@cite_47", "@cite_20", "@cite_17" ], "mid": [ "2905763413", "1594602328", "2162096170", "2785325870" ], "abstract": [ "Visual Analytics (VA) aims to combine the strengths of humans and computers for effective data analysis. In this endeavor, humans’ tacit knowledge from prior experience is an important asset that can be leveraged by both human and computer to improve the analytic process. While VA environments are starting to include features to formalize, store, and utilize such knowledge, the mechanisms and degree in which these environments integrate explicit knowledge varies widely. Additionally, this important class of VA environments has never been elaborated on by existing work on VA theory. This paper proposes a conceptual model of Knowledge-assisted VA conceptually grounded on the visualization model by van Wijk. We apply the model to describe various examples of knowledge-assisted VA from the literature and elaborate on three of them in finer detail. Moreover, we illustrate the utilization of the model to compare different design alternatives and to evaluate existing approaches with respect to their use of knowledge. Finally, the model can inspire designers to generate novel VA environments using explicit knowledge effectively.", "The primary goal of Visual Analytics (VA) is the close intertwinedness of human reasoning and automated methods. An important task for this goal is formulating a description for such a VA process. We propose the design of a VA process description that uses the inherent structure contained in time-oriented data as a way to improve the integration of human reasoning. This structure can, for example, be seen in the calendar aspect of time being composed of smaller granularities, like years and seasons. Domain experts strongly consider this structure in their reasoning, so VA needs to consider it, too.", "We introduce a novel approach to incorporating domain knowledge into Support Vector Machines to improve their example efficiency. Domain knowledge is used in an Explanation Based Learning fashion to build justifications or explanations for why the training examples are assigned their given class labels. Explanations bias the large margin classifier through the interaction of training examples and domain knowledge. We develop a new learning algorithm for this Explanation-Augmented SVM (EA-SVM). It naturally extends to imperfect knowledge, a stumbling block to conventional EBL. Experimental results confirm desirable properties predicted by the analysis and demonstrate the approach on three domains.", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. 
We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL ." ] }
1908.07752
2969591045
Clinicians and other analysts working with healthcare data are in need of better support to cope with large and complex data. While an increasing number of visual analytics environments integrate explicit domain knowledge as a means to deliver a precise representation of the available data, theoretical work so far has focused on the role of knowledge in the visual analytics process. There has been little discussion about how such explicit domain knowledge can be structured in a generalized framework. This paper collects desiderata for such a structural framework, proposes how to address these desiderata based on the model of linked data, and demonstrates the applicability in a visual analytics environment for physiotherapy.
Beyond the role of knowledge in the VA process, only a few works discuss the content and structure of explicit knowledge on a general level. @cite_7 conceptualize domain knowledge as a model of a part of reality and provide definitions for different types of models, but they do not specify the form and medium in which the model is represented. @cite_33 formalize data descriptors that include domain knowledge about data. Tominski @cite_26 captures domain knowledge as event types that are specified using predicate logic. Lammarsch et al. @cite_43 propose a data structure for knowledge about temporal patterns that leverages the structure of time. The generation of adapted visualizations based on ontological datasets and the specification of ontological mappings are treated by @cite_16 . To this end, they use the COGZ tool, which converts ontological mappings into software transformation rules so that they describe a model fitting the visualization. A similar approach to adapted visualizations is followed by @cite_53 , who describe a general system pipeline that combines ontology mapping and probabilistic reasoning techniques for the automated generation of visualizations of domain-specific data from the web. However, none of these approaches aims for a general framework.
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_7", "@cite_53", "@cite_43", "@cite_16" ], "mid": [ "2042724135", "2799122863", "2963481481", "2798381792" ], "abstract": [ "In this paper, we propose a novel approach for automatic generation of visualizations from domain-specific data available on the web. We describe a general system pipeline that combines ontology mapping and probabilistic reasoning techniques. With this approach, a web page is first mapped to a Domain Ontology, which stores the semantics of a specific subject domain (e.g., music charts). The Domain Ontology is then mapped to one or more Visual Representation Ontologies, each of which captures the semantics of a visualization style (e.g., tree maps). To enable the mapping between these two ontologies, we establish a Semantic Bridging Ontology, which specifies the appropriateness of each semantic bridge. Finally each Visual Representation Ontology is mapped to a visualization using an external visualization toolkit. Using this approach, we have developed a prototype software tool, SemViz, as a realisation of this approach. By interfacing its Visual Representation Ontologies with public domain software such as ILOG Discovery and Prefuse, SemViz is able to generate appropriate visualizations automatically from a large collection of popular web pages for music charts without prior knowledge of these web pages.", "The recent advances in deep neural networks have convincingly demonstrated high capability in learning vision models on large datasets. Nevertheless, collecting expert labeled datasets especially with pixel-level annotations is an extremely expensive process. An appealing alternative is to render synthetic data (e.g., computer games) and generate ground truth automatically. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. In this paper, we facilitate this issue from the perspectives of both visual appearance-level and representation-level domain adaptation. The former adapts source-domain images to appear as if drawn from the \"style\" in the target domain and the latter attempts to learn domain-invariant representations. Specifically, we present Fully Convolutional Adaptation Networks (FCAN), a novel deep architecture for semantic segmentation which combines Appearance Adaptation Networks (AAN) and Representation Adaptation Networks (RAN). AAN learns a transformation from one domain to the other in the pixel space and RAN is optimized in an adversarial learning manner to maximally fool the domain discriminator with the learnt source and target representations. Extensive experiments are conducted on the transfer from GTA5 (game videos) to Cityscapes (urban street scenes) on semantic segmentation and our proposal achieves superior results when comparing to state-of-the-art unsupervised adaptation techniques. More remarkably, we obtain a new record: mIoU of 47.5 on BDDS (drive-cam videos) in an unsupervised setting.", "The recent advances in deep neural networks have convincingly demonstrated high capability in learning vision models on large datasets. Nevertheless, collecting expert labeled datasets especially with pixel-level annotations is an extremely expensive process. An appealing alternative is to render synthetic data (e.g., computer games) and generate ground truth automatically. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. 
In this paper, we facilitate this issue from the perspectives of both visual appearance-level and representation-level domain adaptation. The former adapts source-domain images to appear as if drawn from the \"style\" in the target domain and the latter attempts to learn domain-invariant representations. Specifically, we present Fully Convolutional Adaptation Networks (FCAN), a novel deep architecture for semantic segmentation which combines Appearance Adaptation Networks (AAN) and Representation Adaptation Networks (RAN). AAN learns a transformation from one domain to the other in the pixel space and RAN is optimized in an adversarial learning manner to maximally fool the domain discriminator with the learnt source and target representations. Extensive experiments are conducted on the transfer from GTA5 (game videos) to Cityscapes (urban street scenes) on semantic segmentation and our proposal achieves superior results when comparing to state-of-the-art unsupervised adaptation techniques. More remarkably, we obtain a new record: mIoU of 47.5 on BDDS (drive-cam videos) in an unsupervised setting.", "Transferring the knowledge learned from large scale datasets (e.g., ImageNet) via fine-tuning offers an effective solution for domain-specific fine-grained visual categorization (FGVC) tasks (e.g., recognizing bird species or car make & model). In such scenarios, data annotation often calls for specialized domain knowledge and thus is difficult to scale. In this work, we first tackle a problem in large scale FGVC. Our method won first place in iNaturalist 2017 large scale species classification challenge. Central to the success of our approach is a training scheme that uses higher image resolution and deals with the long-tailed distribution of training data. Next, we study transfer learning via fine-tuning from large scale datasets to small scale, domain-specific FGVC datasets. We propose a measure to estimate domain similarity via Earth Mover's Distance and demonstrate that transfer learning benefits from pre-training on a source domain that is similar to the target domain by this measure. Our proposed transfer learning outperforms ImageNet pre-training and obtains state-of-the-art results on multiple commonly used FGVC datasets." ] }
1908.07752
2969591045
Clinicians and other analysts working with healthcare data are in need of better support to cope with large and complex data. While an increasing number of visual analytics environments integrate explicit domain knowledge as a means to deliver a precise representation of the available data, theoretical work so far has focused on the role of knowledge in the visual analytics process. There has been little discussion about how such explicit domain knowledge can be structured in a generalized framework. This paper collects desiderata for such a structural framework, proposes how to address these desiderata based on the model of linked data, and demonstrates the applicability in a visual analytics environment for physiotherapy.
In summary, it can be seen that most of the discussed approaches cover how explicit domain knowledge can be exploited to enhance visual representation and data analysis; some approaches provide methods to generate explicit knowledge. Additionally, most of the currently implemented knowledge-assisted VA environments focus on the integration of specific domain knowledge, which can only be used for precisely defined analysis tasks. In general, explicit knowledge is now a first-class artifact in the VA process, but its form and structure are left unspecified. None of the presented approaches provides a structural framework for describing and storing explicit knowledge in VA environments. Thus, a structural framework is needed; combined with the theoretical process model by Federico et al. @cite_4 , it would provide valuable generative guidelines for the development of novel knowledge-assisted VA environments.
{ "cite_N": [ "@cite_4" ], "mid": [ "2905763413", "1594602328", "2488113179", "2162096170" ], "abstract": [ "Visual Analytics (VA) aims to combine the strengths of humans and computers for effective data analysis. In this endeavor, humans’ tacit knowledge from prior experience is an important asset that can be leveraged by both human and computer to improve the analytic process. While VA environments are starting to include features to formalize, store, and utilize such knowledge, the mechanisms and degree in which these environments integrate explicit knowledge varies widely. Additionally, this important class of VA environments has never been elaborated on by existing work on VA theory. This paper proposes a conceptual model of Knowledge-assisted VA conceptually grounded on the visualization model by van Wijk. We apply the model to describe various examples of knowledge-assisted VA from the literature and elaborate on three of them in finer detail. Moreover, we illustrate the utilization of the model to compare different design alternatives and to evaluate existing approaches with respect to their use of knowledge. Finally, the model can inspire designers to generate novel VA environments using explicit knowledge effectively.", "The primary goal of Visual Analytics (VA) is the close intertwinedness of human reasoning and automated methods. An important task for this goal is formulating a description for such a VA process. We propose the design of a VA process description that uses the inherent structure contained in time-oriented data as a way to improve the integration of human reasoning. This structure can, for example, be seen in the calendar aspect of time being composed of smaller granularities, like years and seasons. Domain experts strongly consider this structure in their reasoning, so VA needs to consider it, too.", "Visual analytics (VA) is typically applied in scenarios where complex data has to be analyzed. Unfortunately, there is a natural correlation between the complexity of the data and the complexity of the tools to study them. An adverse effect of complicated tools is that analytical goals are more difficult to reach. Therefore, it makes sense to consider methods that guide or assist users in the visual analysis process. Several such methods already exist in the literature, yet we are lacking a general model that facilitates in-depth reasoning about guidance. We establish such a model by extending van Wijk's model of visualization with the fundamental components of guidance. Guidance is defined as a process that gradually narrows the gap that hinders effective continuation of the data analysis. We describe diverse inputs based on which guidance can be generated and discuss different degrees of guidance and means to incorporate guidance into VA tools. We use existing guidance approaches from the literature to illustrate the various aspects of our model. As a conclusion, we identify research challenges and suggest directions for future studies. With our work we take a necessary step to pave the way to a systematic development of guidance techniques that effectively support users in the context of VA.", "We introduce a novel approach to incorporating domain knowledge into Support Vector Machines to improve their example efficiency. Domain knowledge is used in an Explanation Based Learning fashion to build justifications or explanations for why the training examples are assigned their given class labels. 
Explanations bias the large margin classifier through the interaction of training examples and domain knowledge. We develop a new learning algorithm for this Explanation-Augmented SVM (EA-SVM). It naturally extends to imperfect knowledge, a stumbling block to conventional EBL. Experimental results confirm desirable properties predicted by the analysis and demonstrate the approach on three domains." ] }
1908.07516
2969539381
Neural network image reconstruction directly from measurement data is a growing field of research, but until now has been limited to producing small (e.g. 128x128) 2D images by the large memory requirements of the previously suggested networks. In order to facilitate further research with direct reconstruction, we developed a more efficient network capable of 3D reconstruction of Radon encoded data with a relatively large image matrix (e.g. 400x400). Our proposed network is able to produce image quality comparable to the benchmark Ordered Subsets Expectation Maximization (OSEM) algorithm. We address the most memory intensive aspect of transforming the data from sinogram space to image space through a specially designed Radon inversion layer. We insert this layer between an initial network segment designed to encode the sinogram input and an output segment designed to refine and scale the initial image estimate to produce the final image. We demonstrate 3D reconstructions comparable to OSEM for 1, 4, 8 and 16 slices with no modifications to the network's architecture, capacity or hyper-parameters on a data set of simulated PET whole-body scans. When batch operations are considered, this network can reconstruct an entire PET whole-body volume in a single pass or about one second. Although results in this paper are on PET data, the proposed methods would be equally applicable to X-ray CT or any other Radon encoded measurement data.
The terms deep learning and image reconstruction are often used in conjunction to describe a significant amount of recent research @cite_24 that most often falls into one of two categories: 1) combining deep learning with an analytical or statistical method, such as using a deep learning prior @cite_13 or a regularization term @cite_16 , or 2) using neural networks as a nonlinear filter for denoising @cite_3 @cite_13 , artifact mitigation @cite_22 , and other post-reconstruction tasks. Neural network image formation directly from measurement data is, by contrast, significantly less common.
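As a hedged illustration of the second category, a post-reconstruction denoiser is simply an image-to-image network applied after a conventional reconstruction; the architecture below is invented for illustration.

```python
# Sketch of category 2: a small residual CNN as a nonlinear denoising filter.
import torch.nn as nn

class DenoiseCNN(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, noisy_recon):
        return noisy_recon + self.net(noisy_recon)  # predict the residual
```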
{ "cite_N": [ "@cite_22", "@cite_3", "@cite_24", "@cite_16", "@cite_13" ], "mid": [ "2740494144", "2949634581", "2300779272", "2963229033" ], "abstract": [ "In this paper, we propose a deep CNN to tackle the image restoration problem by learning the structured residual. Previous deep learning based methods directly learn the mapping from corrupted images to clean images, and may suffer from the gradient exploding vanishing problems of deep neural networks. We propose to address the image restoration problem by learning the structured details and recovering the latent clean image together, from the shared information between the corrupted image and the latent image. In addition, instead of learning the pure difference (corruption), we propose to add a \"residual formatting layer\" to format the residual to structured information, which allows the network to converge faster and boosts the performance. Furthermore, we propose a cross-level loss net to ensure both pixel-level accuracy and semantic-level visual quality. Evaluations on public datasets show that the proposed method outperforms existing approaches quantitatively and qualitatively.", "A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manu- ally labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth predic- tion, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photomet- ric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives com- parable performance to that of the state of art supervised methods for single view depth estimation.", "A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground-truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. 
We show that our network trained on less than half of the KITTI dataset gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation.", "Deep learning networks have shown state-of-the-art performance in many image reconstruction problems. However, it is not well understood what properties of representation and learning may improve the generalization ability of the network. In this paper, we propose that the generalization ability of an encoder-decoder network for inverse reconstruction can be improved in two means. First, drawing from analytical learning theory, we theoretically show that a stochastic latent space will improve the ability of a network to generalize to test data outside the training distribution. Second, following the information bottleneck principle, we show that a latent representation minimally informative of the input data will help a network generalize to unseen input variations that are irrelevant to the output reconstruction. Therefore, we present a sequence image reconstruction network optimized by a variational approximation of the information bottleneck principle with stochastic latent space. In the application setting of reconstructing the sequence of cardiac transmembrane potential from body-surface potential, we assess the two types of generalization abilities of the presented network against its deterministic counterpart. The results demonstrate that the generalization ability of an inverse reconstruction network can be improved by stochasticity as well as the information bottleneck." ] }
1908.07516
2969539381
Neural network image reconstruction directly from measurement data is a growing field of research, but until now has been limited to producing small (e.g. 128x128) 2D images by the large memory requirements of the previously suggested networks. In order to facilitate further research with direct reconstruction, we developed a more efficient network capable of 3D reconstruction of Radon encoded data with a relatively large image matrix (e.g. 400x400). Our proposed network is able to produce image quality comparable to the benchmark Ordered Subsets Expectation Maximization (OSEM) algorithm. We address the most memory intensive aspect of transforming the data from sinogram space to image space through a specially designed Radon inversion layer. We insert this layer between an initial network segment designed to encode the sinogram input and an output segment designed to refine and scale the initial image estimate to produce the final image. We demonstrate 3D reconstructions comparable to OSEM for 1, 4, 8 and 16 slices with no modifications to the network's architecture, capacity or hyper-parameters on a data set of simulated PET whole-body scans. When batch operations are considered, this network can reconstruct an entire PET whole-body volume in a single pass or about one second. Although results in this paper are on PET data, the proposed methods would be equally applicable to X-ray CT or any other Radon encoded measurement data.
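One plausible (assumed, not the paper's exact) realization of such a Radon inversion layer is a fixed, precomputed sparse backprojection: because the operator is known from the scanner geometry, it needs no learned parameters, which is what keeps the domain transform memory-tractable. A simplified parallel-beam sketch:

```python
# Fixed backprojection as a precomputed sparse matrix (simplified geometry).
import numpy as np
import scipy.sparse as sp

def backprojection_matrix(n=64, n_angles=90):
    """Rows index image pixels (n*n); columns index sinogram bins."""
    n_det = n
    rows, cols = [], []
    ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    for a, theta in enumerate(np.linspace(0, np.pi, n_angles, endpoint=False)):
        # detector bin that each pixel projects onto at this angle
        t = xs * np.cos(theta) + ys * np.sin(theta) + (n_det - 1) / 2.0
        bins = np.clip(np.round(t).astype(int), 0, n_det - 1)
        rows.extend(range(n * n))
        cols.extend((a * n_det + bins).ravel())
    data = np.ones(len(rows)) / n_angles
    return sp.csr_matrix((data, (rows, cols)), shape=(n * n, n_angles * n_det))

BP = backprojection_matrix()
# image_estimate = (BP @ sinogram.ravel()).reshape(64, 64)
```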
Early research in this area was based on networks of fully connected multilayer perceptrons @cite_11 @cite_12 @cite_17 @cite_14 that yielded promising results, but only for very low-resolution reconstructions. More recent efforts have capitalized on the growth of computational resources, especially GPUs, and developed deep neural networks capable of direct reconstruction. The AUTOMAP network @cite_15 is one recent example that utilizes multiple fully connected layers followed by a sparse encoder-decoder to learn a mapping manifold from measurement space to image space. This network is capable of learning a general solution to the reconstruction inverse problem, but this generality causes inefficiency, requiring a high number of parameters and limiting the application to small 2D images (128x128). DeepPET @cite_27 is another direct reconstruction neural network with an encoder-decoder architecture that forgoes any fully connected layers. It utilizes convolutional layers to encode the sinogram input (288x269) into a higher-dimensional feature representation (1024x18x17) that is then decoded with convolutional layers into a 2D image (128x128). While both of these novel methods are significant advancements in direct neural network reconstruction, memory requirements severely limit the size of the images they can produce.
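The memory limitation of fully connected mappings is easy to quantify. Taking DeepPET's sinogram size as the input and a single dense layer as the domain transform (a back-of-envelope assumption, not either paper's exact layer):

```python
# Parameter count of a single dense sinogram-to-image mapping layer.
sino = 288 * 269         # sinogram bins
print(sino * 128 * 128)  # ~1.27e9 weights (~5 GB in float32)
print(sino * 400 * 400)  # ~1.24e10 weights (~50 GB in float32)
```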
{ "cite_N": [ "@cite_14", "@cite_11", "@cite_27", "@cite_15", "@cite_12", "@cite_17" ], "mid": [ "2949634581", "2300779272", "2963633304", "2153378748" ], "abstract": [ "A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manu- ally labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth predic- tion, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photomet- ric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives com- parable performance to that of the state of art supervised methods for single view depth estimation.", "A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground-truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation.", "Abstract The purpose of this research was to implement a deep learning network to overcome two of the major bottlenecks in improved image reconstruction for clinical positron emission tomography (PET). These are the lack of an automated means for the optimization of advanced image reconstruction algorithms, and the computational expense associated with these state-of-the art methods. We thus present a novel end-to-end PET image reconstruction technique, called DeepPET, based on a deep convolutional encoder–decoder network, which takes PET sinogram data as input and directly and quickly outputs high quality, quantitative PET images. 
Using simulated data derived from a whole-body digital phantom, we randomly sampled the configurable parameters to generate realistic images, which were each augmented to a total of more than 291,000 reference images. Realistic PET acquisitions of these images were simulated, resulting in noisy sinogram data, used for training, validation, and testing the DeepPET network. We demonstrated that DeepPET generates higher quality images compared to conventional techniques, in terms of relative root mean squared error (11%/53% lower than ordered subset expectation maximization (OSEM)/filtered back-projection (FBP)), structural similarity index (1%/11% higher than OSEM/FBP), and peak signal-to-noise ratio (1.1/3.8 dB higher than OSEM/FBP). In addition, we show that DeepPET reconstructs images 108 and 3 times faster than OSEM and FBP, respectively. Finally, DeepPET was successfully applied to real clinical data. This study shows that an end-to-end encoder–decoder network can produce high quality PET images at a fraction of the time compared to conventional methods.", "Feedforward multilayer networks trained by supervised learning have recently demonstrated state-of-the-art performance on image labeling problems such as boundary prediction and scene parsing. As even very low error rates can limit practical usage of such systems, methods that perform closer to human accuracy remain desirable. In this work, we propose a new type of network with the following properties that address what we hypothesize to be limiting aspects of existing methods: (1) a 'wide' structure with thousands of features, (2) a large field of view, (3) recursive iterations that exploit statistical dependencies in label space, and (4) a parallelizable architecture that can be trained in a fraction of the time compared to benchmark multilayer convolutional networks. For the specific image labeling problem of boundary prediction, we also introduce a novel example weighting algorithm that improves segmentation accuracy. Experiments in the challenging domain of connectomic reconstruction of neural circuitry from 3d electron microscopy data show that these "Deep And Wide Multiscale Recursive" (DAWMR) networks lead to new levels of image labeling performance. The highest performing architecture has twelve layers, interwoven supervised and unsupervised stages, and uses an input field of view of 157,464 voxels ( @math ) to make a prediction at each image location. We present an associated open source software package that enables the simple and flexible creation of DAWMR networks." ] }
1908.07705
2969301397
Dialogue state tracking (DST) is an essential component of task-oriented dialogue systems, which estimates user goals at every dialogue turn. However, most previous approaches suffer from the following problems. Many discriminative models, especially end-to-end (E2E) models, struggle to extract unknown values that are not in the candidate ontology; previous generative models, which can extract unknown values from utterances, degrade performance because they ignore the semantic information of the pre-defined ontology. In addition, previous generative models usually need a hand-crafted list to normalize the generated values. How to integrate the semantic information of the pre-defined ontology and the dialogue text (heterogeneous texts) to generate unknown values and improve performance remains a severe challenge. In this paper, we propose a Copy-Enhanced Heterogeneous Information Learning model with multiple encoder-decoders for DST (CEDST), which can effectively generate all possible values, including unknown values, by copying values from heterogeneous texts. Meanwhile, CEDST can effectively decompose the large state space into several small state spaces through its multi-encoder, and employ its multi-decoder to make full use of the reduced spaces to generate values. The multi-encoder-decoder architecture can significantly improve performance. Experiments show that CEDST achieves state-of-the-art results on two datasets and on our constructed datasets with many unknown values.
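CEDST's copying can be understood through the generic pointer-generator-style copy mechanism sketched below; this is an illustrative simplification under assumed tensor shapes, not the paper's exact multi-encoder-decoder formulation. The output distribution mixes a vocabulary softmax with attention weights over source tokens, so values seen only in the heterogeneous texts remain reachable.

```python
# Generic copy mechanism (pointer-generator style), illustrative only.
import torch

def copy_distribution(vocab_logits, attn_weights, src_token_ids, p_gen):
    """vocab_logits: (batch, vocab); attn_weights: (batch, src_len), rows sum
    to 1; src_token_ids: (batch, src_len) vocabulary ids of the source tokens;
    p_gen: (batch, 1) probability of generating from the vocabulary."""
    p_vocab = torch.softmax(vocab_logits, dim=-1)
    dist = p_gen * p_vocab
    # scatter the remaining probability mass onto the source-token ids
    dist = dist.scatter_add(1, src_token_ids, (1 - p_gen) * attn_weights)
    return dist  # tokens appearing only in the source can still be emitted
```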
As far as we know, @cite_4 is the only work using a discriminative model to handle the dynamic and unbounded value set. @cite_4 represents dialogue states by candidate sets derived from the dialogue and knowledge, then scores values in the candidate set with binary classifiers. Although the sophisticated generation strategy of the candidate set allows the model to extract unknown values, @cite_4 needs a separate SLU module and may therefore propagate errors.
{ "cite_N": [ "@cite_4" ], "mid": [ "2952030765", "2964268978", "2523469089", "2806935606" ], "abstract": [ "This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator. The generator aims to generate sentences which are hard to be discriminated from human-translated sentences (i.e., the golden target sentences), And the discriminator makes efforts to discriminate the machine-generated sentences from human-translated ones. The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-of-the-art Transformer on English-German and Chinese-English translation tasks.", "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.", "As a new way of training generative models, Generative Adversarial Nets (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is non-trivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. 
The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.", "We propose an adversarial learning approach for generating multi-turn dialogue responses. Our proposed framework, hredGAN, is based on conditional generative adversarial networks (GANs). The GAN's generator is a modified hierarchical recurrent encoder-decoder network (HRED) and the discriminator is a word-level bidirectional RNN that shares context and word embeddings with the generator. During inference, noise samples conditioned on the dialogue history are used to perturb the generator's latent space to generate several possible responses. The final response is the one ranked best by the discriminator. The hredGAN shows improved performance over existing methods: (1) it generalizes better than networks trained using only the log-likelihood criterion, and (2) it generates longer, more informative and more diverse responses with high utterance and topic relevance even with limited training data. This improvement is demonstrated on the Movie triples and Ubuntu dialogue datasets using both automatic and human evaluations." ] }
1908.07674
2969763518
A significantly low-cost and tractable progressive learning approach is proposed and discussed for efficient spatiotemporal monitoring of a completely unknown, two-dimensional correlated signal distribution in a localized wireless sensor field. The spatial distribution is compressed into a number of its contour lines, and only those sensors whose observations lie within a @math margin of the contour levels report to the information fusion center (IFC). The proposed algorithm progressively finds the model parameters over iterations, using extrapolation in curve fitting and a stochastic gradient method for spatial monitoring. The IFC tracks the signal variations over time using these parameters. The monitoring performance and the cost of the proposed algorithm are discussed in this letter.
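As a concrete reading of the reporting rule in this abstract, the following minimal Python sketch selects the sensors that would report to the IFC. The Gaussian field, sensor count, contour levels, and margin are all illustrative assumptions, not parameters taken from the paper.

```python
# Minimal sketch of the contour-margin reporting rule: only sensors whose
# observations fall within an epsilon margin of one of the contour levels
# report to the information fusion center (IFC).
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 1.0, size=(500, 2))        # sensor locations
signal = np.exp(-10 * ((positions - 0.5) ** 2).sum(1))  # assumed 2-D field
levels = np.array([0.2, 0.4, 0.6, 0.8])                 # contour levels
eps = 0.02                                              # reporting margin

# A sensor reports iff its reading is within eps of some contour level.
reports = np.abs(signal[:, None] - levels[None, :]).min(axis=1) <= eps
print(f"{reports.sum()} of {len(signal)} sensors report to the IFC")
```

Only a small fraction of the field reports under this rule, which is the source of the communication savings the abstract claims.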
Contour line detection in wireless sensor networks, which is the first step in modeling the spatial distribution, has been addressed in several studies, including @cite_13 @cite_31 @cite_1 @cite_17 @cite_18 @cite_15 @cite_26 @cite_30 @cite_12 @cite_20 @cite_16 . Most of these works target distributed contour detection, which relies on collaboration among sensors to detect the contour lines. In this letter, we propose a cost-efficient centralized algorithm based on the approach proposed in @cite_13 .
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_26", "@cite_15", "@cite_1", "@cite_16", "@cite_31", "@cite_20", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2125456263", "2136684675", "2125380743", "2085176360" ], "abstract": [ "An algorithm to extract contour lines using wireless sensor networks is proposed for environmental monitoring in this work. In contrast to previous work on edge detection that is primarily concerned with the region of a certain phenomenon, contour lines offer more detailed information about the underlying phenomenon such as signal amplitude, density and source location. A distributed algorithm to extract the contour line information from local measurements is developed so that the phenomenon can be monitored in the basestation without demanding excessive raw data transmission. Simulation results are provided to demonstrate the efficiency of the proposed algorithm.", "An algorithm to extract contour lines using wireless sensor networks is proposed in this work. In con- trast with previous work on edge detection that is primarily concerned with the region of a certain phenomenon, contour lines offer more detailed information about the underlying phenomenon such as signal's amplitude, density and source location. A distributed algorithm to extract the contour lines information from local measurements is developed so that the phenomenon can be monitored in the basestation without demanding excessive raw data transmission. Sim- ulation results are provided to demonstrate the efficiency of the proposed algorithm. Furthermore, a thorough per- formance analysis is conducted to understand the effects of the sensor density and the background noise power on the performance of the system.", "A robust filter-based approach is proposed for wireless sensor networks for detecting contours of a signal distribution over a 2-dimensional region. The motivation for contour detection is derived from applications where the spatial distribution of a signal (such as temperature, soil moisture level, etc.) is to be determined over a large region with minimum communication cost. The proposed scheme applies multi-level quantization to the sensor signal values to artificially create an edge and then applies spatial filtering for edge detection. The spatial filter is localized and is based on an adaptation of the Prewitt filter used in image processing. Appropriate mechanisms are introduced that minimizes the cost for communication required for collaboration. Simulation results are presented to show the error performance of the proposed contour detection scheme and the associated communication cost (single-hop communications with immediate neighborhood in average) in the network.", "This paper presents algorithms for efficiently detecting the variation of a distributed signal over space and time using large scale wireless sensor networks. The proposed algorithms use contours for estimating the spatial distribution of a signal. A contour tracking algorithm is proposed to efficiently monitor the variations of the contours with time. Use of contours reduces the communication cost by reducing the participation of sensor nodes for the monitoring tasks. The proposed schemes use multi-sensor collaboration techniques and non-uniform contour levels to reduce the error in reconstructing the signal distribution. Results from computer simulations are presented to demonstrate the performance of the proposed schemes." ] }
1908.07674
2969763518
A significantly low-cost and tractable progressive learning approach is proposed and discussed for efficient spatiotemporal monitoring of a completely unknown, two-dimensional correlated signal distribution in a localized wireless sensor field. The spatial distribution is compressed into a number of its contour lines, and only those sensors whose observations lie within a @math margin of the contour levels report to the information fusion center (IFC). The proposed algorithm progressively finds the model parameters over iterations, using extrapolation in curve fitting and a stochastic gradient method for spatial monitoring. The IFC tracks the signal variations over time using these parameters. The monitoring performance and the cost of the proposed algorithm are discussed in this letter.
Spatial modeling of signal distributions using contour lines has been addressed in @cite_13 @cite_31 @cite_21 . Modeling the spatial distribution with uniformly spaced contour levels and tracking their variation using time-series analysis in the sensors was studied in @cite_21 . The use of non-uniformly spaced contour lines was first reported in @cite_31 , which assumed the probability density function (pdf) of the signal strength is known and used it to calculate the optimal and sub-optimal contour levels. An iterative algorithm was proposed in @cite_13 to extract the pdf of the signal strength at low cost for spatial monitoring of the signal distribution.
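Since the exact level-selection method is not named above, the sketch below illustrates the general pdf-driven idea with a generic Lloyd-style iteration: given an assumed signal pdf, the levels migrate toward regions of high probability mass, producing the non-uniform spacing the cited works exploit. This is an illustration of the concept under stated assumptions, not a reconstruction of the cited algorithms.

```python
# Illustrative Lloyd-style iteration for choosing non-uniform contour levels
# from an assumed signal pdf. Both the pdf and the number of levels are
# illustrative assumptions.
import numpy as np

x = np.linspace(0.0, 1.0, 10_001)
pdf = np.exp(-0.5 * ((x - 0.3) / 0.1) ** 2)   # assumed (unnormalized) pdf

levels = np.linspace(0.1, 0.9, 4)             # initial contour levels
for _ in range(100):
    # Decision boundaries sit midway between adjacent levels.
    edges = np.concatenate(([x[0]], (levels[:-1] + levels[1:]) / 2, [x[-1]]))
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        m = (x >= lo) & (x <= hi)
        # Each level moves to the pdf centroid of its own cell.
        levels[i] = (x[m] * pdf[m]).sum() / pdf[m].sum()
print(np.round(levels, 3))  # levels crowd where the signal pdf has most mass
```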
{ "cite_N": [ "@cite_31", "@cite_21", "@cite_13" ], "mid": [ "2085176360", "2571527823", "2953338282", "2136684675" ], "abstract": [ "This paper presents algorithms for efficiently detecting the variation of a distributed signal over space and time using large scale wireless sensor networks. The proposed algorithms use contours for estimating the spatial distribution of a signal. A contour tracking algorithm is proposed to efficiently monitor the variations of the contours with time. Use of contours reduces the communication cost by reducing the participation of sensor nodes for the monitoring tasks. The proposed schemes use multi-sensor collaboration techniques and non-uniform contour levels to reduce the error in reconstructing the signal distribution. Results from computer simulations are presented to demonstrate the performance of the proposed schemes.", "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala [30], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of nonzero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Renyi information dimension of the signal, d(pX). More precisely, for a sequence of signals of diverging dimension n whose empirical distribution converges to pX, reconstruction is with high probability successful from d(pX) n+o(n) measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension n and k(n) nonzero entries, this implies reconstruction from k(n)+o(n) measurements. For “discrete” signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from o(n) measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal pX.", "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by KrzakalaEtAl , message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of non-zero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate @math exceeds the (upper) R 'enyi information dimension of the signal, @math . More precisely, for a sequence of signals of diverging dimension @math whose empirical distribution converges to @math , reconstruction is with high probability successful from @math measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension @math and @math non-zero entries, this implies reconstruction from @math measurements. For discrete' signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from @math measurements. 
The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal @math .", "An algorithm to extract contour lines using wireless sensor networks is proposed in this work. In con- trast with previous work on edge detection that is primarily concerned with the region of a certain phenomenon, contour lines offer more detailed information about the underlying phenomenon such as signal's amplitude, density and source location. A distributed algorithm to extract the contour lines information from local measurements is developed so that the phenomenon can be monitored in the basestation without demanding excessive raw data transmission. Sim- ulation results are provided to demonstrate the efficiency of the proposed algorithm. Furthermore, a thorough per- formance analysis is conducted to understand the effects of the sensor density and the background noise power on the performance of the system." ] }
1908.07674
2969763518
A significantly low-cost and tractable progressive learning approach is proposed and discussed for efficient spatiotemporal monitoring of a completely unknown, two-dimensional correlated signal distribution in a localized wireless sensor field. The spatial distribution is compressed into a number of its contour lines, and only those sensors whose observations lie within a @math margin of the contour levels report to the information fusion center (IFC). The proposed algorithm progressively finds the model parameters over iterations, using extrapolation in curve fitting and a stochastic gradient method for spatial monitoring. The IFC tracks the signal variations over time using these parameters. The monitoring performance and the cost of the proposed algorithm are discussed in this letter.
Spatiotemporal modeling using machine learning approaches has been reported in several studies, including @cite_9 @cite_5 @cite_19 . Most of these approaches employ neural networks, genetic algorithms, stochastic gradient descent, and similar techniques.
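As a toy illustration of the stochastic-gradient flavor of such approaches (and of the parameter tracking mentioned in the abstract above), the following sketch fits the center of an assumed Gaussian-bump field from a stream of sensor reports. The field model and all constants are illustrative assumptions, not the cited methods.

```python
# Toy stochastic-gradient sketch: a fusion center refines the parameters of a
# parametric field model from streamed (position, reading) sensor reports.
import numpy as np

rng = np.random.default_rng(1)
true_center = np.array([0.6, 0.4])     # unknown ground truth

def field_value(p, c):
    """Assumed Gaussian-bump field model, parameterized by its center c."""
    return np.exp(-8 * ((p - c) ** 2).sum())

center = np.array([0.5, 0.5])          # initial parameter guess
lr = 0.05
for _ in range(2000):
    p = rng.uniform(0.0, 1.0, 2)       # a reporting sensor's position
    y = field_value(p, true_center)    # its (noise-free) reading
    pred = field_value(p, center)
    # Gradient of 0.5 * (pred - y)^2 with respect to the center estimate.
    grad = (pred - y) * pred * 16 * (p - center)
    center = center - lr * grad
print(np.round(center, 3))             # should approach [0.6, 0.4]
```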
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_9" ], "mid": [ "1522734439", "2952633803", "2793925626", "2610668384" ], "abstract": [ "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.", "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.", "Emerging technologies and applications including Internet of Things (IoT), social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent based approaches. We analyze the convergence rate of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best trade-off between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimentation results show that our proposed approach performs near to the optimum with various machine learning models and different data distributions.", "In this thesis, a recently proposed bilinear model for predicting spatiotemporal data has been implemented and extended. 
The model was trained in an unsupervised manner and uses spatiotemporal synchrony to encode transformations between inputs of a sequence up to a time t, in order to predict the next input at t + 1. A convolutional version of the model was developed in order to reduce the number of parameters and improve the predictive capabilities. The original and the convolutional models were tested and compared on a dataset containing videos of bouncing balls and both versions are able to predict the motion of the balls. The developed convolutional version halved the 4-step prediction loss while reducing the number of parameters by a factor of 159 compared to the original model. Some important differences between the models are discussed in the thesis and suggestions for further improvements of the convolutional model are identified and presented." ] }
1908.07489
2969242814
Online platforms, such as Airbnb, Booking.com, Amazon, Uber and Lyft, can control and optimize many aspects of product search to improve the efficiency of marketplaces. Here we focus on a common model, called the discriminatory control model, where the platform chooses to display a subset of sellers who sell products at prices determined by the market, and a buyer is interested in buying a single product from one of the sellers. Under the commonly used model for single product selection by a buyer, called the multinomial logit model, and the Bertrand game model for competition among sellers, we show the following result: to maximize social welfare, the optimal strategy for the platform is to display all products; however, to maximize revenue, the optimal strategy is to only display a subset of the products whose qualities are above a certain threshold. We extend our results to the Cournot competition model, and show that the optimal search segmentation mechanisms for both social welfare maximization and revenue maximization also have simple threshold structures. The threshold in each case depends on the qualities of all products, the platform's objective and the sellers' competition model, and can be computed in linear time in the number of products.
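To illustrate the multinomial logit (MNL) choice model and the display-set effect described above, here is a small worked example. Prices are fixed exogenously purely for simplicity (in the paper they arise from Bertrand or Cournot competition), and all numbers are illustrative assumptions; the point is only that restricting the display set can strictly raise platform revenue under MNL.

```python
# Brute-force search for the revenue-maximizing display set under MNL choice
# with an outside (no-purchase) option. Qualities and prices are assumptions.
from itertools import combinations
import math

quality = [1.0, 1.0, 1.0]
price   = [2.0, 1.0, 0.2]

def revenue(display):
    # P(buy i | display) = exp(q_i - p_i) / (1 + sum_j exp(q_j - p_j)),
    # where the "1" in the denominator is the no-purchase option.
    w = {i: math.exp(quality[i] - price[i]) for i in display}
    denom = 1.0 + sum(w.values())
    return sum(price[i] * w[i] / denom for i in display)

subsets = (s for r in range(1, 4) for s in combinations(range(3), r))
best = max(subsets, key=revenue)
print(best, round(revenue(best), 3))   # (0, 1) 0.733 -- product 2 is hidden
```

Displaying the cheap product 2 would grow the purchase probability but cannibalize the higher-priced sellers, so the optimal display set excludes it, mirroring the threshold structure the abstract describes.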
Bertrand competition, proposed by Joseph Bertrand in 1883, and Cournot competition, introduced in 1838 by Antoine Augustin Cournot, are fundamental economic models that represent sellers competing in a single market, and they have been studied comprehensively in economics. Motivated by the fact that many sellers compete in more than one market in the modern dynamic and diverse economy, a recent and growing literature has studied Cournot competition in network environments @cite_30 @cite_2 @cite_20 @cite_21 . The works @cite_30 @cite_2 focused on characterizing and computing Nash equilibria, and investigated the impact of changes in the (bipartite) network structure on sellers' profits and buyers' surplus. @cite_20 and @cite_21 analyzed the efficiency loss of the networked Cournot competition game via the price of anarchy metric. While all these previous works focused on the objective of social welfare maximization in networked Cournot competition, we consider both social welfare and revenue maximization, in networked Bertrand competition as well as networked Cournot competition. We further provide efficient segmenting mechanisms to optimize the social welfare or revenue under the Nash equilibrium.
{ "cite_N": [ "@cite_30", "@cite_21", "@cite_20", "@cite_2" ], "mid": [ "1555547018", "1501018584", "2072113937", "2952244728" ], "abstract": [ "Cournot competition, introduced in 1838 by Antoine Augustin Cournot, is a fundamental economic model that represents firms competing in a single market of a homogeneous good. Each firm tries to maximize its utility—naturally a function of the production cost as well as market price of the product—by deciding on the amount of production. This problem has been studied comprehensively in Economics and Game Theory; however, in today’s dynamic and diverse economy, many firms often compete in more than one market simultaneously, i.e., each market might be shared among a subset of these firms. In this situation, a bipartite graph models the access restriction where firms are on one side, markets are on the other side, and edges demonstrate whether a firm has access to a market or not. We call this game Network Cournot Competition (NCC). Computation of equilibrium, taking into account a network of markets and firms and the different forms of cost and price functions, makes challenging and interesting new problems.", "We present a two-stage model of competing ad auctions. Search engines attract users via Cournot-style competition. Meanwhile, each advertiser must pay a participation cost to use each ad platform, and advertiser entry strategies are derived using symmetric Bayes-Nash equilibrium that lead to the VCG outcome of the ad auctions. Consistent with our model of participation costs, we find empirical evidence that multi-homing advertisers are larger than single-homing advertisers. We then link our model to search engine market conditions: We derive comparative statics on consumer choice parameters, presenting relationships between market share, quality, and user welfare. We also analyze the prospect of joining auctions to mitigate participation costs, and we characterize when such joins do and do not increase welfare.", "In the traffic assignment problem, first proposed by Wardrop in 1952, commuters select the shortest available path to travel from their origins to their destinations. We study a generalization of this problem in which competitors, who may control a nonnegligible fraction of the total flow, ship goods across a network. This type of games, usually referred to as atomic games, readily applies to situations in which the competing freight companies have market power. Other applications include intelligent transportation systems, competition among telecommunication network service providers, and scheduling with flexible machines. Our goal is to determine to what extent these systems can benefit from some form of coordination or regulation. We measure the quality of the outcome of the game without centralized control by computing the worst-case inefficiency of Nash equilibria. The main conclusion is that although self-interested competitors will not achieve a fully efficient solution from the system's point of view, the loss is not too severe. We show how to compute several bounds for the worst-case inefficiency that depend on the characteristics of cost functions and on the market structure in the game. In addition, building upon the work of Catoni and Pallotino, we show examples in which market aggregation (or collusion) adversely impacts the aggregated competitors, even though their market power increases. For example, Nash equilibria of atomic network games may be less efficient than the corresponding Wardrop equilibria. 
When competitors are completely symmetric, we provide a characterization of the Nash equilibrium using a potential function, and prove that this counterintuitive phenomenon does not arise. Finally, we study a pricing mechanism that elicits more coordination from the players by reducing the worst-case inefficiency of Nash equilibria.", "We consider prior-free auctions for revenue and welfare maximization when agents have a common budget. The abstract environments we consider are ones where there is a downward-closed and symmetric feasibility constraint on the probabilities of service of the agents. These environments include position auctions where slots with decreasing click-through rates are auctioned to advertisers. We generalize and characterize the envy-free benchmark from Hartline and Yan (2011) to settings with budgets and characterize the optimal envy-free outcomes for both welfare and revenue. We give prior-free mechanisms that approximate these benchmarks. A building block in our mechanism is a clinching auction for position auction environments. This auction is a generalization of the multi-unit clinching auction of (2008) and a special case of the polyhedral clinching auction of (2012). For welfare maximization, we show that this clinching auction is a good approximation to the envy-free optimal welfare for position auction environments. For profit maximization, we generalize the random sampling profit extraction auction from (2002) for digital goods to give a 10.0-approximation to the envy-free optimal revenue in symmetric, downward-closed environments. The profit maximization question is of interest even without budgets and our mechanism is a 7.5-approximation, improving on the 30.4 bound of Ha and Hartline (2012)." ] }
1908.07567
2969713569
The emerging parallel chain protocols represent a breakthrough in addressing the scalability of blockchain. By composing multiple parallel chain instances, the whole system's throughput can approach the network capacity. How to coordinate different chains' blocks and construct them into a global ordering is critical to the performance of a parallel chain protocol. However, existing solutions order blocks using either a global synchronization clock, which retains the single-chain bottleneck, or pre-defined ordering sequences, which distort the blocks' causality. In addition, prior ordering methods rely on honest participants faithfully following the ordering protocol, but remain silent on denial-of-ordering (DoR) attacks. On the other hand, conflicting transactions included in the global block sequence make Simple Payment Verification (SPV) difficult. Clients usually need to store a full record of transactions to distinguish the conflicts and tell whether transactions are confirmed. However, the requirement for a full record greatly hinders blockchain applications, especially in mobile scenarios. In this technical report, we propose Eunomia, which leverages logical clocks and fine-grained UTXO sharding to realize a simple, efficient, secure and permissionless parallel chain protocol. By observing the characteristics of parallel chains, we find that the block ordering issue in parallel chains has many similarities with event ordering in distributed systems. Eunomia thus adopts a "virtual" logical clock, which is optimized to have minimum protocol overhead and runs in a distributed way. In addition, Eunomia combines the mining incentive with block ordering, providing incentive compatibility against DoR attacks. What's more, the fine-grained UTXO sharding handles conflicting transactions in parallel chains well and is shown to be SPV-friendly.
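To illustrate the logical-clock intuition the abstract appeals to, here is a toy Lamport-style ordering of blocks from two parallel chains. This is only a sketch of the general idea, not Eunomia's actual clock: a real protocol must also handle incentives, adversarial timestamps, and secure tie-breaking.

```python
# Toy Lamport-style logical clock for globally ordering blocks that are mined
# concurrently on parallel chains. All structures here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Block:
    chain: int
    height: int
    refs: list = field(default_factory=list)  # referenced blocks on other chains
    clock: int = 0

def assign_clock(block, parent=None):
    # Lamport rule: one past the max clock among the parent and referenced blocks.
    seen = ([parent] if parent else []) + block.refs
    block.clock = 1 + max((b.clock for b in seen), default=0)

# Two parallel chains; chain 1's second block references chain 0's first block.
a0 = Block(0, 0); assign_clock(a0)
b0 = Block(1, 0); assign_clock(b0)
a1 = Block(0, 1); assign_clock(a1, parent=a0)
b1 = Block(1, 1, refs=[a0]); assign_clock(b1, parent=b0)

# Global ordering: sort by logical clock, breaking ties by chain id.
order = sorted([a0, b0, a1, b1], key=lambda b: (b.clock, b.chain))
print([(b.chain, b.height, b.clock) for b in order])
# [(0, 0, 1), (1, 0, 1), (0, 1, 2), (1, 1, 2)]
```

The clock respects causality (a referenced block always orders before its referencer) without any global synchronization, which is the property the parallel-chain ordering problem needs.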
In @cite_39 , Jiaping Wang and Hao Wang propose a protocol called Monoxide, which composes multiple independent single-chain consensus systems, called zones. They also propose eventual atomicity to ensure transaction atomicity across zones and Chu-ko-nu mining to ensure effective mining power in each zone. Monoxide is shown to provide @math throughput and @math capacity over Bitcoin and Ethereum. Besides this work, there are several committee-based sharding protocols @cite_29 @cite_52 @cite_41 . Each shard is assigned a subset of the nodes, which run classical Byzantine agreement (BA) to reach consensus. However, these protocols can only tolerate up to @math adversaries. Moreover, all sharding-based protocols incur additional overhead and latency for cross-shard transactions.
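The tolerance bound mentioned above is usually paired with a committee-size calculation: if a fraction f of all nodes is adversarial and committee members are sampled uniformly, the probability that a single committee exceeds the BA fault threshold shrinks rapidly with committee size. A back-of-the-envelope sketch, using a binomial approximation (sampling without replacement would be hypergeometric) and illustrative parameters:

```python
# Probability that a randomly sampled committee of size m breaks the 1/3
# Byzantine-agreement bound, given an overall adversarial fraction f.
from math import ceil, comb

def p_unsafe(f, m):
    """P(committee of size m has >= m/3 adversaries), binomial model."""
    t = ceil(m / 3)  # smallest adversary count that violates the BA bound
    return sum(comb(m, k) * f**k * (1 - f)**(m - k) for k in range(t, m + 1))

for m in (50, 100, 200, 400):
    print(m, f"{p_unsafe(0.25, m):.1e}")  # failure probability drops with m
```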
{ "cite_N": [ "@cite_41", "@cite_29", "@cite_52", "@cite_39" ], "mid": [ "2897676665", "2902905458", "2046270839", "2591036807" ], "abstract": [ "A major approach to overcoming the performance and scalability limitations of current blockchain protocols is to use sharding which is to split the overheads of processing transactions among multiple, smaller groups of nodes. These groups work in parallel to maximize performance while requiring significantly smaller communication, computation, and storage per node, allowing the system to scale to large networks. However, existing sharding-based blockchain protocols still require a linear amount of communication (in the number of participants) per transaction, and hence, attain only partially the potential benefits of sharding. We show that this introduces a major bottleneck to the throughput and latency of these protocols. Aside from the limited scalability, these protocols achieve weak security guarantees due to either a small fault resiliency (e.g., 1 8 and 1 4) or high failure probability, or they rely on strong assumptions (e.g., trusted setup) that limit their applicability to mainstream payment systems. We propose RapidChain, the first sharding-based public blockchain protocol that is resilient to Byzantine faults from up to a 1 3 fraction of its participants, and achieves complete sharding of the communication, computation, and storage overhead of processing transactions without assuming any trusted setup. RapidChain employs an optimal intra-committee consensus algorithm that can achieve very high throughputs via block pipelining, a novel gossiping protocol for large blocks, and a provably-secure reconfiguration mechanism to ensure robustness. Using an efficient cross-shard transaction verification technique, our protocol avoids gossiping transactions to the entire network. Our empirical evaluations suggest that RapidChain can process (and confirm) more than 7,300 tx sec with an expected confirmation latency of roughly 8.7 seconds in a network of 4,000 nodes with an overwhelming time-to-failure of more than 4,500 years.", "This paper introduces a new leaderless Byzantine consensus called the Democratic Byzantine Fault Tolerance (DBFT) for blockchains. While most blockchain consensus protocols rely on a correct leader or coordinator to terminate, our algorithm can terminate even when its coordinator is faulty. The key idea is to allow processes to complete asynchronous rounds as soon as they receive a threshold of messages, instead of having to wait for a message from a coordinator that may be slow. The resulting decentralization is particularly appealing for blockchains for two reasons: (i) each node plays a similar role in the execution of the consensus, hence making the decision inherently “democratic” (ii) decentralization avoids bottlenecks by balancing the load, making the solution scalable. DBFT is deterministic, assumes partial synchrony, is resilience optimal, time optimal and does not need signatures. We first present a simple safe binary Byzantine consensus algorithm, modify it to ensure termination, and finally present an optimized reduction from multivalue consensus to binary consensus whose fast path terminates in 4 message delays.", "We are interested in the design of automated procedures for analyzing the (in)security of cryptographic protocols in the Dolev-Yao model for a bounded number of sessions when we take into account some algebraic properties satisfied by the operators involved in the protocol. 
This leads to a more realistic model in comparison to what we get under the perfect cryptography assumption, but it implies that protocol analysis deals with terms modulo some equational theory instead of terms in a free algebra. The main goal of this paper is to setup a general approach that works for a whole class of monoidal theories which contains many of the specific cases that have been considered so far in an ad-hoc way (e.g. exclusive or, Abelian groups, exclusive or in combination with the homomorphism axiom). We follow a classical schema for cryptographic protocol analysis which proves first a locality result and then reduces the insecurity problem to a symbolic constraint solving problem. This approach strongly relies on the correspondence between a monoidal theory E and a semiring S\"E which we use to deal with the symbolic constraints. We show that the well-defined symbolic constraints that are generated by reasonable protocols can be solved provided that unification in the monoidal theory satisfies some additional properties. The resolution process boils down to solving particular quadratic Diophantine equations that are reduced to linear Diophantine equations, thanks to linear algebra results and the well-definedness of the problem. Examples of theories that do not satisfy our additional properties appear to be undecidable, which suggests that our characterization is reasonably tight.", "We consider distributed plurality consensus in a complete graph of size @math with @math initial opinions. We design an efficient and simple protocol in the asynchronous communication model that ensures that all nodes eventually agree on the initially most frequent opinion. In this model, each node is equipped with a random Poisson clock with parameter @math . Whenever a node's clock ticks, it samples some neighbors, uniformly at random and with replacement, and adjusts its opinion according to the sample. A prominent example is the so-called two-choices algorithm in the synchronous model, where in each round, every node chooses two neighbors uniformly at random, and if the two sampled opinions coincide, then that opinion is adopted. This protocol is very efficient and well-studied when @math . If @math for some small @math , we show that it converges to the initial plurality opinion within @math rounds, w.h.p., as long as the initial difference between the largest and second largest opinion is @math . On the other side, we show that there are cases in which @math rounds are needed, w.h.p. One can beat this lower bound in the synchronous model by combining the two-choices protocol with randomized broadcasting. Our main contribution is a non-trivial adaptation of this approach to the asynchronous model. If the support of the most frequent opinion is at least @math times that of the second-most frequent one and @math , then our protocol achieves the best possible run time of @math , w.h.p. We relax full synchronicity by allowing @math nodes to be poorly synchronized, and the well synchronized nodes are only required to be within a certain time difference from one another. We enforce this synchronicity by introducing a novel gadget into the protocol." ] }
1908.07625
2969552498
Action recognition has seen a dramatic performance improvement in the last few years. Most of the current state-of-the-art literature either aims at improving performance through changes to the backbone CNN network, or explores different trade-offs between computational efficiency and performance, again by altering the backbone network. However, almost all of these works maintain the same last layers of the network, which simply consist of a global average pooling followed by a fully connected layer. In this work we aim to improve the representation capacity of the network, but rather than altering the backbone, we focus on improving the last layers of the network, where changes have a low impact in terms of computational cost. In particular, we show that current architectures have poor sensitivity to finer details, and we exploit recent advances in the fine-grained recognition literature to improve our model in this respect. With the proposed approach, we obtain state-of-the-art performance on Kinetics-400 and Something-Something-V1, the two major large-scale action recognition benchmarks.
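For reference, the "global average pooling followed by a fully connected layer" head that the abstract refers to can be rendered in a few lines, assuming PyTorch; the channel count, class count, and feature-map shape below are illustrative assumptions.

```python
# The standard classification head shared by most action-recognition
# backbones: global average pooling over space-time, then one linear layer.
import torch
import torch.nn as nn

class StandardHead(nn.Module):
    def __init__(self, channels: int = 2048, num_classes: int = 400):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)   # collapse T x H x W to 1x1x1
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, x):                     # x: (N, C, T, H, W)
        x = self.pool(x).flatten(1)           # (N, C)
        return self.fc(x)                     # (N, num_classes)

feats = torch.randn(2, 2048, 4, 7, 7)        # fake backbone output for 2 clips
print(StandardHead()(feats).shape)           # torch.Size([2, 400])
```

Because the pooling averages every spatio-temporal location into a single vector, fine local evidence is diluted before classification, which is exactly the insensitivity to finer details the abstract argues against.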
Action recognition in the deep learning era has been successfully tackled with 2D @cite_12 @cite_1 and 3D CNNs @cite_31 @cite_29 @cite_0 @cite_28 @cite_40 @cite_15 @cite_30 . Most existing works focus on modeling motion and temporal structures. In @cite_12 , an optical-flow CNN is introduced to model short-term motion patterns. TSN @cite_7 models long-range temporal structures using sparse segment sampling over the whole video during training. 3D CNN based models @cite_31 @cite_0 @cite_28 @cite_15 tackle temporal modeling through the added convolution dimension on the temporal axis, in the hope that the models will learn hierarchical motion patterns as they do in the image space. Several recent works decouple the spatial and temporal convolutions in 3D CNNs to achieve more explicit temporal modeling @cite_29 @cite_14 @cite_30 . In @cite_16 @cite_17 , temporal modeling is further improved by tracking feature points or body joints over time.
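As an example of one of these components, TSN-style sparse segment sampling can be sketched in a few lines. The snippet below samples one frame index per segment and is a simplified illustration under the assumption that the video has at least as many frames as segments (TSN actually samples short snippets rather than single frames for some modalities).

```python
# TSN-style sparse segment sampling: split the video into k equal segments
# and draw one index from each, so the samples span the whole video.
import random

def sample_segments(num_frames: int, k: int, train: bool = True) -> list:
    bounds = [num_frames * i // k for i in range(k + 1)]
    if train:  # random position within each segment
        return [random.randrange(lo, hi) for lo, hi in zip(bounds, bounds[1:])]
    # deterministic (center of each segment) at test time
    return [(lo + hi) // 2 for lo, hi in zip(bounds, bounds[1:])]

print(sample_segments(300, 8, train=False))
# [18, 56, 93, 131, 168, 206, 243, 281]
```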
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_7", "@cite_15", "@cite_28", "@cite_29", "@cite_1", "@cite_0", "@cite_40", "@cite_31", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "2963315828", "2950971447", "2507009361", "2751445731" ], "abstract": [ "Deep convolutional networks have achieved great success for image recognition. However, for action recognition in videos, their advantage over traditional methods is not so evident. We present a general and flexible video-level framework for learning action models in videos. This method, called temporal segment network (TSN), aims to model long-range temporal structures with a new segment-based sampling and aggregation module. This unique design enables our TSN to efficiently learn action models by using the whole action videos. The learned models could be easily adapted for action recognition in both trimmed and untrimmed videos with simple average pooling and multi-scale temporal window integration, respectively. We also study a series of good practices for the instantiation of TSN framework given limited training samples. Our approach obtains the state-the-of-art performance on four challenging action recognition benchmarks: HMDB51 (71.0 ), UCF101 (94.9 ), THUMOS14 (80.1 ), and ActivityNet v1.2 (89.6 ). Using the proposed RGB difference for motion models, our method can still achieve competitive accuracy on UCF101 (91.0 ) while running at 340 FPS. Furthermore, based on the temporal segment networks, we won the video classification track at the ActivityNet challenge 2016 among 24 teams, which demonstrates the effectiveness of TSN and the proposed good practices.", "Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition. which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-the-of-art performance on the datasets of HMDB51 ( @math ) and UCF101 ( @math ). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices.", "Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition. which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. 
The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-the-of-art performance on the datasets of HMDB51 ( ( 69.4 , )) and UCF101 ( ( 94.2 , )). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices (Models and code at https: github.com yjxiong temporal-segment-networks).", "3-D convolutional neural networks (3-D-convNets) have been very recently proposed for action recognition in videos, and promising results are achieved. However, existing 3-D-convNets has two “artificial” requirements that may reduce the quality of video analysis: 1) It requires a fixed-sized (e.g., 112 @math 112) input video; and 2) most of the 3-D-convNets require a fixed-length input (i.e., video shots with fixed number of frames). To tackle these issues, we propose an end-to-end pipeline named Two-stream 3-D-convNet Fusion , which can recognize human actions in videos of arbitrary size and length using multiple features. Specifically, we decompose a video into spatial and temporal shots. By taking a sequence of shots as input, each stream is implemented using a spatial temporal pyramid pooling (STPP) convNet with a long short-term memory (LSTM) or CNN-E model, softmax scores of which are combined by a late fusion. We devise the STPP convNet to extract equal-dimensional descriptions for each variable-size shot, and we adopt the LSTM CNN-E model to learn a global description for the input video using these time-varying descriptions. With these advantages, our method should improve all 3-D CNN-based video analysis methods. We empirically evaluate our method for action recognition in videos and the experimental results show that our method outperforms the state-of-the-art methods (both 2-D and 3-D based) on three standard benchmark datasets (UCF101, HMDB51 and ACT datasets)." ] }
1908.07625
2969552498
Action recognition has seen a dramatic performance improvement in the last few years. Most of the current state-of-the-art literature either aims at improving performance through changes to the backbone CNN network, or explores different trade-offs between computational efficiency and performance, again by altering the backbone network. However, almost all of these works maintain the same last layers of the network, which simply consist of a global average pooling followed by a fully connected layer. In this work we aim to improve the representation capacity of the network, but rather than altering the backbone, we focus on improving the last layers of the network, where changes have a low impact in terms of computational cost. In particular, we show that current architectures have poor sensitivity to finer details, and we exploit recent advances in the fine-grained recognition literature to improve our model in this respect. With the proposed approach, we obtain state-of-the-art performance on Kinetics-400 and Something-Something-V1, the two major large-scale action recognition benchmarks.
Most of these methods treat action recognition as a video classification problem. These works tend to focus on how motion is captured by the networks and largely ignore what makes the actions unique. In this work, we provide insights specific to the nature of the action recognition problem itself, showing how it requires an increased sensitivity to finer details. Different from the methods above, our work is explicitly designed for fine-grained action classification. In particular, the proposed approach is inspired by recent advances in the fine-grained recognition literature, such as @cite_33 . We hope that this work will help draw the community's attention to understanding generic action classes as a fine-grained recognition problem.
{ "cite_N": [ "@cite_33" ], "mid": [ "1932571874", "2146048167", "2486913577", "1981781955" ], "abstract": [ "Whereas the action recognition problem has become a hot topic within computer vision, the detection of fights or in general aggressive behavior has been comparatively less studied. Such capability may be extremely useful in some video surveillance scenarios like in prisons, psychiatric centers or even embedded in camera phones. Recent work has considered the well-known Bag-of-Words framework often used in generic action recognition for the specific problem of fight detection. Under this framework, spatio-temporal features are extracted from the video sequences and used for classification. Despite encouraging results in which near 90 accuracy rates were achieved for this specific task, the computational cost of extracting such features is prohibitive for practical applications, particularly in surveillance and media rating systems. The task of violence detection may have, however, specific features that can be leveraged. Inspired by psychology results that suggest that kinematic features alone are discriminant for specific actions, this work proposes a novel method which uses extreme acceleration patterns as the main feature. These extreme accelerations are efficiently estimated by applying the Radon transform to the power spectrum of consecutive frames. Experiments show that accuracy improvements of up to 12 are achieved with respect to state-of-the-art generic action recognition methods. Most importantly, the proposed method is at least 15 times faster.", "Action recognition has often been posed as a classification problem, which assumes that a video sequence only have one action class label and different actions are independent. However, a single human body can perform multiple concurrent actions at the same time, and different actions interact with each other. This paper proposes a concurrent action detection model where the action detection is formulated as a structural prediction problem. In this model, an interval in a video sequence can be described by multiple action labels. An detected action interval is determined both by the unary local detector and the relations with other actions. We use a wavelet feature to represent the action sequence, and design a composite temporal logic descriptor to describe the action relations. The model parameters are trained by structural SVM learning. Given a long video sequence, a sequential decision window search algorithm is designed to detect the actions. Experiments on our new collected concurrent action dataset demonstrate the strength of our method.", "We consider the problem of detecting and localizing a human action from continuous action video from depth cameras. We believe that this problem is more challenging than the problem of traditional action recognition as we do not have the information about the starting and ending frames of an action class. Another challenge which makes the problem difficult, is the latency in detection of actions. In this paper, we introduce a greedy approach to detect the action class, invariant of their temporal scale in the testing sequences using class templates and basic skeleton based feature representation from the depth stream data generated using Microsoft Kinect. We evaluate the proposed method on the standard G3D and UTKinect-Action datasets consisting of five and ten actions, respectively. 
Our results demonstrate that the proposed approach performs well for action detection and recognition under different temporal scales, and is able to outperform the state of the art methods at low latency.", "Action recognition on large categories of unconstrained videos taken from the web is a very challenging problem compared to datasets like KTH (6 actions), IXMAS (13 actions), and Weizmann (10 actions). Challenges like camera motion, different viewpoints, large interclass variations, cluttered background, occlusions, bad illumination conditions, and poor quality of web videos cause the majority of the state-of-the-art action recognition approaches to fail. Also, an increased number of categories and the inclusion of actions with high confusion add to the challenges. In this paper, we propose using the scene context information obtained from moving and stationary pixels in the key frames, in conjunction with motion features, to solve the action recognition problem on a large (50 actions) dataset with videos from the web. We perform a combination of early and late fusion on multiple features to handle the very large number of categories. We demonstrate that scene context is a very important feature to perform action recognition on very large datasets. The proposed method does not require any kind of video stabilization, person detection, or tracking and pruning of features. Our approach gives good performance on a large number of action categories; it has been tested on the UCF50 dataset with 50 action categories, which is an extension of the UCF YouTube Action (UCF11) dataset containing 11 action categories. We also tested our approach on the KTH and HMDB51 datasets for comparison." ] }
1908.07625
2969552498
Action recognition has seen a dramatic performance improvement in the last few years. Most of the current state-of-the-art literature either aims at improving performance through changes to the backbone CNN network, or explores different trade-offs between computational efficiency and performance, again by altering the backbone network. However, almost all of these works maintain the same last layers of the network, which simply consist of a global average pooling followed by a fully connected layer. In this work we aim to improve the representation capacity of the network, but rather than altering the backbone, we focus on improving the last layers of the network, where changes have a low impact in terms of computational cost. In particular, we show that current architectures have poor sensitivity to finer details, and we exploit recent advances in the fine-grained recognition literature to improve our model in this respect. With the proposed approach, we obtain state-of-the-art performance on Kinetics-400 and Something-Something-V1, the two major large-scale action recognition benchmarks.
Understanding human activities as a fine-grained recognition problem has been explored for some domain-specific tasks @cite_6 @cite_37 . For example, works have been proposed for hand-gesture recognition @cite_18 @cite_38 @cite_32 , daily-life activity recognition @cite_39 and sports understanding @cite_26 @cite_2 @cite_8 @cite_4 . All these works build ad hoc solutions specific to the action domain they address. Instead, we present a solution for generic action recognition and show that it, too, can be treated as a fine-grained recognition problem and can benefit from learning fine-grained information.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_18", "@cite_26", "@cite_4", "@cite_8", "@cite_32", "@cite_6", "@cite_39", "@cite_2" ], "mid": [ "2156798932", "2176302750", "2086001523", "2019660985" ], "abstract": [ "Activity recognition has shown impressive progress in recent years. However, the challenges of detecting fine-grained activities and understanding how they are combined into composite activities have been largely overlooked. In this work we approach both tasks and present a dataset which provides detailed annotations to address them. The first challenge is to detect fine-grained activities, which are defined by low inter-class variability and are typically characterized by fine-grained body motions. We explore how human pose and hands can help to approach this challenge by comparing two pose-based and two hand-centric features with state-of-the-art holistic features. To attack the second challenge, recognizing composite activities, we leverage the fact that these activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. We show the benefits of our hand-centric approach for fine-grained activity classification and detection. For composite activity recognition we find that decomposition into attributes allows sharing information across composites and is essential to attack this hard task. Using script data we can recognize novel composites without having training data for them.", "The recognition of human activities is one of the key problems in video understanding. Action recognition is challenging even for specific categories of videos, such as sports, that contain only a small set of actions. Interestingly, sports videos are accompanied by detailed commentaries available online, which could be used to perform action annotation in a weakly-supervised setting. For the specific case of Cricket videos, we address the challenge of temporal segmentation and annotation of ctions with semantic descriptions. Our solution consists of two stages. In the first stage, the video is segmented into \"scenes\", by utilizing the scene category information extracted from text-commentary. The second stage consists of classifying video-shots as well as the phrases in the textual description into various categories. The relevant phrases are then suitably mapped to the video-shots. The novel aspect of this work is the fine temporal scale at which semantic information is assigned to the video. As a result of our approach, we enable retrieval of specific actions that last only a few seconds, from several hours of video. This solution yields a large number of labeled exemplars, with no manual effort, that could be used by machine learning algorithms to learn complex actions.", "This paper describes a methodology for automated recognition of complex human activities. The paper proposes a general framework which reliably recognizes high-level human actions and human-human interactions. Our approach is a description-based approach, which enables a user to encode the structure of a high-level human activity as a formal representation. Recognition of human activities is done by semantically matching constructed representations with actual observations. The methodology uses a context-free grammar (CFG) based representation scheme as a formal syntax for representing composite activities. Our CFG-based representation enables us to define complex human activities based on simpler activities or movements. 
Our system takes advantage of both statistical recognition techniques from computer vision and knowledge representation concepts from traditional artificial intelligence. In the low-level of the system, image sequences are processed to extract poses and gestures. Based on the recognition of gestures, the high-level of the system hierarchically recognizes composite actions and interactions occurring in a sequence of image frames. The concept of hallucinations and a probabilistic semantic-level recognition algorithm is introduced to cope with imperfect lower-layers. As a result, the system recognizes human activities including fighting' and assault', which are high-level activities that previous systems had difficulties. The experimental results show that our system reliably recognizes sequences of complex human activities with a high recognition rate.", "While activity recognition is a current focus of research the challenging problem of fine-grained activity recognition is largely overlooked. We thus propose a novel database of 65 cooking activities, continuously recorded in a realistic setting. Activities are distinguished by fine-grained body motions that have low inter-class variability and high intra-class variability due to diverse subjects and ingredients. We benchmark two approaches on our dataset, one based on articulated pose tracks and the second using holistic video features. While the holistic approach outperforms the pose-based approach, our evaluation suggests that fine-grained activities are more difficult to detect and the body model can help in those cases. Providing high-resolution videos as well as an intermediate pose representation we hope to foster research in fine-grained activity recognition." ] }
1908.07625
2969552498
Action recognition has seen a dramatic performance improvement in the last few years. Most of the current state-of-the-art literature either aims at improving performance through changes to the backbone CNN network, or explores different trade-offs between computational efficiency and performance, again by altering the backbone network. However, almost all of these works maintain the same last layers of the network, which simply consist of a global average pooling followed by a fully connected layer. In this work we aim to improve the representation capacity of the network, but rather than altering the backbone, we focus on improving the last layers of the network, where changes have a low impact in terms of computational cost. In particular, we show that current architectures have poor sensitivity to finer details, and we exploit recent advances in the fine-grained recognition literature to improve our model in this respect. With the proposed approach, we obtain state-of-the-art performance on Kinetics-400 and Something-Something-V1, the two major large-scale action recognition benchmarks.
Different from common object categories such as those in ImageNet @cite_21 , this field cares for objects that look visually very similar and that can only be differentiated by learning their finer details. Examples include distinguishing bird @cite_5 and plant @cite_9 species and recognizing different car models @cite_35 @cite_23 . We refer to Zhao et al. @cite_10 for an interesting and complete survey on this topic. Here we just remark on the importance of learning the visual details of these fine-grained classes. Some works achieve this with various pooling techniques @cite_36 @cite_19 ; some use part-based approaches @cite_3 @cite_11 ; and others use attention mechanisms @cite_20 @cite_41 . More recently, Wang et al. @cite_33 proposed to use a set of @math convolution layers as a discriminative filter bank and use a spatial max pooling to find the location of the fine-grained information. Our method takes inspiration from this method and extends it to the task of video action recognition. We emphasize the importance of finding the fine-grained details in the spatio-temporal domain with high-resolution features.
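To make the discriminative-filter-bank idea concrete, here is a minimal sketch, assuming a PyTorch-style backbone; the channel sizes, class count, and module names are illustrative placeholders, not taken from the cited papers. A bank of 1x1 convolutions acts as class-discriminative filters, and spatial max pooling keeps each filter's strongest response, localizing fine-grained evidence without part annotations. Replacing the 2D convolution with a 3D one (and pooling over T×H×W) is one plausible way to carry the same head into the spatio-temporal domain discussed above.

```python
import torch
import torch.nn as nn

class FilterBankHead(nn.Module):
    """Hypothetical head: 1x1 convolutions as a discriminative filter bank,
    followed by spatial max pooling and a linear classifier."""
    def __init__(self, in_channels=2048, num_filters=512, num_classes=400):
        super().__init__()
        self.filter_bank = nn.Conv2d(in_channels, num_filters, kernel_size=1)
        self.classifier = nn.Linear(num_filters, num_classes)

    def forward(self, feats):                            # feats: (B, C, H, W)
        responses = self.filter_bank(feats)              # (B, F, H, W)
        pooled = responses.flatten(2).max(dim=2).values  # strongest response per filter: (B, F)
        return self.classifier(pooled)                   # (B, num_classes)

scores = FilterBankHead()(torch.randn(2, 2048, 7, 7))    # -> (2, 400)
```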
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_36", "@cite_41", "@cite_9", "@cite_21", "@cite_3", "@cite_19", "@cite_23", "@cite_5", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2773003563", "2964189431", "2737725206", "1980526845" ], "abstract": [ "Recognizing fine-grained categories (e.g., bird species) highly relies on discriminative part localization and part-based fine-grained feature learning. Existing approaches predominantly solve these challenges independently, while neglecting the fact that part localization (e.g., head of a bird) and fine-grained feature learning (e.g., head shape) are mutually correlated. In this paper, we propose a novel part learning approach by a multi-attention convolutional neural network (MA-CNN), where part generation and feature learning can reinforce each other. MA-CNN consists of convolution, channel grouping and part classification sub-networks. The channel grouping network takes as input feature channels from convolutional layers, and generates multiple parts by clustering, weighting and pooling from spatially-correlated channels. The part classification network further classifies an image by each individual part, through which more discriminative fine-grained features can be learned. Two losses are proposed to guide the multi-task learning of channel grouping and part classification, which encourages MA-CNN to generate more discriminative parts from feature channels and learn better fine-grained features from parts in a mutual reinforced way. MA-CNN does not need bounding box part annotation and can be trained end-to-end. We incorporate the learned parts from MA-CNN with part-CNN for recognition, and show the best performances on three challenging published fine-grained datasets, e.g., CUB-Birds, FGVC-Aircraft and Stanford-Cars.", "Recent algorithms in convolutional neural networks (CNN) considerably advance the fine-grained image classification, which aims to differentiate subtle differences among subordinate classes. However, previous studies have rarely focused on learning a fined-grained and structured feature representation that is able to locate similar images at different levels of relevance, e.g., discovering cars from the same make or the same model, both of which require high precision. In this paper, we propose two main contributions to tackle this problem. 1) A multitask learning framework is designed to effectively learn fine-grained feature representations by jointly optimizing both classification and similarity constraints. 2) To model the multi-level relevance, label structures such as hierarchy or shared attributes are seamlessly embedded into the framework by generalizing the triplet loss. Extensive and thorough experiments have been conducted on three finegrained datasets, i.e., the Stanford car, the Car-333, and the food datasets, which contain either hierarchical labels or shared attributes. Our proposed method has achieved very competitive performance, i.e., among state-of-the-art classification accuracy when not using parts. More importantly, it significantly outperforms previous fine-grained feature representations for image retrieval at different levels of relevance.", "Recognizing fine-grained categories (e.g., bird species) is difficult due to the challenges of discriminative region localization and fine-grained feature learning. 
Existing approaches predominantly solve these challenges independently, while neglecting the fact that region detection and fine-grained feature learning are mutually correlated and thus can reinforce each other. In this paper, we propose a novel recurrent attention convolutional neural network (RA-CNN) which recursively learns discriminative region attention and region-based feature representation at multiple scales in a mutual reinforced way. The learning at each scale consists of a classification sub-network and an attention proposal sub-network (APN). The APN starts from full images, and iteratively generates region attention from coarse to fine by taking previous prediction as a reference, while the finer scale network takes as input an amplified attended region from previous scale in a recurrent way. The proposed RA-CNN is optimized by an intra-scale classification loss and an inter-scale ranking loss, to mutually learn accurate region attention and fine-grained representation. RA-CNN does not need bounding box part annotations and can be trained end-to-end. We conduct comprehensive experiments and show that RA-CNN achieves the best performance in three fine-grained tasks, with relative accuracy gains of 3.3%, 3.7%, 3.8%, on CUB Birds, Stanford Dogs and Stanford Cars, respectively.", "As a special topic in computer vision, fine-grained visual categorization (FGVC) has been attracting growing attention these years. Different with traditional image classification tasks in which objects have large inter-class variation, the visual concepts in the fine-grained datasets, such as hundreds of bird species, often have very similar semantics. Due to the large inter-class similarity, it is very difficult to classify the objects without locating really discriminative features, therefore it becomes more important for the algorithm to make full use of the part information in order to train a robust model. In this paper, we propose a powerful flowchart named Hierarchical Part Matching (HPM) to cope with fine-grained classification tasks. We extend the Bag-of-Features (BoF) model by introducing several novel modules to integrate into image representation, including foreground inference and segmentation, Hierarchical Structure Learning (HSL), and Geometric Phrase Pooling (GPP). We verify in experiments that our algorithm achieves the state-of-the-art classification accuracy in the Caltech-UCSD-Birds-200-2011 dataset by making full use of the ground-truth part annotations." ] }
1908.07630
2969858281
Transfer learning enhances learning across tasks, by leveraging previously learned representations -- if they are properly chosen. We describe an efficient method to accurately estimate the appropriateness of a previously trained model for use in a new learning task. We use this measure, which we call "Predict To Learn" ("P2L"), in the two very different domains of images and semantic relations, where it predicts, from a set of "source" models, the one model most likely to produce effective transfer for training a given "target" model. We validate our approach thoroughly, by assembling a collection of candidate source models, then fine-tuning each candidate to perform each of a collection of target tasks, and finally measuring how well transfer has been enhanced. Across 95 tasks within multiple domains (image classification and semantic relations), the P2L approach was able to select the best transfer learning model on average, while the heuristic of choosing the model trained with the largest data set selected the best model in only 55 cases. These results suggest that P2L captures important information in common between source and target tasks, and that this shared informational structure contributes to successful transfer learning more than simple data size.
The transfer learning literature explores a number of different topics and strategies such as few-shot learning @cite_19 @cite_29 , domain adaptation @cite_6 , weight synthesis @cite_2 , and multi-task learning @cite_33 @cite_24 @cite_8 . Some works propose novel combinations of these approaches, yielding new training architectures and optimization objectives to improve transfer performance under conditions of domain transfer with limited or incomplete annotations @cite_10 .
{ "cite_N": [ "@cite_33", "@cite_8", "@cite_29", "@cite_6", "@cite_24", "@cite_19", "@cite_2", "@cite_10" ], "mid": [ "1919803322", "2557626841", "2745420784", "2835011589" ], "abstract": [ "Transfer learning has benefited many real-world applications where labeled data are abundant in source domains but scarce in the target domain. As there are usually multiple relevant domains where knowledge can be transferred, multiple source transfer learning MSTL has recently attracted much attention. However, we are facing two major challenges when applying MSTL. First, without knowledge about the difference between source and target domains, negative transfer occurs when knowledge is transferred from highly irrelevant sources. Second, existence of imbalanced distributions in classes, where examples in one class dominate, can lead to improper judgement on the source domains' relevance to the target task. Since existing MSTL methods are usually designed to transfer from relevant sources with balanced distributions, they will fail in applications where these two challenges persist. In this article, we propose a novel two-phase framework to effectively transfer knowledge from multiple sources even when there exists irrelevant sources and imbalanced class distributions. First, an effective supervised local weight scheme is proposed to assign a proper weight to each source domain's classifier based on its ability of predicting accurately on each local region of the target domain. The second phase then learns a classifier for the target domain by solving an optimization problem which concerns both training error minimization and consistency with weighted predictions gained from source domains. A theoretical analysis shows that as the number of source domains increases, the probability that the proposed approach has an error greater than a bound is becoming exponentially small. We further extend the proposed approach to an online processing scenario to conduct transfer learning on continuously arriving data. Extensive experiments on disease prediction, spam filtering and intrusion detection datasets demonstrate that: i the proposed two-phase approach outperforms existing MSTL approaches due to its ability of tackling negative transfer and imbalanced distribution challenges, and ii the proposed online approach achieves comparable performance to the offline scheme.", "In this paper, we propose an approach to the domain adaptation, dubbed Second-or Higher-order Transfer of Knowledge (So-HoT), based on the mixture of alignments of second-or higher-order scatter statistics between the source and target domains. The human ability to learn from few labeled samples is a recurring motivation in the literature for domain adaptation. Towards this end, we investigate the supervised target scenario for which few labeled target training samples per category exist. Specifically, we utilize two CNN streams: the source and target networks fused at the classifier level. Features from the fully connected layers fc7 of each network are used to compute second-or even higher-order scatter tensors, one per network stream per class. As the source and target distributions are somewhat different despite being related, we align the scatters of the two network streams of the same class (within-class scatters) to a desired degree with our bespoke loss while maintaining good separation of the between-class scatters. We train the entire network in end-to-end fashion. 
We provide evaluations on the standard Office benchmark (visual domains) and RGB-D combined with Caltech256 (depth-to-rgb transfer). We attain state-of-the-art results.", "Transfer learning borrows knowledge from a source domain to facilitate learning in a target domain. Two primary issues to be addressed in transfer learning are what and how to transfer. For a pair of domains, adopting different transfer learning algorithms results in different knowledge transferred between them. To discover the optimal transfer learning algorithm that maximally improves the learning performance in the target domain, researchers have to exhaustively explore all existing transfer learning algorithms, which is computationally intractable. As a trade-off, a sub-optimal algorithm is selected, which requires considerable expertise in an ad-hoc way. Meanwhile, it is widely accepted in educational psychology that human beings improve transfer learning skills of deciding what to transfer through meta-cognitive reflection on inductive transfer learning practices. Motivated by this, we propose a novel transfer learning framework known as Learning to Transfer (L2T) to automatically determine what and how to transfer are the best by leveraging previous transfer learning experiences. We establish the L2T framework in two stages: 1) we first learn a reflection function encrypting transfer learning skills from experiences; and 2) we infer what and how to transfer for a newly arrived pair of domains by optimizing the reflection function. Extensive experiments demonstrate the L2T's superiority over several state-of-the-art transfer learning algorithms and its effectiveness on discovering more transferable knowledge.", "Multi-source transfer learning has been proven effective when within-target labeled data is scarce. Previous work focuses primarily on exploiting domain similarities and assumes that source domains are richly or at least comparably labeled. While this strong assumption is never true in practice, this paper relaxes it and addresses challenges related to sources with diverse labeling volume and diverse reliability. The first challenge is combining domain similarity and source reliability by proposing a new transfer learning method that utilizes both source-target similarities and inter-source relationships. The second challenge involves pool-based active learning where the oracle is only available in source domains, resulting in an integrated active transfer learning framework that incorporates distribution matching and uncertainty sampling. Extensive experiments on synthetic and two real-world datasets clearly demonstrate the superiority of our proposed methods over several baselines including state-of-the-art transfer learning methods. Code related to this paper is available at: https: github.com iedwardwangi ReliableMSTL." ] }
1908.07630
2969858281
Transfer learning enhances learning across tasks, by leveraging previously learned representations -- if they are properly chosen. We describe an efficient method to accurately estimate the appropriateness of a previously trained model for use in a new learning task. We use this measure, which we call "Predict To Learn" ("P2L"), in the two very different domains of images and semantic relations, where it predicts, from a set of "source" models, the one model most likely to produce effective transfer for training a given "target" model. We validate our approach thoroughly, by assembling a collection of candidate source models, then fine-tuning each candidate to perform each of a collection of target tasks, and finally measuring how well transfer has been enhanced. Across 95 tasks within multiple domains (image classification and semantic relations), the P2L approach was able to select the best transfer learning model on average, while the heuristic of choosing the model trained with the largest data set selected the best model in only 55 cases. These results suggest that P2L captures important information in common between source and target tasks, and that this shared informational structure contributes to successful transfer learning more than simple data size.
Several approaches have been tried to transfer robust representations based on large numbers of examples to new tasks. These transfer learning approaches share a common intuition @cite_28 : that networks which have learned compact representations of a "source" task can reuse these representations to achieve higher performance on a related "target" task. Different approaches use different techniques to transfer previous representations: data-based approaches attempt to identify appropriate data used in the source task to supplement target-task training, parameter-based approaches attempt to leverage source-task weight matrices, and architecture-based approaches involve re-using the architecture or hyper-parameters of the source network @cite_32 @cite_18 . These approaches, often supplemented by related small-data techniques such as bootstrapping, can yield improvements in performance @cite_21 @cite_1 @cite_24 @cite_0 @cite_14 . One approach to transfer learning is to leverage existing deep nets trained on a large dataset, for example VGG16 @cite_0 for images, or PCNN @cite_20 for relation prediction. The trained weights in these networks have captured a representation of the input that can be transferred by fine-tuning the weights or retraining the final dense layer of the network on the new task.
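As a concrete illustration of the last strategy, here is a minimal sketch of retraining only the final dense layer of a pretrained VGG16, assuming PyTorch with a recent torchvision; the target class count and learning rate are arbitrary placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on a large source dataset (ImageNet here).
model = models.vgg16(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False              # freeze the transferred representation

# Replace the final dense layer with a new head for the target task
# (10 classes is an arbitrary placeholder).
model.classifier[6] = nn.Linear(4096, 10)

# Only the new head is optimized; the source representation is reused as-is.
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-3)
```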
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_28", "@cite_21", "@cite_1", "@cite_32", "@cite_24", "@cite_0", "@cite_20" ], "mid": [ "2526782364", "2203224402", "2963749571", "2743157634" ], "abstract": [ "Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.", "Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks. In the common scenario, a ConvNet is trained on a large labeled dataset (source) and the feed-forward units activation of the trained network, at a certain layer of the network, is used as a generic representation of an input image for a task with relatively smaller training set (target). Recent studies have shown this form of representation transfer to be suitable for a wide range of target visual recognition tasks. This paper introduces and investigates several factors affecting the transferability of such representations. It includes parameters for training of the source ConvNet such as its architecture, distribution of the training data, etc. and also the parameters of feature extraction such as layer of the trained ConvNet, dimensionality reduction, etc. Then, by optimizing these factors, we show that significant improvements can be achieved on various (17) visual recognition tasks. We further show that these visual recognition tasks can be categorically ordered based on their similarity to the source task such that a correlation between the performance of tasks and their similarity to the source task w.r.t. the proposed factors is observed.", "Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc.). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. 
The objects are connected by two types of edges which correspond to two types of invariance: “different instances but a similar viewpoint and category” and “different viewpoints of the same instance”. By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task.", "Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: "different instances but a similar viewpoint and category" and "different viewpoints of the same instance". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task." ] }
1908.07630
2969858281
Transfer learning enhances learning across tasks, by leveraging previously learned representations -- if they are properly chosen. We describe an efficient method to accurately estimate the appropriateness of a previously trained model for use in a new learning task. We use this measure, which we call "Predict To Learn" ("P2L"), in the two very different domains of images and semantic relations, where it predicts, from a set of "source" models, the one model most likely to produce effective transfer for training a given "target" model. We validate our approach thoroughly, by assembling a collection of candidate source models, then fine-tuning each candidate to perform each of a collection of target tasks, and finally measuring how well transfer has been enhanced. Across 95 tasks within multiple domains (image classification and semantic relations), the P2L approach was able to select the best transfer learning model on average, while the heuristic of choosing the model trained with the largest data set selected the best model in only 55 cases. These results suggest that P2L captures important information in common between source and target tasks, and that this shared informational structure contributes to successful transfer learning more than simple data size.
While all these methods seek to improve performance on the target task by transfer from the source task, most assume there is only one source model, usually trained on ImageNet @cite_15 . Additionally, this approach involves a number of meta-learning decisions, although in general each change from the original source architecture tends to decrease resulting classification performance @cite_14 . Meta-learning @cite_5 is another approach for representation transfer. While meta-learning typically deals with training a base model on a variety of different learning tasks, transfer learning is about learning from multiple related learning tasks @cite_17 . The efficiency of transfer learning depends on selecting the right source data, whereas meta-learning models could suffer from 'negative transfer' @cite_18 of knowledge if source and target domains are unrelated. Surprisingly, in image classification, performance gains are commonly observed even in cases where the initialization data appears visually and semantically different from the target dataset (such as ImageNet and medical-imaging datasets).
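The P2L measure itself is not reproduced here; as a loudly hypothetical stand-in, the sketch below scores each candidate source model with a cheap linear probe on a small labeled target split and transfers from the top scorer — the same selection loop that P2L automates with a learned predictor. All names and data are toy placeholders, with random projections standing in for frozen feature extractors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))                   # toy target inputs
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # toy target labels
# Random projections stand in for the frozen feature extractors of
# candidate source models (purely illustrative).
sources = {f"source_{i}": rng.normal(size=(64, 32)) for i in range(3)}

def probe_score(W):
    feats = np.tanh(X @ W)                       # "features" from one frozen source model
    f_tr, f_te, y_tr, y_te = train_test_split(feats, y, random_state=0)
    return LogisticRegression(max_iter=1000).fit(f_tr, y_tr).score(f_te, y_te)

best = max(sources, key=lambda name: probe_score(sources[name]))
print("transfer from:", best)                    # candidate predicted to transfer best
```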
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_5", "@cite_15", "@cite_17" ], "mid": [ "2157032359", "2798381792", "2768591600", "1919803322" ], "abstract": [ "It is expensive to obtain labeled real-world visual data for use in training of supervised algorithms. Therefore, it is valuable to leverage existing databases of labeled data. However, the data in the source databases is often obtained under conditions that differ from those in the new task. Transfer learning provides techniques for transferring learned knowledge from a source domain to a target domain by finding a mapping between them. In this paper, we discuss a method for projecting both source and target data to a generalized subspace where each target sample can be represented by some combination of source samples. By employing a low-rank constraint during this transfer, the structure of source and target domains are preserved. This approach has three benefits. First, good alignment between the domains is ensured through the use of only relevant data in some subspace of the source domain in reconstructing the data in the target domain. Second, the discriminative power of the source domain is naturally passed on to the target domain. Third, noisy information will be filtered out during knowledge transfer. Extensive experiments on synthetic data, and important computer vision problems such as face recognition application and visual domain adaptation for object recognition demonstrate the superiority of the proposed approach over the existing, well-established methods.", "Transferring the knowledge learned from large scale datasets (e.g., ImageNet) via fine-tuning offers an effective solution for domain-specific fine-grained visual categorization (FGVC) tasks (e.g., recognizing bird species or car make & model). In such scenarios, data annotation often calls for specialized domain knowledge and thus is difficult to scale. In this work, we first tackle a problem in large scale FGVC. Our method won first place in iNaturalist 2017 large scale species classification challenge. Central to the success of our approach is a training scheme that uses higher image resolution and deals with the long-tailed distribution of training data. Next, we study transfer learning via fine-tuning from large scale datasets to small scale, domain-specific FGVC datasets. We propose a measure to estimate domain similarity via Earth Mover's Distance and demonstrate that transfer learning benefits from pre-training on a source domain that is similar to the target domain by this measure. Our proposed transfer learning outperforms ImageNet pre-training and obtains state-of-the-art results on multiple commonly used FGVC datasets.", "In human learning, it is common to use multiple sources of information jointly. However, most existing feature learning approaches learn from only a single task. In this paper, we propose a novel multi-task deep network to learn generalizable high-level visual representations. Since multitask learning requires annotations for multiple properties of the same training instance, we look to synthetic images to train our network. To overcome the domain difference between real and synthetic data, we employ an unsupervised feature space domain adaptation method based on adversarial learning. Given an input synthetic RGB image, our network simultaneously predicts its surface normal, depth, and instance contour, while also minimizing the feature space domain differences between real and synthetic data. 
Through extensive experiments, we demonstrate that our network learns more transferable representations compared to single-task baselines. Our learned representation produces state-of-the-art transfer learning results on PASCAL VOC 2007 classification and 2012 detection.", "Transfer learning has benefited many real-world applications where labeled data are abundant in source domains but scarce in the target domain. As there are usually multiple relevant domains where knowledge can be transferred, multiple source transfer learning MSTL has recently attracted much attention. However, we are facing two major challenges when applying MSTL. First, without knowledge about the difference between source and target domains, negative transfer occurs when knowledge is transferred from highly irrelevant sources. Second, existence of imbalanced distributions in classes, where examples in one class dominate, can lead to improper judgement on the source domains' relevance to the target task. Since existing MSTL methods are usually designed to transfer from relevant sources with balanced distributions, they will fail in applications where these two challenges persist. In this article, we propose a novel two-phase framework to effectively transfer knowledge from multiple sources even when there exists irrelevant sources and imbalanced class distributions. First, an effective supervised local weight scheme is proposed to assign a proper weight to each source domain's classifier based on its ability of predicting accurately on each local region of the target domain. The second phase then learns a classifier for the target domain by solving an optimization problem which concerns both training error minimization and consistency with weighted predictions gained from source domains. A theoretical analysis shows that as the number of source domains increases, the probability that the proposed approach has an error greater than a bound is becoming exponentially small. We further extend the proposed approach to an online processing scenario to conduct transfer learning on continuously arriving data. Extensive experiments on disease prediction, spam filtering and intrusion detection datasets demonstrate that: (i) the proposed two-phase approach outperforms existing MSTL approaches due to its ability of tackling negative transfer and imbalanced distribution challenges, and (ii) the proposed online approach achieves comparable performance to the offline scheme." ] }
1908.07630
2969858281
Transfer learning enhances learning across tasks, by leveraging previously learned representations -- if they are properly chosen. We describe an efficient method to accurately estimate the appropriateness of a previously trained model for use in a new learning task. We use this measure, which we call "Predict To Learn" ("P2L"), in the two very different domains of images and semantic relations, where it predicts, from a set of "source" models, the one model most likely to produce effective transfer for training a given "target" model. We validate our approach thoroughly, by assembling a collection of candidate source models, then fine-tuning each candidate to perform each of a collection of target tasks, and finally measuring how well transfer has been enhanced. Across 95 tasks within multiple domains (image classification and semantic relations), the P2L approach was able to select the best transfer learning model on average, while the heuristic of choosing the model trained with the largest data set selected the best model in only 55 cases. These results suggest that P2L captures important information in common between source and target tasks, and that this shared informational structure contributes to successful transfer learning more than simple data size.
Our approach is most similar to that of fine-tuning with co-training @cite_14 . That method begins by using low-level features to identify images within a source dataset that have textures similar to the target dataset, and concludes by using a multi-task objective to fine-tune on the target task using these images. A related approach has been used to enhance performance and reduce training time in document classification @cite_25 and to identify examples to supplement training data @cite_16 @cite_14 . Our goal is to extend this approach to high-level features, and to domains outside computer vision, to construct a more complete map of the feature space of a trained network. In this way our approach has some parallels with "learning to transfer" approaches @cite_34 , which attempt to train a source model optimized for transfer rather than target accuracy.
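A minimal sketch of that low-level selection step, under the assumption that a gradient-magnitude histogram is an acceptable stand-in for the filter-bank descriptors used in the cited work; the images here are random arrays purely for self-containment.

```python
import numpy as np

rng = np.random.default_rng(1)
target_imgs = [rng.random((32, 32)) for _ in range(5)]     # toy grayscale images
source_imgs = [rng.random((32, 32)) for _ in range(100)]

def texture_descriptor(img):
    # Histogram of gradient magnitudes: a crude low-level texture signature.
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=16, range=(0.0, float(mag.max()) + 1e-8),
                           density=True)
    return hist

target_mean = np.mean([texture_descriptor(im) for im in target_imgs], axis=0)
dists = np.array([np.linalg.norm(texture_descriptor(im) - target_mean)
                  for im in source_imgs])
selected = np.argsort(dists)[:20]   # source images whose textures best match the target set
```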
{ "cite_N": [ "@cite_34", "@cite_14", "@cite_25", "@cite_16" ], "mid": [ "2591924527", "2798381792", "2964055354", "1731081199" ], "abstract": [ "Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our deep transfer learning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120). In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2 - 10 using a single model. Codes and models are available at https: github.com ZYYSzj Selective-Joint-Fine-tuning.", "Transferring the knowledge learned from large scale datasets (e.g., ImageNet) via fine-tuning offers an effective solution for domain-specific fine-grained visual categorization (FGVC) tasks (e.g., recognizing bird species or car make & model). In such scenarios, data annotation often calls for specialized domain knowledge and thus is difficult to scale. In this work, we first tackle a problem in large scale FGVC. Our method won first place in iNaturalist 2017 large scale species classification challenge. Central to the success of our approach is a training scheme that uses higher image resolution and deals with the long-tailed distribution of training data. Next, we study transfer learning via fine-tuning from large scale datasets to small scale, domain-specific FGVC datasets. We propose a measure to estimate domain similarity via Earth Mover's Distance and demonstrate that transfer learning benefits from pre-training on a source domain that is similar to the target domain by this measure. Our proposed transfer learning outperforms ImageNet pre-training and obtains state-of-the-art results on multiple commonly used FGVC datasets.", "In this paper, we make two contributions to unsupervised domain adaptation (UDA) using the convolutional neural network (CNN). First, our approach transfers knowledge in all the convolutional layers through attention alignment. Most previous methods align high-level representations, e.g., activations of the fully connected (FC) layers. In these methods, however, the convolutional layers which underpin critical low-level domain knowledge cannot be updated directly towards reducing domain discrepancy. 
Specifically, we assume that the discriminative regions in an image are relatively invariant to image style changes. Based on this assumption, we propose an attention alignment scheme on all the target convolutional layers to uncover the knowledge shared by the source domain. Second, we estimate the posterior label distribution of the unlabeled data for target network training. Previous methods, which iteratively update the pseudo labels by the target network and refine the target network by the updated pseudo labels, are vulnerable to label estimation errors. Instead, our approach uses category distribution to calculate the cross-entropy loss for training, thereby ameliorating the error accumulation of the estimated labels. The two contributions allow our approach to outperform the state-of-the-art methods by +2.6% on the Office-31 dataset.", "We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application." ] }
1908.07195
2969990170
Most of the existing generative adversarial networks (GAN) for text generation suffer from the instability of reinforcement learning training algorithms such as policy gradient, leading to unstable performance. To tackle this problem, we propose a novel framework called Adversarial Reward Augmented Maximum Likelihood (ARAML). During adversarial training, the discriminator assigns rewards to samples which are acquired from a stationary distribution near the data rather than the generator's distribution. The generator is optimized with maximum likelihood estimation augmented by the discriminator's rewards instead of policy gradient. Experiments show that our model can outperform state-of-the-art text GANs with a more stable training process.
As mentioned above, MLE suffers from the exposure bias problem @cite_3 @cite_33 . Thus, reinforcement learning algorithms such as policy gradient @cite_33 and actor-critic @cite_20 have been introduced to text generation tasks. @cite_37 proposed an efficient and stable approach called Reward Augmented Maximum Likelihood (RAML), which connects the log-likelihood and expected rewards to incorporate the MLE training objective into the RL framework.
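For reference, the RAML objective can be written compactly (notation adapted from the cited paper; r is the task reward, e.g. BLEU or negative edit distance, and tau a temperature controlling how concentrated the exponentiated-payoff distribution is around the ground truth y*):

```latex
% RAML objective (notation adapted from the cited paper)
\mathcal{L}_{\mathrm{RAML}}(\theta)
  = -\sum_{(x,\, y^{*})} \; \mathbb{E}_{y \sim q(y \mid y^{*};\, \tau)}
      \big[ \log p_{\theta}(y \mid x) \big],
\qquad
q(y \mid y^{*};\, \tau)
  = \frac{\exp\big( r(y, y^{*}) / \tau \big)}
         {\sum_{y'} \exp\big( r(y', y^{*}) / \tau \big)}.
```

Sampling y from q (for example by randomly editing y*) and then training with ordinary MLE on those samples is what keeps the procedure as stable as standard MLE while still injecting reward information.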
{ "cite_N": [ "@cite_37", "@cite_20", "@cite_33", "@cite_3" ], "mid": [ "2964352247", "2953220522", "1925816294", "2487501366" ], "abstract": [ "We present an approach to training neural networks to generate sequences using actor-critic methods from reinforcement learning (RL). Current log-likelihood training methods are limited by the discrepancy between their training and testing modes, as models must generate tokens conditioned on their previous guesses rather than the ground-truth tokens. We address this problem by introducing a textit critic network that is trained to predict the value of an output token, given the policy of an textit actor network. This results in a training procedure that is much closer to the test phase, and allows us to directly optimize for a task-specific score such as BLEU. Crucially, since we leverage these techniques in the supervised learning setting rather than the traditional RL setting, we condition the critic network on the ground-truth output. We show that our method leads to improved performance on both a synthetic task, and for German-English machine translation. Our analysis paves the way for such methods to be applied in natural language generation tasks, such as machine translation, caption generation, and dialogue modelling.", "This paper presents a novel form of policy gradient for model-free reinforcement learning (RL) with improved exploration properties. Current policy-based methods use entropy regularization to encourage undirected exploration of the reward landscape, which is ineffective in high dimensional spaces with sparse rewards. We propose a more directed exploration strategy that promotes exploration of under-appreciated reward regions. An action sequence is considered under-appreciated if its log-probability under the current policy under-estimates its resulting reward. The proposed exploration strategy is easy to implement, requiring small modifications to an implementation of the REINFORCE algorithm. We evaluate the approach on a set of algorithmic tasks that have long challenged RL methods. Our approach reduces hyper-parameter sensitivity and demonstrates significant improvements over baseline methods. Our algorithm successfully solves a benchmark multi-digit addition task and generalizes to long sequences. This is, to our knowledge, the first time that a pure RL method has solved addition using only reward feedback.", "With the goal to generate more scalable algorithms with higher efficiency and fewer open parameters, reinforcement learning (RL) has recently moved towards combining classical techniques from optimal control and dynamic programming with modern learning techniques from statistical estimation theory. In this vein, this paper suggests to use the framework of stochastic optimal control with path integrals to derive a novel approach to RL with parameterized policies. While solidly grounded in value function estimation and optimal control based on the stochastic Hamilton-Jacobi-Bellman (HJB) equations, policy improvements can be transformed into an approximation problem of a path integral which has no open algorithmic parameters other than the exploration noise. The resulting algorithm can be conceived of as model-based, semi-model-based, or even model free, depending on how the learning problem is structured. The update equations have no danger of numerical instabilities as neither matrix inversions nor gradient learning rates are required. 
Our new algorithm demonstrates interesting similarities with previous RL research in the framework of probability matching and provides intuition why the slightly heuristically motivated probability matching approach can actually perform well. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. Finally, a learning experiment on a simulated 12 degree-of-freedom robot dog illustrates the functionality of our algorithm in a complex robot learning scenario. We believe that Policy Improvement with Path Integrals (PI2) offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL based on trajectory roll-outs.", "We present an approach to training neural networks to generate sequences using actor-critic methods from reinforcement learning (RL). Current log-likelihood training methods are limited by the discrepancy between their training and testing modes, as models must generate tokens conditioned on their previous guesses rather than the ground-truth tokens. We address this problem by introducing a network that is trained to predict the value of an output token, given the policy of an network. This results in a training procedure that is much closer to the test phase, and allows us to directly optimize for a task-specific score such as BLEU. Crucially, since we leverage these techniques in the supervised learning setting rather than the traditional RL setting, we condition the critic network on the ground-truth output. We show that our method leads to improved performance on both a synthetic task, and for German-English machine translation. Our analysis paves the way for such methods to be applied in natural language generation tasks, such as machine translation, caption generation, and dialogue modelling." ] }
1908.07195
2969990170
Most of the existing generative adversarial networks (GAN) for text generation suffer from the instability of reinforcement learning training algorithms such as policy gradient, leading to unstable performance. To tackle this problem, we propose a novel framework called Adversarial Reward Augmented Maximum Likelihood (ARAML). During adversarial training, the discriminator assigns rewards to samples which are acquired from a stationary distribution near the data rather than the generator's distribution. The generator is optimized with maximum likelihood estimation augmented by the discriminator's rewards instead of policy gradient. Experiments show that our model can outperform state-of-the-art text GANs with a more stable training process.
The most similar works to our model are RAML @cite_37 and MaliGAN @cite_1 : 1) Compared with RAML, our model adds a discriminator to learn the reward signals instead of choosing existing metrics as rewards. We believe that our model can adapt to various text generation tasks, particularly those without explicit evaluation metrics. 2) Unlike MaliGAN, we acquire samples from a fixed distribution near the real data rather than the generator's distribution, which is expected to make the training process more stable.
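A minimal sketch of the reward-augmented MLE update described above, assuming PyTorch; how the perturbed samples are drawn and how rewards are scaled follow the paper and are only approximated here (the batch-softmax normalization is an assumption, not the paper's exact scheme).

```python
import torch
import torch.nn.functional as F

def reward_augmented_mle_loss(logits, samples, rewards):
    """logits: (B, T, V) generator outputs on samples drawn from a fixed
    distribution near the real data; samples: (B, T) token ids;
    rewards: (B,) discriminator scores for those samples."""
    # Per-sequence negative log-likelihood under the generator.
    nll = F.cross_entropy(logits.transpose(1, 2), samples,
                          reduction="none").sum(dim=1)      # (B,)
    # Weight each sequence by its (normalized) discriminator reward --
    # an assumed normalization standing in for the paper's exact scaling.
    weights = torch.softmax(rewards, dim=0)
    return (weights * nll).sum()

loss = reward_augmented_mle_loss(
    torch.randn(4, 7, 100, requires_grad=True),   # toy generator logits
    torch.randint(0, 100, (4, 7)),                # toy perturbed samples
    torch.randn(4))                               # toy discriminator rewards
loss.backward()
```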
{ "cite_N": [ "@cite_37", "@cite_1" ], "mid": [ "2953343755", "2964268978", "2523469089", "2962912551" ], "abstract": [ "Recent approaches to question generation have used modifications to a Seq2Seq architecture inspired by advances in machine translation. Models are trained using teacher forcing to optimise only the one-step-ahead prediction. However, at test time, the model is asked to generate a whole sequence, causing errors to propagate through the generation process (exposure bias). A number of authors have proposed countering this bias by optimising for a reward that is less tightly coupled to the training data, using reinforcement learning. We optimise directly for quality metrics, including a novel approach using a discriminator learned directly from the training data. We confirm that policy gradient methods can be used to decouple training from the ground truth, leading to increases in the metrics used as rewards. We perform a human evaluation, and show that although these metrics have previously been assumed to be good proxies for question quality, they are poorly aligned with human judgement and the model simply learns to exploit the weaknesses of the reward source.", "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.", "As a new way of training generative models, Generative Adversarial Nets (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is non-trivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. 
The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.", "Binary classifiers are employed as discriminators in GAN-based unsupervised style transfer models to ensure that transferred sentences are similar to sentences in the target domain. One difficulty with the binary discriminator is that error signal is sometimes insufficient to train the model to produce rich-structured language. In this paper, we propose a technique of using a target domain language model as the discriminator to provide richer, token-level feedback during the learning process. Because our language model scores sentences directly using a product of locally normalized probabilities, it offers more stable and more useful training signal to the generator. We train the generator to minimize the negative log likelihood (NLL) of generated sentences evaluated by a language model. By using continuous approximation of the discrete samples, our model can be trained using back-propagation in an end-to-end way. Moreover, we find empirically with a language model as a structured discriminator, it is possible to eliminate the adversarial training steps using negative samples, thus making training more stable. We compare our model with previous work using convolutional neural networks (CNNs) as discriminators and show our model outperforms them significantly in three tasks including word substitution decipherment, sentiment modification and related language translation." ] }
1908.07325
2969224942
Recognizing multiple labels of images is a practical and challenging task, and significant progress has been made by searching semantic-aware regions and modeling label dependency. However, current methods cannot locate the semantic regions accurately due to the lack of part-level supervision or semantic guidance. Moreover, they cannot fully explore the mutual interactions among the semantic regions and do not explicitly model the label co-occurrence. To address these issues, we propose a Semantic-Specific Graph Representation Learning (SSGRL) framework that consists of two crucial modules: 1) a semantic decoupling module that incorporates category semantics to guide learning semantic-specific representations and 2) a semantic interaction module that correlates these representations with a graph built on the statistical label co-occurrence and explores their interactions via a graph propagation mechanism. Extensive experiments on public benchmarks show that our SSGRL framework outperforms current state-of-the-art methods by a sizable margin, e.g. with an mAP improvement of 2.5%, 2.6%, 6.7%, and 3.1% on the PASCAL VOC 2007 & 2012, Microsoft-COCO and Visual Genome benchmarks, respectively. Our codes and models are available at this https URL.
Recent progress on multi-label image classification relies on the combination of object localization and deep learning techniques @cite_28 @cite_10 . Generally, they introduced object proposals @cite_22 that were assumed to contain all possible foreground objects in the image and aggregated features extracted from all these proposals to incorporate local information. Although these methods achieved notable performance improvement, the step of region candidate localization usually incurred redundant computation costs and prevented the model from end-to-end training with deep neural networks. @cite_14 further utilized a learning-based region proposal network and integrated it with deep neural networks. Although this method could be jointly optimized, it required additional annotations of bounding boxes to train the proposal generation component. To solve this issue, some other works @cite_29 @cite_20 resorted to attention mechanisms to locate the informative regions, and these methods could be trained with image-level annotations in an end-to-end manner. For example, @cite_20 introduced a spatial transformer to adaptively search semantic-aware regions and then aggregated features from these regions to identify multiple labels. However, due to the lack of supervision and guidance, these methods could only locate the regions roughly.
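To illustrate the attention-based localization these works rely on (a generic sketch, not the exact module of any cited method nor SSGRL's semantic decoupling), the head below learns one spatial attention map per label from image-level supervision and pools a label-specific feature for classification; all sizes are placeholders.

```python
import torch
import torch.nn as nn

class CategoryAttentionPool(nn.Module):
    """One softmax-normalized spatial attention map per label; each map pools
    a label-specific feature that is scored by a per-label classifier."""
    def __init__(self, in_channels=2048, num_labels=80):
        super().__init__()
        self.att = nn.Conv2d(in_channels, num_labels, kernel_size=1)
        self.cls = nn.Parameter(torch.randn(num_labels, in_channels) * 0.01)

    def forward(self, feats):                           # feats: (B, C, H, W)
        a = self.att(feats).flatten(2).softmax(dim=-1)  # (B, L, HW) attention per label
        f = feats.flatten(2)                            # (B, C, HW)
        pooled = torch.einsum("blk,bck->blc", a, f)     # (B, L, C) label-specific features
        return (pooled * self.cls).sum(dim=-1)          # (B, L) label logits

logits = CategoryAttentionPool()(torch.randn(2, 2048, 14, 14))  # -> (2, 80)
```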
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_28", "@cite_29", "@cite_10", "@cite_20" ], "mid": [ "2950461853", "2410641892", "2560096627", "2963300078" ], "abstract": [ "Deep convolution neural networks (CNN) have demonstrated advanced performance on single-label image classification, and various progress also have been made to apply CNN methods on multi-label image classification, which requires to annotate objects, attributes, scene categories etc. in a single shot. Recent state-of-the-art approaches to multi-label image classification exploit the label dependencies in an image, at global level, largely improving the labeling capacity. However, predicting small objects and visual concepts is still challenging due to the limited discrimination of the global visual features. In this paper, we propose a Regional Latent Semantic Dependencies model (RLSD) to address this problem. The utilized model includes a fully convolutional localization architecture to localize the regions that may contain multiple highly-dependent labels. The localized regions are further sent to the recurrent neural networks (RNN) to characterize the latent semantic dependencies at the regional level. Experimental results on several benchmark datasets show that our proposed model achieves the best performance compared to the state-of-the-art models, especially for predicting small objects occurred in the images. In addition, we set up an upper bound model (RLSD+ft-RPN) using bounding box coordinates during training, the experimental results also show that our RLSD can approach the upper bound without using the bounding-box annotations, which is more realistic in the real world.", "Convolutional neural networks (CNNs) have shown great performance as general feature representations for object recognition applications. However, for multi-label images that contain multiple objects from different categories, scales and locations, global CNN features are not optimal. In this paper, we incorporate local information to enhance the feature discriminative power. In particular, we first extract object proposals from each image. With each image treated as a bag and object proposals extracted from it treated as instances, we transform the multi-label recognition problem into a multi-class multi-instance learning problem. Then, in addition to extracting the typical CNN feature representation from each proposal, we propose to make use of ground-truth bounding box annotations (strong labels) to add another level of local information by using nearest-neighbor relationships of local regions to form a multi-view pipeline. The proposed multi-view multiinstance framework utilizes both weak and strong labels effectively, and more importantly it has the generalization ability to even boost the performance of unseen categories by partial strong labels from other categories. Our framework is extensively compared with state-of-the-art handcrafted feature based methods and CNN based methods on two multi-label benchmark datasets. The experimental results validate the discriminative power and the generalization ability of the proposed framework. With strong labels, our framework is able to achieve state-of-the-art results in both datasets.", "Deep convolution neural networks (CNNs) have demonstrated advanced performance on single-label image classification, and various progress also has been made to apply CNN methods on multilabel image classification, which requires annotating objects, attributes, scene categories, etc., in a single shot. 
Recent state-of-the-art approaches to the multilabel image classification exploit the label dependencies in an image, at the global level, largely improving the labeling capacity. However, predicting small objects and visual concepts is still challenging due to the limited discrimination of the global visual features. In this paper, we propose a regional latent semantic dependencies model (RLSD) to address this problem. The utilized model includes a fully convolutional localization architecture to localize the regions that may contain multiple highly dependent labels. The localized regions are further sent to the recurrent neural networks to characterize the latent semantic dependencies at the regional level. Experimental results on several benchmark datasets show that our proposed model achieves the best performance compared to the state-of-the-art models, especially for predicting small objects occurring in the images. Also, we set up an upper bound model (RLSD+ft-RPN) using bounding-box coordinates during training, and the experimental results also show that our RLSD can approach the upper bound without using the bounding-box annotations, which is more realistic in the real world.", "This paper proposes a novel deep architecture to address multi-label image recognition, a fundamental and practical task towards general visual understanding. Current solutions for this task usually rely on an extra step of extracting hypothesis regions (i.e., region proposals), resulting in redundant computation and sub-optimal performance. In this work, we achieve the interpretable and contextualized multi-label image classification by developing a recurrent memorized-attention module. This module consists of two alternately performed components: i) a spatial transformer layer to locate attentional regions from the convolutional feature maps in a region-proposal-free way and ii) an LSTM (Long-Short Term Memory) sub-network to sequentially predict semantic labeling scores on the located regions while capturing the global dependencies of these regions. The LSTM also output the parameters for computing the spatial transformer. On large-scale benchmarks of multi-label image classification (e.g., MS-COCO and PASCAL VOC 07), our approach demonstrates superior performances over other existing state-of-the-arts in both accuracy and efficiency." ] }
1908.04441
2969162116
RGB-Thermal object tracking attempts to locate a target object using complementary visible and thermal infrared data. Existing RGB-T trackers fuse the different modalities by robust feature representation learning or adaptive modal weighting. However, how to integrate a dual attention mechanism into visual tracking has not been studied yet. In this paper, we propose two visual attention mechanisms for robust RGB-T object tracking. Specifically, the local attention is implemented by exploiting the common visual attention of RGB and thermal data to train deep classifiers. We also introduce the global attention, which is a multi-modal target-driven attention estimation network. It can provide global proposals for the classifier together with local proposals extracted from the previous tracking result. Extensive experiments on two RGB-T benchmark datasets validate the effectiveness of our proposed algorithm.
RGB-T tracking receives more and more attention in the computer vision community with the growing availability of thermal infrared sensors. Wu @cite_15 concatenate the image patches from the RGB and thermal sources, and then sparsely represent each sample in the target template space for tracking. Modal weights are introduced for each source to represent the image quality and are combined with sparse representation in a Bayesian filtering framework to perform object tracking @cite_0 . Zhu @cite_17 propose a quality-aware feature aggregation network to fuse multi-layer deep features and multimodal information adaptively for RGB-T tracking. Although these works achieve good performance on RGB-T benchmarks, they still adopt a local search strategy for target localization, and few of them explore long-term visual attention for their trackers.
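As a toy illustration of adaptive modal weighting, the two modality features can be fused with reliability weights normalized by a softmax. The reliability scores below are placeholders for the learned quality estimates of the cited trackers, not their actual quantities.

```python
import numpy as np

def fuse_modalities(feat_rgb, feat_t, score_rgb, score_t):
    """Reliability-weighted fusion of two same-shaped modality feature maps."""
    w = np.exp(np.array([score_rgb, score_t]))
    w /= w.sum()                      # softmax over the two modal reliability scores
    return w[0] * feat_rgb + w[1] * feat_t
```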
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_17" ], "mid": [ "2896228140", "2901716381", "2775609985", "2577056945" ], "abstract": [ "Due to the complementary benefits of visible (RGB) and thermal infrared (T) data, RGB-T object tracking attracts more and more attention recently for boosting the performance under adverse illumination conditions. Existing RGB-T tracking methods usually localize a target object with a bounding box, in which the trackers or detectors is often affected by the inclusion of background clutter. To address this problem, this paper presents a novel approach to suppress background effects for RGB-T tracking. Our approach relies on a novel cross-modal manifold ranking algorithm. First, we integrate the soft cross-modality consistency into the ranking model which allows the sparse inconsistency to account for the different properties between these two modalities. Second, we propose an optimal query learning method to handle label noises of queries. In particular, we introduce an intermediate variable to represent the optimal labels, and formulate it as a (l_1 )-optimization based sparse learning problem. Moreover, we propose a single unified optimization algorithm to solve the proposed model with stable and efficient convergence behavior. Finally, the ranking results are incorporated into the patch-based object features to address the background effects, and the structured SVM is then adopted to perform RGB-T tracking. Extensive experiments suggest that the proposed approach performs well against the state-of-the-art methods on large-scale benchmark datasets.", "This paper investigates how to perform robust visual tracking in adverse and challenging conditions using complementary visual and thermal infrared data (RGB-T tracking). We propose a novel deep network architecture \"quality-aware Feature Aggregation Network (FANet)\" to achieve quality-aware aggregations of both hierarchical features and multimodal information for robust online RGB-T tracking. Unlike existing works that directly concatenate hierarchical deep features, our FANet learns the layer weights to adaptively aggregate them to handle the challenge of significant appearance changes caused by deformation, abrupt motion, background clutter and occlusion within each modality. Moreover, we employ the operations of max pooling, interpolation upsampling and convolution to transform these hierarchical and multi-resolution features into a uniform space at the same resolution for more effective feature aggregation. In different modalities, we elaborately design a multimodal aggregation sub-network to integrate all modalities collaboratively based on the predicted reliability degrees. Extensive experiments on large-scale benchmark datasets demonstrate that our FANet significantly outperforms other state-of-the-art RGB-T tracking methods.", "This paper investigates how to integrate the complementary information from RGB and thermal (RGB-T) sources for object tracking. We propose a novel Convolutional Neural Network (ConvNet) architecture, including a two-stream ConvNet and a FusionNet, to achieve adaptive fusion of different source data for robust RGB-T tracking. Both RGB and thermal streams extract generic semantic information of the target object. In particular, the thermal stream is pre-trained on the ImageNet dataset to encode rich semantic information, and then fine-tuned using thermal images to capture the specific properties of thermal information. 
For adaptive fusion of different modalities while avoiding redundant noises, the FusionNet is employed to select most discriminative feature maps from the outputs of the two-stream ConvNet, and updated online to adapt to appearance variations of the target object. Finally, the object locations are efficiently predicted by applying the multi-channel correlation filter on the fused feature maps. Extensive experiments on the recently public benchmark GTOT verify the effectiveness of the proposed approach against other state-of-the-art RGB-T trackers.", "This paper studies the problem of object tracking in challenging scenarios by leveraging multimodal visual data. We propose a grayscale-thermal object tracking method in Bayesian filtering framework based on multitask Laplacian sparse representation. Given one bounding box, we extract a set of overlapping local patches within it, and pursue the multitask joint sparse representation for grayscale and thermal modalities. Then, the representation coefficients of the two modalities are concatenated into a vector to represent the feature of the bounding box. Moreover, the similarity between each patch pair is deployed to refine their representation coefficients in the sparse representation, which can be formulated as the Laplacian sparse representation. We also incorporate the modal reliability into the Laplacian sparse representation to achieve an adaptive fusion of different source data. Experiments on two grayscale-thermal datasets suggest that the proposed approach outperforms both grayscale and grayscale-thermal tracking approaches." ] }
1908.04441
2969162116
RGB-Thermal object tracking attempts to locate a target object using complementary visible and thermal infrared data. Existing RGB-T trackers fuse the different modalities by robust feature representation learning or adaptive modal weighting. However, how to integrate a dual attention mechanism into visual tracking has not been studied yet. In this paper, we propose two visual attention mechanisms for robust RGB-T object tracking. Specifically, the local attention is implemented by exploiting the common visual attention of RGB and thermal data to train deep classifiers. We also introduce the global attention, which is a multi-modal target-driven attention estimation network. It can provide global proposals for the classifier together with local proposals extracted from the previous tracking result. Extensive experiments on two RGB-T benchmark datasets validate the effectiveness of our proposed algorithm.
Attention mechanisms originate from the study of human cognitive neuroscience @cite_9 . In visual tracking, the cosine window map @cite_8 and the Gaussian window map @cite_3 are widely used in DCF trackers to suppress the boundary effect, and they can be interpreted as one type of visual spatial attention. For short-term tracking, DAVT @cite_5 used a discriminative spatial attention, and ACFN @cite_6 developed an attentional mechanism that chooses a subset of the associated correlation filters for visual tracking. Recently, RASNet @cite_13 integrated spatial attention, channel attention and residual attention to achieve state-of-the-art tracking accuracy. Different from these works, we propose novel local and global attention mechanisms for robust RGB-T tracking.
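To make the spatial-attention reading of the cosine window concrete, here is a minimal sketch: a 2-D Hann window multiplies a DCF response map so that candidates far from the previous target position are suppressed. The response map below is random, standing in for a real correlation output.

```python
import numpy as np

def cosine_window(h, w):
    """2-D cosine (Hann) window peaked at the center."""
    return np.outer(np.hanning(h), np.hanning(w))

response = np.random.rand(64, 64)          # placeholder DCF response map
attended = response * cosine_window(*response.shape)
peak = np.unravel_index(np.argmax(attended), attended.shape)
```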
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_6", "@cite_3", "@cite_5", "@cite_13" ], "mid": [ "2891894830", "2964198573", "2964105113", "2887556118" ], "abstract": [ "The field of object detection has made great progress in recent years. Most of these improvements are derived from using a more sophisticated convolutional neural network. However, in the case of humans, the attention mechanism, global structure information, and local details of objects all play an important role for detecting an object. In this paper, we propose a novel fully convolutional network, named as Attention CoupleNet, to incorporate the attention-related information and global and local information of objects to improve the detection performance. Specifically, we first design a cascade attention structure to perceive the global scene of the image and generate class-agnostic attention maps. Then the attention maps are encoded into the network to acquire object-aware features. Next, we propose a unique fully convolutional coupling structure to couple global structure and local parts of the object to further formulate a discriminative feature representation. To fully explore the global and local properties, we also design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local information. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging data sets, i.e., a mAP of 85.7 on VOC07, 84.3 on VOC12, and 35.4 on COCO. Codes are publicly available at https: github.com tshizys CoupleNet .", "Discriminative correlation filters (DCF) with deep convolutional features have achieved favorable performance in recent tracking benchmarks. However, most of existing DCF trackers only consider appearance features of current frame, and hardly benefit from motion and inter-frame information. The lack of temporal information degrades the tracking performance during challenges such as partial occlusion and deformation. In this paper, we propose the FlowTrack, which focuses on making use of the rich flow information in consecutive frames to improve the feature representation and the tracking accuracy. The FlowTrack formulates individual components, including optical flow estimation, feature extraction, aggregation and correlation filters tracking as special layers in network. To the best of our knowledge, this is the first work to jointly train flow and tracking task in deep learning framework. Then the historical feature maps at predefined intervals are warped and aggregated with current ones by the guiding of flow. For adaptive aggregation, we propose a novel spatial-temporal attention mechanism. In experiments, the proposed method achieves leading performance on OTB2013, OTB2015, VOT2015 and VOT2016.", "In this paper, we propose to incorporate convolutional neural networks with a multi-context attention mechanism into an end-to-end framework for human pose estimation. We adopt stacked hourglass networks to generate attention maps from features at multiple resolutions with various semantics. The Conditional Random Field (CRF) is utilized to model the correlations among neighboring regions in the attention map. We further combine the holistic attention model, which focuses on the global consistency of the full human body, and the body part attention model, which focuses on detailed descriptions for different body parts. 
Hence our model has the ability to focus on different granularity from local salient regions to global semantic consistent spaces. Additionally, we design novel Hourglass Residual Units (HRUs) to increase the receptive field of the network. These units are extensions of residual units with a side branch incorporating filters with larger receptive field, hence features with various scales are learned and combined within the HRUs. The effectiveness of the proposed multi-context attention mechanism and the hourglass residual units is evaluated on two widely used human pose estimation benchmarks. Our approach outperforms all existing methods on both benchmarks over all the body parts. Code has been made publicly available.", "Abstract Visual tracking algorithms based on structured output support vector machine (SOSVM) have demonstrated excellent performance. However, sampling methods and optimization strategies of SOSVM undesirably increase the computational overloads, which hinder real-time application of these algorithms. Moreover, due to the lack of high-dimensional features and dense training samples, SOSVM-based algorithms are unstable to deal with various challenging scenarios, such as occlusions and scale variations. Recently, visual tracking algorithms based on discriminative correlation filters (DCF), especially the combination of DCF and features from deep convolutional neural networks (CNN), have been successfully applied to visual tracking, and attains surprisingly good performance on recent benchmarks. The success is mainly attributed to two aspects: the circular correlation properties of DCF and the powerful representation capabilities of CNN features. Nevertheless, compared with SOSVM, DCF-based algorithms are restricted to simple ridge regression which has a weaker discriminative ability. In this paper, a novel circular and structural operator tracker (CSOT) is proposed for high performance visual tracking, it not only possesses the powerful discriminative capability of SOSVM but also efficiently inherits the superior computational efficiency of DCF. Based on the proposed circular and structural operators, a set of primal confidence score maps can be obtained by circular correlating feature maps with their corresponding structural correlation filters. Furthermore, an implicit interpolation is applied to convert the multi-resolution feature maps to the continuous domain and make all primal confidence score maps have the same spatial resolution. Then, we exploit an efficient ensemble post-processor based on relative entropy, which can coalesce primal confidence score maps and create an optimal confidence score map for more accurate localization. The target is localized on the peak of the optimal confidence score map. Besides, we introduce a collaborative optimization strategy to update circular and structural operators by iteratively training structural correlation filters, which significantly reduces computational complexity and improves robustness. Experimental results demonstrate that our approach achieves state-of-the-art performance in mean AUC scores of 71.5 and 69.4 on the OTB2013 and OTB2015 benchmarks respectively, and obtains a third-best expected average overlap (EAO) score of 29.8 on the VOT2017 benchmark." ] }
1908.04351
2969115771
Convolutional Neural Networks (CNN) have become state-of-the-art in the field of image classification. However, not everything is understood about their inner representations. This paper tackles the interpretability and explainability of the predictions of CNNs for multi-class classification problems. Specifically, we propose a novel visualization method of pixel-wise input attribution called Softmax-Gradient Layer-wise Relevance Propagation (SGLRP). The proposed model is a class-discriminative extension of Deep Taylor Decomposition (DTD) that uses the gradient of the softmax to back propagate the relevance of the output probability to the input image. Through qualitative and quantitative analysis, we demonstrate that SGLRP can successfully localize and attribute the regions of input images which contribute to a target object's classification. We show that the proposed method excels at discriminating the target object's class from the other possible objects in the images. We confirm that SGLRP performs better than existing Layer-wise Relevance Propagation (LRP) based methods and can help in understanding the decision process of CNNs.
Input modification methods observe the changes in the outputs caused by changes to the inputs. This can be done using perturbed variants @cite_0 @cite_40 , masks @cite_36 , or noise @cite_1 . For instance, Zeiler and Fergus @cite_36 create heatmaps based on the drop in prediction probability when input images are masked by patches at different locations. The problem with these methods is that they require exhaustive input modifications and can be computationally costly.
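A minimal sketch of such an input-modification probe, in the spirit of the masking experiment above: slide a gray patch over the image and record how much the target-class probability drops. The `predict` callable, patch size and fill value are assumptions, not the cited protocol.

```python
import numpy as np

def occlusion_map(image, predict, target_class, patch=16, stride=8, fill=0.5):
    """image: (H, W, C) in [0, 1]; predict: (N, H, W, C) -> (N, K) probabilities.
    Returns a heatmap of probability drops for a mask at each location."""
    h, w, _ = image.shape
    base = predict(image[None])[0, target_class]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = fill
            heat[i, j] = base - predict(masked[None])[0, target_class]
    return heat   # large values mark regions the prediction depends on
```

The nested loop makes the exhaustive cost mentioned above explicit: one forward pass per mask position.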
{ "cite_N": [ "@cite_0", "@cite_40", "@cite_1", "@cite_36" ], "mid": [ "2055349880", "2045105614", "2154822588", "2607406448" ], "abstract": [ "In this paper, we propose techniques to make use of two complementary bottom-up features, image edges and texture patches, to guide top-down object segmentation towards higher precision. We build upon the part-based pose-let detector, which can predict masks for numerous parts of an object. For this purpose we extend poselets to 19 other categories apart from person. We non-rigidly align these part detections to potential object contours in the image, both to increase the precision of the predicted object mask and to sort out false positives. We spatially aggregate object information via a variational smoothing technique while ensuring that object regions do not overlap. Finally, we propose to refine the segmentation based on self-similarity defined on small image patches. We obtain competitive results on the challenging Pascal VOC benchmark. On four classes we achieve the best numbers to-date.", "In this paper, a new object-based method to estimate noise in magnitude MR images is proposed. The main advantage of this object-based method is its robustness to background artefacts such as ghosting. The proposed method is based on the adaptation of the Median Absolute Deviation (MAD) estimator in the wavelet domain for Rician noise. The MAD is a robust and efficient estimator initially proposed to estimate Gaussian noise. In this work, the adaptation of MAD operator for Rician noise is performed by using only the wavelet coefficients corresponding to the object and by correcting the estimation with an iterative scheme based on the SNR of the image. During the evaluation, a comparison of the proposed method with several state-of-the-art methods is performed. A quantitative validation on synthetic phantom with and without artefacts is presented. A new validation framework is proposed to perform quantitative validation on real data. The impact of the accuracy of noise estimation on the performance of a denoising filter is also studied. The results obtained on synthetic images show the accuracy and the robustness of the proposed method. Within the validation on real data, the proposed method obtained very competitive results compared to the methods under study.", "This paper presents a method for solving inverse mapping of a continuous function learned by a multilayer feedforward mapping network. The method is based on the iterative update of input vector toward a solution, while escaping from local minima. The input vector update is determined by the pseudo-inverse of the gradient of Lyapunov function, and, should an optimal solution be searched for, the projection of the gradient of a performance index on the null space of the gradient of Lyapunov function. The update rule is allowed to detect an input vector approaching local minima through a phenomenon called \"update explosion\". At or near local minima, the input vector is guided by an escape trajectory generated based on \"global information\", where global information is referred to here as predefined or known information on forward mapping; or the input vector is relocated to a new position based on the probability density function (PDF) constructed over the input vector space by Parzen estimate. The constructed PDF reflects the history of local minima detected during the search process, and represents the probability that a particular input vector can lead to a solution based on the update rule. 
The proposed method has a substantial advantage in computational complexity as well as convergence property over the conventional methods based on Jacobian pseudo-inverse or Jacobian transpose. >", "We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the 'gradient' component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Sheep-Logan phantom as well as a head CT. The outcome is compared against filtered backprojection and total variation reconstruction and the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of pixel images in about 0.4 s using a single graphics processing unit (GPU)." ] }
1908.04351
2969115771
Convolutional Neural Networks (CNN) have become state-of-the-art in the field of image classification. However, not everything is understood about their inner representations. This paper tackles the interpretability and explainability of the predictions of CNNs for multi-class classification problems. Specifically, we propose a novel visualization method of pixel-wise input attribution called Softmax-Gradient Layer-wise Relevance Propagation (SGLRP). The proposed model is a class-discriminative extension of Deep Taylor Decomposition (DTD) that uses the gradient of the softmax to back propagate the relevance of the output probability to the input image. Through qualitative and quantitative analysis, we demonstrate that SGLRP can successfully localize and attribute the regions of input images which contribute to a target object's classification. We show that the proposed method excels at discriminating the target object's class from the other possible objects in the images. We confirm that SGLRP performs better than existing Layer-wise Relevance Propagation (LRP) based methods and can help in understanding the decision process of CNNs.
Back propagation-based methods trace the contribution of the output backwards through the network to the input. In the classic example, DeconvNet @cite_36 @cite_35 back propagates the output through the network; however, instead of using the Rectified Linear Unit (ReLU) gating from the forward pass, DeconvNet applies ReLU to the back propagated signal itself. Guided Back Propagation @cite_15 is similar to DeconvNet except that it blocks the negative values of both the forward and the backward signals.
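The difference between these schemes reduces to how ReLU is treated in the backward pass, which a toy numpy function can make explicit (the mode names are ours):

```python
import numpy as np

def relu_backward(grad_out, forward_input, mode="backprop"):
    """Backward pass through one ReLU under three visualization schemes."""
    if mode == "backprop":    # standard gradient: gate by the forward input
        return grad_out * (forward_input > 0)
    if mode == "deconvnet":   # DeconvNet: apply ReLU to the backward signal only
        return grad_out * (grad_out > 0)
    if mode == "guided":      # Guided Backprop: combine both gates
        return grad_out * (forward_input > 0) * (grad_out > 0)
    raise ValueError(mode)
```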
{ "cite_N": [ "@cite_36", "@cite_35", "@cite_15" ], "mid": [ "1855112655", "1498436455", "2519766107", "2964115671" ], "abstract": [ "Back-propagation has been the workhorse of recent successes of deep learning but it relies on infinitesimal effects (partial derivatives) in order to perform credit assignment. This could become a serious issue as one considers deeper and more non-linear functions, e.g., consider the extreme case of non-linearity where the relation between parameters and cost is actually discrete. Inspired by the biological implausibility of back-propagation, a few approaches have been proposed in the past that could play a similar credit assignment role. In this spirit, we explore a novel approach to credit assignment in deep networks that we call target propagation. The main idea is to compute targets rather than gradients, at each layer. Like gradients, they are propagated backwards. In a way that is related but different from previously proposed proxies for back-propagation which rely on a backwards network with symmetric weights, target propagation relies on auto-encoders at each layer. Unlike back-propagation, it can be applied even when units exchange stochastic bits rather than real numbers. We show that a linear correction for the imperfectness of the auto-encoders, called difference target propagation, is very effective to make target propagation actually work, leading to results comparable to back-propagation for deep networks with discrete and continuous units and denoising auto-encoders and achieving state of the art for stochastic networks.", "We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1.", "This paper proposes an alternating back-propagation algorithm for learning the generator network model. The model is a non-linear generalization of factor analysis. In this model, the mapping from the continuous latent factors to the observed signal is parametrized by a convolutional neural network. The alternating back-propagation algorithm iterates the following two steps: (1) Inferential back-propagation, which infers the latent factors by Langevin dynamics or gradient descent. (2) Learning back-propagation, which updates the parameters given the inferred latent factors by gradient descent. The gradient computations in both steps are powered by back-propagation, and they share most of their code in common. We show that the alternating back-propagation algorithm can learn realistic generator models of natural images, video sequences, and sounds. Moreover, it can also be used to learn from incomplete or indirect training data.", "Artificial neural networks are most commonly trained with the back-propagation algorithm, where the gradient for learning is provided by back-propagating the error, layer by layer, from the output layer to the hidden layers. 
A recently discovered method called feedback-alignment shows that the weights used for propagating the error backward don't have to be symmetric with the weights used for propagation the activation forward. In fact, random feedback weights work evenly well, because the network learns how to make the feedback useful. In this work, the feedback alignment principle is used for training hidden layers more independently from the rest of the network, and from a zero initial condition. The error is propagated through fixed random feedback connections directly from the output layer to each hidden layer. This simple method is able to achieve zero training error even in convolutional networks and very deep networks, completely without error back-propagation. The method is a step towards biologically plausible machine learning because the error signal is almost local, and no symmetric or reciprocal weights are required. Experiments show that the test performance on MNIST and CIFAR is almost as good as those obtained with back-propagation for fully connected networks. If combined with dropout, the method achieves 1.45 error on the permutation invariant MNIST task." ] }
1908.04351
2969115771
Convolutional Neural Networks (CNN) have become state-of-the-art in the field of image classification. However, not everything is understood about their inner representations. This paper tackles the interpretability and explainability of the predictions of CNNs for multi-class classification problems. Specifically, we propose a novel visualization method of pixel-wise input attribution called Softmax-Gradient Layer-wise Relevance Propagation (SGLRP). The proposed model is a class-discriminative extension of Deep Taylor Decomposition (DTD) that uses the gradient of the softmax to back propagate the relevance of the output probability to the input image. Through qualitative and quantitative analysis, we demonstrate that SGLRP can successfully localize and attribute the regions of input images which contribute to a target object's classification. We show that the proposed method excels at discriminating the target object's class from the other possible objects in the images. We confirm that SGLRP performs better than existing Layer-wise Relevance Propagation (LRP) based methods and can help in understanding the decision process of CNNs.
Using the gradients of activation functions to back propagate relevance can sometimes lead to misleading contribution attributions due to discontinuous or vanishing gradients. To overcome this, instead of using the gradients to back propagate the relevance, Deep Learning Important FeaTures (DeepLIFT) @cite_21 uses the difference between the activations of the input and those of a reference input. SHapley Additive exPlanations (SHAP) @cite_25 extends DeepLIFT to include Shapley value approximations. Another way to address this problem is the use of Integrated Gradients @cite_26 @cite_27 , which averages the gradients along a path from a baseline input to the actual input.
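Integrated Gradients, for instance, is compact enough to sketch under its usual formulation: average the model gradients along a straight path from a baseline to the input and scale by the input difference. The `grad_fn` callable below is an assumption standing in for the gradient of the model's target output.

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    """Riemann approximation of (x - baseline) * integral of grad along the path."""
    alphas = np.linspace(0.0, 1.0, steps + 1)[1:]   # skip alpha = 0
    path_grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * path_grads.mean(axis=0)
```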
{ "cite_N": [ "@cite_27", "@cite_21", "@cite_25", "@cite_26" ], "mid": [ "2605409611", "2952688545", "2346578521", "2594633041" ], "abstract": [ "The purported \"black box\" nature of neural networks is a barrier to adoption in applications where interpretability is essential. Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. By optionally giving separate consideration to positive and negative contributions, DeepLIFT can also reveal dependencies which are missed by other approaches. Scores can be computed efficiently in a single backward pass. We apply DeepLIFT to models trained on MNIST and simulated genomic data, and show significant advantages over gradient-based methods. Video tutorial: http: goo.gl qKb7pL, code: http: goo.gl RM8jvH.", "The purported \"black box\"' nature of neural networks is a barrier to adoption in applications where interpretability is essential. Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. By optionally giving separate consideration to positive and negative contributions, DeepLIFT can also reveal dependencies which are missed by other approaches. Scores can be computed efficiently in a single backward pass. We apply DeepLIFT to models trained on MNIST and simulated genomic data, and show significant advantages over gradient-based methods. A detailed video tutorial on the method is at this http URL and code is at this http URL.", "The purported \"black box\" nature of neural networks is a barrier to adoption in applications where interpretability is essential. Here we present DeepLIFT (Learning Important FeaTures), an efficient and effective method for computing importance scores in a neural network. DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. We apply DeepLIFT to models trained on natural images and genomic data, and show significant advantages over gradient-based methods.", "We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms— Sensitivity and Implementation Invariance that attribution methods ought to satisfy. We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better." ] }
1908.04351
2969115771
Convolutional Neural Networks (CNN) have become state-of-the-art in the field of image classification. However, not everything is understood about their inner representations. This paper tackles the interpretability and explainability of the predictions of CNNs for multi-class classification problems. Specifically, we propose a novel visualization method of pixel-wise input attribution called Softmax-Gradient Layer-wise Relevance Propagation (SGLRP). The proposed model is a class-discriminative extension of Deep Taylor Decomposition (DTD) that uses the gradient of the softmax to back propagate the relevance of the output probability to the input image. Through qualitative and quantitative analysis, we demonstrate that SGLRP can successfully localize and attribute the regions of input images which contribute to a target object's classification. We show that the proposed method excels at discriminating the target object's class from the other possible objects in the images. We confirm that SGLRP performs better than existing Layer-wise Relevance Propagation (LRP) based methods and can help in understanding the decision process of CNNs.
Gradient-weighted Class Activation Mapping (Grad-CAM) @cite_24 is a generalization of CAM that can target any layer and introduces gradient information into CAM. The problem with CAM-based methods is that they specifically target high-level layers, which yields coarse maps. Therefore, hybrid methods, such as Guided Grad-CAM @cite_24 , combine the qualities of CAMs with pixel-wise methods such as guided back propagation.
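The core Grad-CAM computation is a few lines once the activations and gradients of the chosen layer have been extracted; this sketch assumes they are given as arrays and is not tied to any framework.

```python
import numpy as np

def grad_cam(acts, grads):
    """acts, grads: (H, W, C) conv-layer activations and the gradient of the
    target-class score with respect to them."""
    weights = grads.mean(axis=(0, 1))                 # global-average-pooled gradients
    cam = np.tensordot(acts, weights, axes=([2], [0]))
    cam = np.maximum(cam, 0)                          # keep positively contributing regions
    return cam / (cam.max() + 1e-8)                   # normalize to [0, 1] for display
```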
{ "cite_N": [ "@cite_24" ], "mid": [ "2549418531", "2962858109", "2616247523", "2963152423" ], "abstract": [ "We propose a technique for making Convolutional Neural Network (CNN)-based models more transparent by visualizing input regions that are 'important' for predictions -- or visual explanations. Our approach, called Gradient-weighted Class Activation Mapping (Grad-CAM), uses class-specific gradient information to localize important regions. These localizations are combined with existing pixel-space visualizations to create a novel high-resolution and class-discriminative visualization called Guided Grad-CAM. These methods help better understand CNN-based models, including image captioning and visual question answering (VQA) models. We evaluate our visual explanations by measuring their ability to discriminate between classes, to inspire trust in humans, and their correlation with occlusion maps. Grad-CAM provides a new way to understand CNN-based models. We have released code, an online demo hosted on CloudCV, and a full version of this extended abstract.", "We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad- CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are more faithful to the underlying model, and (d) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https: github.com ramprs grad-cam along with a demo on CloudCV [2] and video at youtu.be COjUB9Izk6E.", "We propose a technique for producing \"visual explanations\" for decisions from a large class of CNN-based models, making them more transparent. 
Our approach - Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept, flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, GradCAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. VQA) or reinforcement learning, without any architectural changes or re-training. We combine GradCAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes (showing that seemingly unreasonable predictions have reasonable explanations), (b) are robust to adversarial images, (c) outperform previous methods on weakly-supervised localization, (d) are more faithful to the underlying model and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, our visualizations show that even non-attention based models can localize inputs. Finally, we conduct human studies to measure if GradCAM explanations help users establish trust in predictions from deep networks and show that GradCAM helps untrained users successfully discern a \"stronger\" deep network from a \"weaker\" one. Our code is available at this https URL A demo and a video of the demo can be found at this http URL and youtu.be COjUB9Izk6E.", "Over the last decade, Convolutional Neural Network (CNN) models have been highly successful in solving complex vision based problems. However, deep models are perceived as \"black box\" methods considering the lack of understanding of their internal functioning. There has been a significant recent interest to develop explainable deep learning models, and this paper is an effort in this direction. Building on a recently proposed method called Grad-CAM, we propose Grad-CAM++ to provide better visual explanations of CNN model predictions (when compared to Grad-CAM), in terms of better localization of objects as well as explaining occurrences of multiple objects of a class in a single image. We provide a mathematical explanation for the proposed method, Grad-CAM++, which uses a weighted combination of the positive partial derivatives of the last convolutional layer feature maps with respect to a specific class score as weights to generate a visual explanation for the class label under consideration. Our extensive experiments and evaluations, both subjective and objective, on standard datasets showed that Grad-CAM++ indeed provides better visual explanations for a given CNN architecture when compared to Grad-CAM." ] }
1908.04351
2969115771
Convolutional Neural Networks (CNN) have become state-of-the-art in the field of image classification. However, not everything is understood about their inner representations. This paper tackles the interpretability and explainability of the predictions of CNNs for multi-class classification problems. Specifically, we propose a novel visualization method of pixel-wise input attribution called Softmax-Gradient Layer-wise Relevance Propagation (SGLRP). The proposed model is a class-discriminative extension of Deep Taylor Decomposition (DTD) that uses the gradient of the softmax to back propagate the relevance of the output probability to the input image. Through qualitative and quantitative analysis, we demonstrate that SGLRP can successfully localize and attribute the regions of input images which contribute to a target object's classification. We show that the proposed method excels at discriminating the target object's class from the other possible objects in the images. We confirm that SGLRP performs better than existing Layer-wise Relevance Propagation (LRP) based methods and can help in understanding the decision process of CNNs.
There are also many other visualization and explanation methods for neural networks. For example, by observing the maximal activations of layers, it is possible to visualize the contributing regions of feature maps @cite_36 @cite_7 , or attention can be visualized directly @cite_30 @cite_34 . In addition, the hidden layers of networks can be analyzed using lower-dimensional representations, such as dimensionality reduction @cite_8 , Relative Neighbor Graphs (RNG) @cite_28 , matrix factorization @cite_9 , and modular representation by community detection @cite_44 .
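The maximal-activation inspection mentioned first is straightforward to sketch: rank dataset images by how strongly they excite a chosen channel of some layer. The `feature_fn` extractor is a placeholder.

```python
import numpy as np

def top_activating(images, feature_fn, channel, k=9):
    """Indices of the k images whose mean activation of `channel` is largest.
    feature_fn maps one image to an (H, W, C) feature map."""
    acts = np.array([feature_fn(img)[:, :, channel].mean() for img in images])
    return np.argsort(acts)[::-1][:k]
```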
{ "cite_N": [ "@cite_30", "@cite_7", "@cite_8", "@cite_36", "@cite_28", "@cite_9", "@cite_44", "@cite_34" ], "mid": [ "2769943106", "2963081790", "2950275949", "1960777822" ], "abstract": [ "The success of recent deep convolutional neural networks (CNNs) depends on learning hidden representations that can summarize the important factors of variation behind the data. However, CNNs often criticized as being black boxes that lack interpretability, since they have millions of unexplained model parameters. In this work, we describe Network Dissection, a method that interprets networks by providing labels for the units of their deep visual representations. The proposed method quantifies the interpretability of CNN representations by evaluating the alignment between individual hidden units and a set of visual semantic concepts. By identifying the best alignments, units are given human interpretable labels across a range of objects, parts, scenes, textures, materials, and colors. The method reveals that deep representations are more transparent and interpretable than expected: we find that representations are significantly more interpretable than they would be under a random equivalently powerful basis. We apply the method to interpret and compare the latent representations of various network architectures trained to solve different supervised and self-supervised training tasks. We then examine factors affecting the network interpretability such as the number of the training iterations, regularizations, different initializations, and the network depth and width. Finally we show that the interpreted units can be used to provide explicit explanations of a prediction given by a CNN for an image. Our results highlight that interpretability is an important property of deep neural networks that provides new insights into their hierarchical structure.", "The success of recent deep convolutional neural networks (CNNs) depends on learning hidden representations that can summarize the important factors of variation behind the data. In this work, we describe Network Dissection, a method that interprets networks by providing meaningful labels to their individual units. The proposed method quantifies the interpretability of CNN representations by evaluating the alignment between individual hidden units and visual semantic concepts. By identifying the best alignments, units are given interpretable labels ranging from colors, materials, textures, parts, objects and scenes. The method reveals that deep representations are more transparent and interpretable than they would be under a random equivalently powerful basis. We apply our approach to interpret and compare the latent representations of several network architectures trained to solve a wide range of supervised and self-supervised tasks. We then examine factors affecting the network interpretability such as the number of the training iterations, regularizations, different initialization parameters, as well as networks depth and width. Finally we show that the interpreted units can be used to provide explicit explanations of a given CNN prediction for an image. Our results highlight that interpretability is an important property of deep neural networks that provides new insights into what hierarchical structures can learn.", "In this paper we evaluate the quality of the activation layers of a convolutional neural network (CNN) for the gen- eration of object proposals. 
We generate hypotheses in a sliding-window fashion over different activation layers and show that the final convolutional layers can find the object of interest with high recall but poor localization due to the coarseness of the feature maps. Instead, the first layers of the network can better localize the object of interest but with a reduced recall. Based on this observation we design a method for proposing object locations that is based on CNN features and that combines the best of both worlds. We build an inverse cascade that, going from the final to the initial convolutional layers of the CNN, selects the most promising object locations and refines their boxes in a coarse-to-fine manner. The method is efficient, because i) it uses the same features extracted for detection, ii) it aggregates features using integral images, and iii) it avoids a dense evaluation of the proposals due to the inverse coarse-to-fine cascade. The method is also accurate; it outperforms most of the previously proposed object proposals approaches and when plugged into a CNN-based detector produces state-of-the- art detection performance.", "A number of recent studies have shown that a Deep Convolutional Neural Network (DCNN) pretrained on a large dataset can be adopted as a universal image descriptor, and that doing so leads to impressive performance at a range of image classification tasks. Most of these studies, if not all, adopt activations of the fully-connected layer of a DCNN as the image or region representation and it is believed that convolutional layer activations are less discriminative. This paper, however, advocates that if used appropriately, convolutional layer activations constitute a powerful image representation. This is achieved by adopting a new technique proposed in this paper called cross-convolutional-layer pooling. More specifically, it extracts subarrays of feature maps of one convolutional layer as local features, and pools the extracted features with the guidance of the feature maps of the successive convolutional layer. Compared with existing methods that apply DCNNs in the similar local feature setting, the proposed method avoids the input image style mismatching issue which is usually encountered when applying fully connected layer activations to describe local regions. Also, the proposed method is easier to implement since it is codebook free and does not have any tuning parameters. By applying our method to four popular visual classification tasks, it is demonstrated that the proposed method can achieve comparable or in some cases significantly better performance than existing fully-connected layer based image representations." ] }
1908.06868
2969140094
Predicting the future of Graph-supported Time Series (GTS) is a key challenge in many domains, such as climate monitoring, finance or neuroimaging. Yet it is a highly difficult problem, as it requires accounting jointly for time and graph (spatial) dependencies. To simplify this process, it is common to use a two-step procedure in which spatial and time dependencies are dealt with separately. In this paper, we are interested in comparing various linear spatial representations, namely structure-based ones and data-driven ones, in terms of how well they help predict the future of GTS. To that end, we perform experiments with various datasets, including spontaneous brain activity and raw videos.
When it comes to graphs, a natural approach to defining a latent representation is a structure-based one. It consists of defining a notion of frequency on a graph by analogy with the Fourier Transform (FT) @cite_2 . The so-called Graph Fourier Transform (GFT) represents the signal in another basis, built from the graph structure alone without looking at the data at all: the eigenvectors of the Laplacian matrix of the graph, ordered by increasing eigenvalue. Intuitively, small eigenvalues correspond to low frequencies and large eigenvalues to high frequencies. Consequently, to reduce the data dimension, only some of the first eigenvectors are kept: the data are projected onto these eigenvectors and then projected back, yielding a low-dimensional representation of the signal. However, the analogy with the FT is not complete; for instance, the GFT on a 2-dimensional grid does not match the classical 2D FT @cite_6 .
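A minimal sketch of this structure-based reduction, assuming only a symmetric adjacency matrix `A`: build the combinatorial Laplacian, keep the eigenvectors with the k smallest eigenvalues, and project the signal onto them and back.

```python
import numpy as np

def gft_lowpass(x, A, k):
    """Low-pass a graph signal x (N,) with the k lowest-frequency eigenvectors."""
    L = np.diag(A.sum(axis=1)) - A        # combinatorial Laplacian L = D - A
    _, eigvec = np.linalg.eigh(L)         # eigh returns eigenvalues in ascending order
    U_k = eigvec[:, :k]                   # low-frequency basis
    return U_k @ (U_k.T @ x)              # GFT, truncate, inverse GFT
```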
{ "cite_N": [ "@cite_6", "@cite_2" ], "mid": [ "2127475907", "1971368490", "2030643321", "1690143098" ], "abstract": [ "We extend spectral methods to random graphs with skewed degree distributions through a degree based normalization closely connected to the normalized Laplacian. The normalization is based on intuition drawn from perturbation theory of random matrices, and has the effect of boosting the expectation of the random adjacency matrix without increasing the variances of its entries, leading to better perturbation bounds. The primary implication of this result lies in the realm of spectral analysis of random graphs with skewed degree distributions, such as the ubiquitous \"power law graphs\". Mihail and Papadimitriou (2002) argued that for randomly generated graphs satisfying a power law degree distribution, spectral analysis of the adjacency matrix simply produces the neighborhoods of the high degree nodes as its eigenvectors, and thus miss any embedded structure. We present a generalization of their model, incorporating latent structure, and prove that after applying our transformation, spectral analysis succeeds in recovering the latent structure with high probability.", "We propose a novel discrete signal processing framework for the representation and analysis of datasets with complex structure. Such datasets arise in many social, economic, biological, and physical networks. Our framework extends traditional discrete signal processing theory to structured datasets by viewing them as signals represented by graphs, so that signal coefficients are indexed by graph nodes and relations between them are represented by weighted graph edges. We discuss the notions of signals and filters on graphs, and define the concepts of the spectrum and Fourier transform for graph signals. We demonstrate their relation to the generalized eigenvector basis of the graph adjacency matrix and study their properties. As a potential application of the graph Fourier transform, we consider the efficient representation of structured data that utilizes the sparseness of graph signals in the frequency domain.", "Signal processing on graph is attracting more and more attentions. For a graph signal in the low-frequency subspace, the missing data associated with unsampled vertices can be reconstructed through the sampled data by exploiting the smoothness of the graph signal. In this paper, the concept of local set is introduced and two local-set-based iterative methods are proposed to reconstruct bandlimited graph signal from sampled data. In each iteration, one of the proposed methods reweights the sampled residuals for different vertices, while the other propagates the sampled residuals in their respective local sets. These algorithms are built on frame theory and the concept of local sets, based on which several frames and contraction operators are proposed. We then prove that the reconstruction methods converge to the original signal under certain conditions and demonstrate the new methods lead to a significantly faster convergence compared with the baseline method. Furthermore, the correspondence between graph signal sampling and time-domain irregular sampling is analyzed comprehensively, which may be helpful to future works on graph signals. Computer simulations are conducted. 
The experimental results demonstrate the effectiveness of the reconstruction methods in various sampling geometries, imprecise priori knowledge of cutoff frequency, and noisy scenarios.", "A new scheme to sample signals defined on the nodes of a graph is proposed. The underlying assumption is that such signals admit a sparse representation in a frequency domain related to the structure of the graph, which is captured by the so-called graph-shift operator. Instead of using the value of the signal observed at a subset of nodes to recover the signal in the entire graph, the sampling scheme proposed here uses as input observations taken at a single node. The observations correspond to sequential applications of the graph-shift operator, which are linear combinations of the information gathered by the neighbors of the node. When the graph corresponds to a directed cycle (which is the support of time-varying signals), our method is equivalent to the classical sampling in the time domain. When the graph is more general, we show that the Vandermonde structure of the sampling matrix, critical when sampling time-varying signals, is preserved. Sampling and interpolation are analyzed first in the absence of noise, and then noise is considered. We then study the recovery of the sampled signal when the specific set of frequencies that is active is not known. Moreover, we present a more general sampling scheme, under which, either our aggregation approach or the alternative approach of sampling a graph signal by observing the value of the signal at a subset of nodes can be both viewed as particular cases. Numerical experiments illustrating the results in both synthetic and real-world graphs close the paper." ] }
1908.06868
2969140094
Predicting the future of Graph-supported Time Series (GTS) is a key challenge in many domains, such as climate monitoring, finance or neuroimaging. Yet it is a highly difficult problem as it requires to account jointly for time and graph (spatial) dependencies. To simplify this process, it is common to use a two-step procedure in which spatial and time dependencies are dealt with separately. In this paper, we are interested in comparing various linear spatial representations, namely structure-based ones and data-driven ones, in terms of how they help predict the future of GTS. To that end, we perform experiments with various datasets including spontaneous brain activity and raw videos.
Finding the best graph structure for a given problem is a question that has recently sparked a lot of interest in the literature @cite_7 @cite_11 @cite_12 . In many cases, a good solution consists of using the covariance matrix (or its inverse) as the adjacency matrix of the graph. Following this lead, in this paper, we make use of a semi-geometric graph. It combines the geometry of images, captured through a regular grid graph, with the covariance of the pixels measured on different frames. As such, the support of the graph remains the same as that of the grid graph, but the weights are replaced by the covariance between corresponding pixels. This graph therefore takes into account both the data structure and the data distribution.
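As a rough illustration, the sketch below assembles such a covariance-weighted grid adjacency from a stack of frames. The function name, the (T, H, W) frame layout, and the choice of 4-neighbour connectivity are illustrative assumptions, not the authors' exact construction.

    import numpy as np

    def semi_geometric_adjacency(frames):
        # frames: array of shape (T, H, W) -- T frames of an H x W image.
        # The support is the 4-neighbour grid graph; each retained edge is
        # weighted by the empirical covariance of its two pixels over time.
        T, H, W = frames.shape
        X = frames.reshape(T, H * W)
        cov = np.cov(X, rowvar=False)            # (H*W, H*W) pixel covariance
        A = np.zeros_like(cov)
        for r in range(H):
            for c in range(W):
                i = r * W + c
                for dr, dc in ((0, 1), (1, 0)):  # right and down neighbours
                    rr, cc = r + dr, c + dc
                    if rr < H and cc < W:
                        j = rr * W + cc
                        A[i, j] = A[j, i] = cov[i, j]
        return A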
{ "cite_N": [ "@cite_12", "@cite_7", "@cite_11" ], "mid": [ "2962826552", "1976160665", "1641749581", "2075984423" ], "abstract": [ "Gaussian graphical models are of great interest in statistical learning. Because the conditional independencies between different nodes correspond to zero entries in the inverse covariance matrix of the Gaussian distribution, one can learn the structure of the graph by estimating a sparse inverse covariance matrix from sample data, by solving a convex maximum likelihood problem with an l1-regularization term. In this paper, we propose a first-order method based on an alternating linearization technique that exploits the problem's special structure; in particular, the subproblems solved in each iteration have closed-form solutions. Moreover, our algorithm obtains an e-optimal solution in O(1 e) iterations. Numerical experiments on both synthetic and real data from gene association networks show that a practical version of this algorithm outperforms other competitive algorithms.", "Graph clustering has been widely applied in exploring regularities emerging in relational data. Recently, the rapid development of network theory correlates graph clustering with the detection of community structure, a common and important topological characteristic of networks. Most existing methods investigate the community structure at a single topological scale. However, as shown by empirical studies, the community structure of real world networks often exhibits multiple topological descriptions, corresponding to the clustering at different resolutions. Furthermore, the detection of multiscale community structure is heavily affected by the heterogeneous distribution of node degree. It is very challenging to detect multiscale community structure in heterogeneous networks. In this paper, we propose a novel, unified framework for detecting community structure from the perspective of dimensionality reduction. Based on the framework, we first prove that the well-known Laplacian matrix for network partition and the widely-used modularity matrix for community detection are two kinds of covariance matrices used in dimensionality reduction. We then propose a novel method to detect communities at multiple topological scales within our framework. We further show that existing algorithms fail to deal with heterogeneous node degrees. We develop a novel method to handle heterogeneity of networks by introducing a rescaling transformation into the covariance matrices in our framework. Extensive tests on real world and artificial networks demonstrate that the proposed correlation matrices significantly outperform Laplacian and modularity matrices in terms of their ability to identify multiscale community structure in heterogeneous networks.", "This paper proposes a novel approach named AGM to efficiently mine the association rules among the frequently appearing substructures in a given graph data set. A graph transaction is represented by an adjacency matrix, and the frequent patterns appearing in the matrices are mined through the extended algorithm of the basket analysis. Its performance has been evaluated for the artificial simulation data and the carcinogenesis data of Oxford University and NTP. Its high efficiency has been confirmed for the size of a real-world problem.", "results on the spectra of adjacency matrices corresponding to models of real-world graphs. 
We find that when the number of links grows as the number of nodes, the spectral density of uncorrelated random matrices does not converge to the semicircle law. Furthermore, the spectra of real-world graphs have specific features, depending on the details of the corresponding models. In particular, scale-free graphs develop a trianglelike spectral density with a power-law tail, while small-world graphs have a complex spectral density consisting of several sharp peaks. These and further results indicate that the spectra of correlated graphs represent a practical tool for graph classification and can provide useful insight into the relevant structural properties of real networks." ] }
1908.06868
2969140094
Predicting the future of Graph-supported Time Series (GTS) is a key challenge in many domains, such as climate monitoring, finance or neuroimaging. Yet it is a highly difficult problem as it requires to account jointly for time and graph (spatial) dependencies. To simplify this process, it is common to use a two-step procedure in which spatial and time dependencies are dealt with separately. In this paper, we are interested in comparing various linear spatial representations, namely structure-based ones and data-driven ones, in terms of how they help predict the future of GTS. To that end, we perform experiments with various datasets including spontaneous brain activity and raw videos.
In this work, much of the interest in the latent representation is motivated by its use for predicting the sequence forward in time. There are, again, several recent works in this area, based on diverse methods (dictionary learning @cite_4 , source separation @cite_3 ). We restrict the study to a simple question: what is the best linear representation, structure-based or data-driven, for predicting future frames using an LSTM?
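A minimal sketch of this setup, under assumptions of our own: frames are projected onto a fixed linear basis (graph-based or covariance-based), an LSTM models the coefficient sequence, and the prediction is mapped back to pixel space. The class name and the orthonormality of the basis are assumed here for illustration.

    import torch.nn as nn

    class LinearLSTMPredictor(nn.Module):
        # `basis` is any (n_pixels, k) linear map, e.g. leading eigenvectors
        # of a graph Laplacian (structure-based) or of the data covariance
        # (data-driven); assumed orthonormal so that B.T inverts B.
        def __init__(self, basis, hidden=64):
            super().__init__()
            self.register_buffer("B", basis)
            k = basis.shape[1]
            self.lstm = nn.LSTM(k, hidden, batch_first=True)
            self.out = nn.Linear(hidden, k)

        def forward(self, frames):                # frames: (batch, T, n_pixels)
            z = frames @ self.B                   # project onto the basis
            h, _ = self.lstm(z)
            z_next = self.out(h[:, -1])           # predicted next coefficients
            return z_next @ self.B.T              # reconstruct in pixel space

For instance, passing the leading Laplacian eigenvectors of the grid graph as `basis` gives a structure-based variant, while eigenvectors of the empirical covariance give a data-driven one.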
{ "cite_N": [ "@cite_4", "@cite_3" ], "mid": [ "2952453038", "2116435618", "2175030374", "2104246439" ], "abstract": [ "We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.", "We use Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.", "We describe a new spatio-temporal video autoencoder, based on a classic spatial image autoencoder and a novel nested temporal autoencoder. The temporal encoder is represented by a differentiable visual memory composed of convolutional long short-term memory (LSTM) cells that integrate changes over time. Here we target motion changes and use as temporal decoder a robust optical flow prediction module together with an image sampler serving as built-in feedback loop. The architecture is end-to-end differentiable. At each time step, the system receives as input a video frame, predicts the optical flow based on the current observation and the LSTM memory state as a dense transformation map, and applies it to the current frame to generate the next frame. 
By minimising the reconstruction error between the predicted next frame and the corresponding ground truth next frame, we train the whole system to extract features useful for motion estimation without any supervision effort. We present one direct application of the proposed framework in weakly-supervised semantic segmentation of videos through label propagation using optical flow.", "Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank)." ] }
1908.06922
2968896088
When people browse online news, small thumbnail images accompanying links to articles attract their attention and help them to decide which articles to read. As an increasing proportion of online news can be construed as data journalism, we have witnessed a corresponding increase in the incorporation of visualization in article thumbnails. However, there is little research to support alternative design choices for visualization thumbnails, which include resizing, cropping, simplifying, and embellishing charts appearing within the body of the associated article. We therefore sought to better understand these design choices and determine what makes a visualization thumbnail inviting and interpretable. This paper presents our findings from a survey of visualization thumbnails collected online and from conversations with data journalists and news graphics designers. Our study reveals that there exists an uncharted design space, one that is in need of further empirical study. Our work can thus be seen as a first step toward providing structured guidance on how to design thumbnails for data stories.
Prior research shows that the presence of thumbnails in search results helps people locate articles of interest online @cite_19 @cite_21 @cite_8 , particularly when paired with informative titles, text snippets, and URLs. Thumbnails are a particularly important signal of relevance when some links to articles have them and others do not. @cite_19 further showed that presenting thumbnails without accompanying text leads to worse search performance than simply presenting text without a thumbnail. Thumbnails also trigger behavior that would not occur in their absence. For instance, @cite_24 observed that when an email contains a link to a video file, more people click on the link when it is accompanied by a thumbnail.
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_21", "@cite_8" ], "mid": [ "2139182464", "2147252560", "2512435841", "1984274118" ], "abstract": [ "We introduce a technique for creating novel, textually-enhanced thumbnails of Web pages. These thumbnails combine the advantages of image thumbnails and text summaries to provide consistent performance on a variety of tasks. We conducted a study in which participants used three different types of summaries (enhanced thumbnails, plain thumbnails, and text summaries) to search Web pages to find several different types of information. Participants took an average of 67, 86, and 95 seconds to find the answer with enhanced thumbnails, plain thumbnails, and text summaries, respectively. We found a strong effect of question category. For some questions, text outperformed plain thumbnails, while for other questions, plain thumbnails outperformed text. Enhanced thumbnails (which combine the features of text summaries and plain thumbnails) were more consistent than either text summaries or plain thumbnails, having for all categories the best performance or performance that was statistically indistinguishable from the best.", "We investigated the efficacy of visual and textual web page previews in predicting the helpfulness of web pages related to a specific topic. We ran two studies in the usability lab and collected data through an online survey. Participants (total of 245) were asked to rate the expected helpfulness of a web page based on a preview (four different thumbnail variations: a textual web page summary, a thumbnail title URL combination, a title URL combination). In the lab studies, the same participants also rated the helpfulness of the actual web pages themselves. In the online study, the web page ratings were collected from a separate group of participants. Our results show that thumbnails add information about the relevance of web pages that is not available in the textual summaries of web pages (title, snippet & URL). However, showing only thumbnails, with no textual information, results in poorer performance than showing only textual summaries. The prediction inaccuracy caused by textual vs. visual previews was different: textual previews tended to make users overestimate the helpfulness of web pages, whereas thumbnails made users underestimate the helpfulness of web pages in most cases. In our study, the best performance was obtained by combining sufficiently large thumbnails (at least 200x200 pixels) with page titles and URLs - and it was better to make users focus primarily on the thumbnail by placing the title and URL below the thumbnail. Our studies highlighted four key aspects that affect the performance of previews: the visual textual mode of the previews, the zoom level and size of the thumbnail, as well as the positioning of key information elements.", "Thumbnails play such an important role in online videos. As the most representative snapshot, they capture the essence of a video and provide the first impression to the viewers; ultimately, a great thumbnail makes a video more attractive to click and watch. We present an automatic thumbnail selection system that exploits two important characteristics commonly associated with meaningful and attractive thumbnails: high relevance to video content and superior visual aesthetic quality. 
Our system selects attractive thumbnails by analyzing various visual quality and aesthetic metrics of video frames, and performs a clustering analysis to determine the relevance to video content, thus making the resulting thumbnails more representative of the video. On the task of predicting thumbnails chosen by professional video editors, we demonstrate the effectiveness of our system against six baseline methods, using a real-world dataset of 1,118 videos collected from Yahoo Screen. In addition, we study what makes a frame a good thumbnail by analyzing the statistical relationship between thumbnail frames and non-thumbnail frames in terms of various image quality features. Our study suggests that the selection of a good thumbnail is highly correlated with objective visual quality metrics, such as the frame texture and sharpness, implying the possibility of building an automatic thumbnail selection system based on visual aesthetics.", "We describe an empirical evaluation of the utility of thumbnail previews in web search results. Results pages were constructed to show text-only summaries, thumbnail previews only, or the combination of text summaries and thumbnail previews. We found that in the combination case, users were able to make more accurate decisions about the potential relevance of results than in either of the other versions, with hardly any increase in speed of processing the page as a whole." ] }
1908.06922
2968896088
When people browse online news, small thumbnail images accompanying links to articles attract their attention and help them to decide which articles to read. As an increasing proportion of online news can be construed as data journalism, we have witnessed a corresponding increase in the incorporation of visualization in article thumbnails. However, there is little research to support alternative design choices for visualization thumbnails, which include resizing, cropping, simplifying, and embellishing charts appearing within the body of the associated article. We therefore sought to better understand these design choices and determine what makes a visualization thumbnail inviting and interpretable. This paper presents our findings from a survey of visualization thumbnails collected online and from conversations with data journalists and news graphics designers. Our study reveals that there exists an uncharted design space, one that is in need of further empirical study. Our work can thus be seen as a first step toward providing structured guidance on how to design thumbnails for data stories.
Beyond web search and email reading, thumbnails also appear in the context of navigating other forms of media, from file systems @cite_9 to documents @cite_13 and videos @cite_0 . Prior work in the visualization and visual analytics community has also incorporated thumbnails into the sensemaking process as a means of leveraging the unique advantages of spatial memory @cite_4 @cite_6 @cite_12 .
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_6", "@cite_0", "@cite_13", "@cite_12" ], "mid": [ "1958932515", "2139182464", "2512435841", "1990089425" ], "abstract": [ "Given the tremendous growth of online videos, video thumbnail, as the common visualization form of video content, is becoming increasingly important to influence user's browsing and searching experience. However, conventional methods for video thumbnail selection often fail to produce satisfying results as they ignore the side semantic information (e.g., title, description, and query) associated with the video. As a result, the selected thumbnail cannot always represent video semantics and the click-through rate is adversely affected even when the retrieved videos are relevant. In this paper, we have developed a multi-task deep visual-semantic embedding model, which can automatically select query-dependent video thumbnails according to both visual and side information. Different from most existing methods, the proposed approach employs the deep visual-semantic embedding model to directly compute the similarity between the query and video thumbnails by mapping them into a common latent semantic space, where even unseen query-thumbnail pairs can be correctly matched. In particular, we train the embedding model by exploring the large-scale and freely accessible click-through video and image data, as well as employing a multi-task learning strategy to holistically exploit the query-thumbnail relevance from these two highly related datasets. Finally, a thumbnail is selected by fusing both the representative and query relevance scores. The evaluations on 1,000 query-thumbnail dataset labeled by 191 workers in Amazon Mechanical Turk have demonstrated the effectiveness of our proposed method.", "We introduce a technique for creating novel, textually-enhanced thumbnails of Web pages. These thumbnails combine the advantages of image thumbnails and text summaries to provide consistent performance on a variety of tasks. We conducted a study in which participants used three different types of summaries (enhanced thumbnails, plain thumbnails, and text summaries) to search Web pages to find several different types of information. Participants took an average of 67, 86, and 95 seconds to find the answer with enhanced thumbnails, plain thumbnails, and text summaries, respectively. We found a strong effect of question category. For some questions, text outperformed plain thumbnails, while for other questions, plain thumbnails outperformed text. Enhanced thumbnails (which combine the features of text summaries and plain thumbnails) were more consistent than either text summaries or plain thumbnails, having for all categories the best performance or performance that was statistically indistinguishable from the best.", "Thumbnails play such an important role in online videos. As the most representative snapshot, they capture the essence of a video and provide the first impression to the viewers; ultimately, a great thumbnail makes a video more attractive to click and watch. We present an automatic thumbnail selection system that exploits two important characteristics commonly associated with meaningful and attractive thumbnails: high relevance to video content and superior visual aesthetic quality. 
Our system selects attractive thumbnails by analyzing various visual quality and aesthetic metrics of video frames, and performs a clustering analysis to determine the relevance to video content, thus making the resulting thumbnails more representative of the video. On the task of predicting thumbnails chosen by professional video editors, we demonstrate the effectiveness of our system against six baseline methods, using a real-world dataset of 1,118 videos collected from Yahoo Screen. In addition, we study what makes a frame a good thumbnail by analyzing the statistical relationship between thumbnail frames and non-thumbnail frames in terms of various image quality features. Our study suggests that the selection of a good thumbnail is highly correlated with objective visual quality metrics, such as the frame texture and sharpness, implying the possibility of building an automatic thumbnail selection system based on visual aesthetics.", "With the fast rising of the video sharing websites, the online video becomes an important media for people to share messages, interests, ideas, beliefs, etc. In this paper, we propose a novel approach to dynamically generate the web video thumbnails according to user's query. Two issues are addressed: the video content representativeness of the selected video thumbnail, and the relationship between the selected video thumbnail and the user's query. For the first issue the reinforcement based algorithm is adopted to rank the frames in each video. For the second issue the relevance model based method is employed to calculate the similarity between the video frames and the query keywords. The final video thumbnail is generated by linear fusion of the above two scores. Compared with the existing web video thumbnails, which only reflect the preference of the video owner, the thumbnails generated in our approach not only consider the video content representativeness of the frame, but also reflect the intention of the video searcher. In order to show the effectiveness of the proposed method, experiments are conducted on the videos selected from the video sharing website. Experimental results and subjective evaluations demonstrate that the proposed method is effective and can meet the user's intention requirement." ] }
1908.06922
2968896088
When people browse online news, small thumbnail images accompanying links to articles attract their attention and help them to decide which articles to read. As an increasing proportion of online news can be construed as data journalism, we have witnessed a corresponding increase in the incorporation of visualization in article thumbnails. However, there is little research to support alternative design choices for visualization thumbnails, which include resizing, cropping, simplifying, and embellishing charts appearing within the body of the associated article. We therefore sought to better understand these design choices and determine what makes a visualization thumbnail inviting and interpretable. This paper presents our findings from a survey of visualization thumbnails collected online and from conversations with data journalists and news graphics designers. Our study reveals that there exists an uncharted design space, one that is in need of further empirical study. Our work can thus be seen as a first step toward providing structured guidance on how to design thumbnails for data stories.
When browsing unfamiliar content, such as online news, thumbnails must compete for the reader's attention with one another and with other content. We therefore turn to other prior research examining specific aspects of thumbnail design. Several factors appear to affect both how thumbnails draw attention and how useful they ultimately are, such as thumbnail size and the inclusion of text within the thumbnail. @cite_7 studied thumbnail size, concluding that the optimal size depends on the task thumbnails are intended to support. For instance, they posit that a thumbnail should be larger than 96x96 pixels in order to trigger recognition among those revisiting a page containing multiple thumbnails.
{ "cite_N": [ "@cite_7" ], "mid": [ "2512435841", "2139182464", "1958932515", "2147252560" ], "abstract": [ "Thumbnails play such an important role in online videos. As the most representative snapshot, they capture the essence of a video and provide the first impression to the viewers; ultimately, a great thumbnail makes a video more attractive to click and watch. We present an automatic thumbnail selection system that exploits two important characteristics commonly associated with meaningful and attractive thumbnails: high relevance to video content and superior visual aesthetic quality. Our system selects attractive thumbnails by analyzing various visual quality and aesthetic metrics of video frames, and performs a clustering analysis to determine the relevance to video content, thus making the resulting thumbnails more representative of the video. On the task of predicting thumbnails chosen by professional video editors, we demonstrate the effectiveness of our system against six baseline methods, using a real-world dataset of 1,118 videos collected from Yahoo Screen. In addition, we study what makes a frame a good thumbnail by analyzing the statistical relationship between thumbnail frames and non-thumbnail frames in terms of various image quality features. Our study suggests that the selection of a good thumbnail is highly correlated with objective visual quality metrics, such as the frame texture and sharpness, implying the possibility of building an automatic thumbnail selection system based on visual aesthetics.", "We introduce a technique for creating novel, textually-enhanced thumbnails of Web pages. These thumbnails combine the advantages of image thumbnails and text summaries to provide consistent performance on a variety of tasks. We conducted a study in which participants used three different types of summaries (enhanced thumbnails, plain thumbnails, and text summaries) to search Web pages to find several different types of information. Participants took an average of 67, 86, and 95 seconds to find the answer with enhanced thumbnails, plain thumbnails, and text summaries, respectively. We found a strong effect of question category. For some questions, text outperformed plain thumbnails, while for other questions, plain thumbnails outperformed text. Enhanced thumbnails (which combine the features of text summaries and plain thumbnails) were more consistent than either text summaries or plain thumbnails, having for all categories the best performance or performance that was statistically indistinguishable from the best.", "Given the tremendous growth of online videos, video thumbnail, as the common visualization form of video content, is becoming increasingly important to influence user's browsing and searching experience. However, conventional methods for video thumbnail selection often fail to produce satisfying results as they ignore the side semantic information (e.g., title, description, and query) associated with the video. As a result, the selected thumbnail cannot always represent video semantics and the click-through rate is adversely affected even when the retrieved videos are relevant. In this paper, we have developed a multi-task deep visual-semantic embedding model, which can automatically select query-dependent video thumbnails according to both visual and side information. 
Different from most existing methods, the proposed approach employs the deep visual-semantic embedding model to directly compute the similarity between the query and video thumbnails by mapping them into a common latent semantic space, where even unseen query-thumbnail pairs can be correctly matched. In particular, we train the embedding model by exploring the large-scale and freely accessible click-through video and image data, as well as employing a multi-task learning strategy to holistically exploit the query-thumbnail relevance from these two highly related datasets. Finally, a thumbnail is selected by fusing both the representative and query relevance scores. The evaluations on 1,000 query-thumbnail dataset labeled by 191 workers in Amazon Mechanical Turk have demonstrated the effectiveness of our proposed method.", "We investigated the efficacy of visual and textual web page previews in predicting the helpfulness of web pages related to a specific topic. We ran two studies in the usability lab and collected data through an online survey. Participants (total of 245) were asked to rate the expected helpfulness of a web page based on a preview (four different thumbnail variations: a textual web page summary, a thumbnail title URL combination, a title URL combination). In the lab studies, the same participants also rated the helpfulness of the actual web pages themselves. In the online study, the web page ratings were collected from a separate group of participants. Our results show that thumbnails add information about the relevance of web pages that is not available in the textual summaries of web pages (title, snippet & URL). However, showing only thumbnails, with no textual information, results in poorer performance than showing only textual summaries. The prediction inaccuracy caused by textual vs. visual previews was different: textual previews tended to make users overestimate the helpfulness of web pages, whereas thumbnails made users underestimate the helpfulness of web pages in most cases. In our study, the best performance was obtained by combining sufficiently large thumbnails (at least 200x200 pixels) with page titles and URLs - and it was better to make users focus primarily on the thumbnail by placing the title and URL below the thumbnail. Our studies highlighted four key aspects that affect the performance of previews: the visual textual mode of the previews, the zoom level and size of the thumbnail, as well as the positioning of key information elements." ] }
1908.06922
2968896088
When people browse online news, small thumbnail images accompanying links to articles attract their attention and help them to decide which articles to read. As an increasing proportion of online news can be construed as data journalism, we have witnessed a corresponding increase in the incorporation of visualization in article thumbnails. However, there is little research to support alternative design choices for visualization thumbnails, which include resizing, cropping, simplifying, and embellishing charts appearing within the body of the associated article. We therefore sought to better understand these design choices and determine what makes a visualization thumbnail inviting and interpretable. This paper presents our findings from a survey of visualization thumbnails collected online and from conversations with data journalists and news graphics designers. Our study reveals that there exists an uncharted design space, one that is in need of further empirical study. Our work can thus be seen as a first step toward providing structured guidance on how to design thumbnails for data stories.
Other prior research has considered how to automatically generate thumbnails using photos and images appearing in the article, such as by cropping, resizing, or selecting the most salient excerpts from them @cite_5 @cite_30 . More recently, @cite_37 introduced an algorithm to select highly salient and evocative thumbnails for videos. As this research was conducted with generic images, we see our work as a step toward the automatic generation of visualization thumbnails. This first requires a better understanding of current practices in thumbnail design.
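To make the cropping idea concrete, here is a crude numpy-only stand-in for the saliency-guided cropping explored in the cited work: gradient magnitude plays the role of a saliency map, and an integral image scores every candidate window. Real systems use learned or spectral saliency models; everything here, including the function name, is an illustrative assumption.

    import numpy as np

    def salient_crop(gray, th, tw):
        # Crop the (th x tw) window with the largest total "saliency".
        # Gradient magnitude is a toy stand-in for a real saliency model.
        gy, gx = np.gradient(gray)
        sal = np.hypot(gx, gy)
        ii = np.pad(sal, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # integral image
        H, W = gray.shape
        best, best_rc = -1.0, (0, 0)
        for r in range(H - th + 1):
            for c in range(W - tw + 1):
                s = ii[r + th, c + tw] - ii[r, c + tw] - ii[r + th, c] + ii[r, c]
                if s > best:
                    best, best_rc = s, (r, c)
        r, c = best_rc
        return gray[r:r + th, c:c + tw]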
{ "cite_N": [ "@cite_30", "@cite_5", "@cite_37" ], "mid": [ "2512435841", "1958932515", "2514658199", "2060502770" ], "abstract": [ "Thumbnails play such an important role in online videos. As the most representative snapshot, they capture the essence of a video and provide the first impression to the viewers; ultimately, a great thumbnail makes a video more attractive to click and watch. We present an automatic thumbnail selection system that exploits two important characteristics commonly associated with meaningful and attractive thumbnails: high relevance to video content and superior visual aesthetic quality. Our system selects attractive thumbnails by analyzing various visual quality and aesthetic metrics of video frames, and performs a clustering analysis to determine the relevance to video content, thus making the resulting thumbnails more representative of the video. On the task of predicting thumbnails chosen by professional video editors, we demonstrate the effectiveness of our system against six baseline methods, using a real-world dataset of 1,118 videos collected from Yahoo Screen. In addition, we study what makes a frame a good thumbnail by analyzing the statistical relationship between thumbnail frames and non-thumbnail frames in terms of various image quality features. Our study suggests that the selection of a good thumbnail is highly correlated with objective visual quality metrics, such as the frame texture and sharpness, implying the possibility of building an automatic thumbnail selection system based on visual aesthetics.", "Given the tremendous growth of online videos, video thumbnail, as the common visualization form of video content, is becoming increasingly important to influence user's browsing and searching experience. However, conventional methods for video thumbnail selection often fail to produce satisfying results as they ignore the side semantic information (e.g., title, description, and query) associated with the video. As a result, the selected thumbnail cannot always represent video semantics and the click-through rate is adversely affected even when the retrieved videos are relevant. In this paper, we have developed a multi-task deep visual-semantic embedding model, which can automatically select query-dependent video thumbnails according to both visual and side information. Different from most existing methods, the proposed approach employs the deep visual-semantic embedding model to directly compute the similarity between the query and video thumbnails by mapping them into a common latent semantic space, where even unseen query-thumbnail pairs can be correctly matched. In particular, we train the embedding model by exploring the large-scale and freely accessible click-through video and image data, as well as employing a multi-task learning strategy to holistically exploit the query-thumbnail relevance from these two highly related datasets. Finally, a thumbnail is selected by fusing both the representative and query relevance scores. The evaluations on 1,000 query-thumbnail dataset labeled by 191 workers in Amazon Mechanical Turk have demonstrated the effectiveness of our proposed method.", "In this paper, we propose a framework for automatically producing thumbnails from stereo image pairs. It has two components focusing respectively on stereo saliency detection and stereo thumbnail generation. The first component analyzes stereo saliency through various saliency stimuli, stereoscopic perception and the relevance between two stereo views. 
The second component uses stereo saliency to guide stereo thumbnail generation. We develop two types of thumbnail generation methods, both changing image size automatically. The first method is called content-persistent cropping (CPC), which aims at cropping stereo images for display devices with different aspect ratios while preserving as much content as possible. The second method is an object-aware cropping method (OAC) for generating the smallest possible thumbnail pair that retains the most important content only and facilitates quick visual exploration of a stereo image database. Quantitative and qualitative experimental evaluations demonstrate promising performance of our thumbnail generation methods in comparison to state-of-the-art algorithms.", "Thumbnail images provide users of image retrieval and browsing systems with a method for quickly scanning large numbers of images. Recognizing the objects in an image is important in many retrieval tasks, but thumbnails generated by shrinking the original image often render objects illegible. We study the ability of computer vision systems to detect key components of images so that automated cropping, prior to shrinking, can render objects more recognizable. We evaluate automatic cropping techniques 1) based on a general method that detects salient portions of images, and 2) based on automatic face detection. Our user study shows that these methods result in small thumbnails that are substantially more recognizable and easier to find in the context of visual search." ] }
1908.06922
2968896088
When people browse online news, small thumbnail images accompanying links to articles attract their attention and help them to decide which articles to read. As an increasing proportion of online news can be construed as data journalism, we have witnessed a corresponding increase in the incorporation of visualization in article thumbnails. However, there is little research to support alternative design choices for visualization thumbnails, which include resizing, cropping, simplifying, and embellishing charts appearing within the body of the associated article. We therefore sought to better understand these design choices and determine what makes a visualization thumbnail inviting and interpretable. This paper presents our findings from a survey of visualization thumbnails collected online and from conversations with data journalists and news graphics designers. Our study reveals that there exists an uncharted design space, one that is in need of further empirical study. Our work can thus be seen as a first step toward providing structured guidance on how to design thumbnails for data stories.
Visualization is increasingly prevalent in news media @cite_36 . In this context, the communicative intent of visualization often leads to different design choices than those used in the context of data analysis @cite_33 . As a result, we encounter substantial use of graphical and text-based annotation @cite_26 . We also see the embellishment of charts @cite_38 @cite_3 with human-recognizable objects @cite_10 and the inclusion of graphics not related to the data @cite_20 . These noticeable aesthetic design choices @cite_39 and additional layers of annotation and embellishment may contribute to positive first impressions of an information graphic @cite_17 , and there is evidence to suggest that they increase reader comprehension @cite_28 and memorability @cite_34 @cite_25 @cite_2 . Hullman and Adar @cite_31 argue that while some embellishments make information graphics more difficult to interpret, their judicious application may help readers comprehend and recall content in some cases.
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_33", "@cite_36", "@cite_28", "@cite_3", "@cite_39", "@cite_2", "@cite_31", "@cite_34", "@cite_10", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "1898893341", "2027855569", "2168629730", "137863291" ], "abstract": [ "While information visualization frameworks and heuristics have traditionally been reluctant to include acquired codes of meaning, designers are making use of them in a wide variety of ways. Acquired codes leverage a user's experience to understand the meaning of a visualization. They range from figurative visualizations which rely on the reader's recognition of shapes, to conventional arrangements of graphic elements which represent particular subjects. In this study, we used content analysis to codify acquired meaning in visualization. We applied the content analysis to a set of infographics and data visualizations which are exemplars of innovative and effective design. 88 of the infographics and 71 of data visualizations in the sample contain at least one use of figurative visualization. Conventions on the arrangement of graphics are also widespread in the sample. In particular, a comparison of representations of time and other quantitative data showed that conventions can be specific to a subject. These results suggest that there is a need for information visualization research to expand its scope beyond perceptual channels, to include social and culturally constructed meaning. Our paper demonstrates a viable method for identifying figurative techniques and graphic conventions and integrating them into heuristics for visualization design.", "For an investigative journalist, a large collection of documents obtained from a Freedom of Information Act request or a leak is both a blessing and a curse: such material may contain multiple newsworthy stories, but it can be difficult and time consuming to find relevant documents. Standard text search is useful, but even if the search target is known it may not be possible to formulate an effective query. In addition, summarization is an important non-search task. We present Overview , an application for the systematic analysis of large document collections based on document clustering, visualization, and tagging. This work contributes to the small set of design studies which evaluate a visualization system “in the wild”, and we report on six case studies where Overview was voluntarily used by self-initiated journalists to produce published stories. We find that the frequently-used language of “exploring” a document collection is both too vague and too narrow to capture how journalists actually used our application. Our iterative process, including multiple rounds of deployment and observations of real world usage, led to a much more specific characterization of tasks. We analyze and justify the visual encoding and interaction techniques used in Overview 's design with respect to our final task abstractions, and propose generalizable lessons for visualization design methodology.", "In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the benefits of using rhetorical illustrations or embellishments in visualization have so far been inconclusive. In this work, we report an empirical study to evaluate hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. 
One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces “divided attention”, and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization.", "Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable." ] }
1908.07106
2969508838
A generalized @math puzzle' consists of an @math numbered grid, with one missing number. A move in the game switches the position of the empty square with the position of one of its neighbors. We solve Diaconis' 15 puzzle problem' by proving that the asymptotic total variation mixing time of the board is at least order @math when the board is given periodic boundary conditions and when random moves are made. We demonstrate that for any @math with @math , the number of fixed points after @math moves converges to a Poisson distribution of parameter 1. The order of total variation mixing time for this convergence is @math without cut-off. We also prove an upper bound of order @math for the total variation mixing time.
Much of the prior work regarding @math puzzles has focused on sorting strategies for a puzzle in general position, and on the hardness of finding the shortest sorting sequence. For instance, it is known that any @math puzzle may be returned to sorted order in order @math steps (and fewer do not always suffice) @cite_14 , and that finding the shortest solution is NP-hard @cite_6 , @cite_13 .
{ "cite_N": [ "@cite_14", "@cite_13", "@cite_6" ], "mid": [ "2952149393", "1726079515", "2567804199", "1989921461" ], "abstract": [ "In this paper we study noisy sorting without re-sampling. In this problem there is an unknown order @math for all pairs @math , where @math is a constant and @math for all @math and @math . It is assumed that the errors are independent. Given the status of the queries the goal is to find the maximum likelihood order. In other words, the goal is find a permutation @math that minimizes the number of pairs @math where @math . The problem so defined is the feedback arc set problem on distributions of inputs, each of which is a tournament obtained as a noisy perturbations of a linear order. Note that when @math and @math is large, it is impossible to recover the original order @math . It is known that the weighted feedback are set problem on tournaments is NP-hard in general. Here we present an algorithm of running time @math and sampling complexity @math that with high probability solves the noisy sorting without re-sampling problem. We also show that if @math is an optimal solution of the problem then it is close'' to the original order. More formally, with high probability it holds that @math and @math . Our results are of interest in applications to ranking, such as ranking in sports, or ranking of search items based on comparisons by experts.", "We prove depth optimality of sorting networks from \"The Art of Computer Programming\".Sorting networks posses symmetry that can be used to generate a few representatives.These representatives can be efficiently encoded using regular expressions.We construct SAT formulas whose unsatisfiability is sufficient to show optimality.Resulting algorithm is orders of magnitude faster than prior work on small instances. We solve a 40-year-old open problem on depth optimality of sorting networks. In 1973, Donald E. Knuth detailed sorting networks of the smallest depth known for n ź 16 inputs, quoting optimality for n ź 8 (Volume 3 of \"The Art of Computer Programming\"). In 1989, Parberry proved optimality of networks with 9 ź n ź 10 inputs. We present a general technique for obtaining such results, proving optimality of the remaining open cases of 11 ź n ź 16 inputs. Exploiting symmetry, we construct a small set R n of two-layer networks such that: if there is a depth-k sorting network on n inputs, then there is one whose first layers are in R n . For each network in R n , we construct a propositional formula whose satisfiability is necessary for the existence of a depth-k sorting network. Using an off-the-shelf SAT solver we prove optimality of the sorting networks listed by Knuth. For n ź 10 inputs, our algorithm is orders of magnitude faster than prior ones.", "An important question in theoretical computer science is to determine the best possible running time for solving a problem at hand. For geometric optimization problems, we often understand their complexity on a rough scale, but not very well on a finer scale. One such example is the two-dimensional knapsack problem for squares. There is a polynomial time (1 + ϵ)-approximation algorithm for it (i.e., a PTAS) but the running time of this algorithm is triple exponential in 1 ϵ, i.e., Ω(n221 ϵ). A double or triple exponential dependence on 1 ϵ is inherent in how this and several other algorithms for other geometric problems work. In this paper, we present an EPTAS for knapsack for squares, i.e., a (1+ϵ)-approximation algorithm with a running time of Oϵ(1)·nO(1). 
In particular, the exponent of n in the running time does not depend on ϵ at all! Since there can be no FPTAS for the problem (unless P = NP) this is the best kind of approximation scheme we can hope for. To achieve this improvement, we introduce two new key ideas: We present a fast method to guess the Ω(221 ϵ) relatively large squares of a suitable near-optimal packing instead of using brute-force enumeration. Secondly, we introduce an indirect guessing framework to define sizes of cells for the remaining squares. In the previous PTAS each of these steps needs a running time of Ω(n221 ϵ) and we improve both to Oϵ(1) · nO(1). We complete our result by giving an algorithm for two-dimensional knapsack for rectangles under (1 + ϵ)-resource augmentation. In this setting, we also improve the best known running time of Ω(n1 ϵ1 ϵ) to Oϵ(1) · nO(1) and compute even a solution with optimal profit, in contrast to the best previously known polynomial time algorithm for this setting that computes only an approximation. We believe that our new techniques have the potential to be useful for other settings as well.", "Given @math elements with non-negative integer weights @math and an integer capacity @math , we consider the counting version of the classic knapsack problem: find the number of distinct subsets whose weights add up to at most @math . We give the first deterministic, fully polynomial-time approximation scheme (FPTAS) for estimating the number of solutions to any knapsack constraint (our estimate has relative error @math ). Our algorithm is based on dynamic programming. Previously, randomized polynomial-time approximation schemes (FPRAS) were known first by Morris and Sinclair via Markov chain Monte Carlo techniques, and subsequently by Dyer via dynamic programming and rejection sampling. In addition, we present a new method for deterministic approximate counting using read-once branching programs. Our approach yields an FPTAS for several other counting problems, including counting solutions for the multidimensional knapsack problem with a constant number of constraints, the general integer knapsack problem, and the contingency tables problem with a constant number of rows." ] }
1908.07106
2969508838
A generalized @math puzzle' consists of an @math numbered grid, with one missing number. A move in the game switches the position of the empty square with the position of one of its neighbors. We solve Diaconis' 15 puzzle problem' by proving that the asymptotic total variation mixing time of the board is at least order @math when the board is given periodic boundary conditions and when random moves are made. We demonstrate that for any @math with @math , the number of fixed points after @math moves converges to a Poisson distribution of parameter 1. The order of total variation mixing time for this convergence is @math without cut-off. We also prove an upper bound of order @math for the total variation mixing time.
A host of random walks on permutation groups have been studied via a variety of techniques, including representation theory @cite_8 , @cite_15 , @cite_10 , and couplings @cite_11 . Our method for the upper bound, which uses a 3-transitive group action, is based on @cite_9 . So far as we are aware, the method of using renewal theory together with a local limit theorem to prove the lower bound is new. The comparison techniques used in the proof of Theorem are partly based on @cite_17 .
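These mixing-time statements are easy to probe empirically. The sketch below, under conventions of our own, runs the random walk on an n x n board with periodic boundary conditions and reports the number of fixed points among the numbered tiles, the statistic whose Poisson limit is described in the abstract.

    import random

    def fixed_points_after_walk(n, steps, periodic=True):
        # Tiles 0..n*n-2 plus a blank (encoded as n*n-1) start in sorted
        # order; each step swaps the blank with a uniformly random
        # neighbour, wrapping around the board edges when `periodic`.
        board = list(range(n * n))
        blank = n * n - 1
        for _ in range(steps):
            r, c = divmod(blank, n)
            dr, dc = random.choice(((0, 1), (0, -1), (1, 0), (-1, 0)))
            rr, cc = r + dr, c + dc
            if periodic:
                rr, cc = rr % n, cc % n
            elif not (0 <= rr < n and 0 <= cc < n):
                continue                      # move off the board: rejected
            j = rr * n + cc
            board[blank], board[j] = board[j], board[blank]
            blank = j
        return sum(board[i] == i for i in range(n * n - 1))  # numbered tiles only

For example, averaging fixed_points_after_walk(4, 10**5) over many runs should approach the Poisson(1) mean once the number of steps is well past the mixing scale.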
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_17", "@cite_15", "@cite_10", "@cite_11" ], "mid": [ "1922672875", "2026924428", "2963409322", "1967591194" ], "abstract": [ "We prove a conjecture raised by the work of Diaconis and Shahshahani (1981) about the mixing time of random walks on the permutation group induced by a given conjugacy class. To do this we exploit a connection with coalescence and fragmentation processes and control the Kantorovitch distance by using a variant of a coupling due to Oded Schramm. Recasting our proof in the language of Ricci curvature, our proof establishes the occurrence of a phase transition, which takes the following form in the case of random transpositions: at time @math , the curvature is asymptotically zero for @math and is strictly positive for @math .", "where (an)n ENo is some sequence of nonnegative numbers, (Sn),nENo is the sequence of partial sums, S0 = 0, Sn = XflXk, of another sequence (Xk)kEN of i.i.d. random variables, and A c R is a fixed Borel set such as [0,1] or [0, oo). Examples of such convolution series are subordinated distributions (f=0Oan = 1) which arise as distributions of random sums, and harmonic and ordinary renewal measures (a0 = 0, an = 1 n for all n C N in the first, an = 1 for all n C NO in the second case). These examples are in turn essential for the analysis of the large time behaviour of diverse applied models such as branching and queueing processes, they are also of interest in connection with representation theorems such as the Levy representation of infinitely divisible distributions. A traditional approach to such problems is via regular variation: If the underlying random variables are nonnegative we can use Laplace transforms and the related Abelian and Tauberian theorems [see, e.g., Stam (1973) in the context of subordination and Feller (1971, XIV.3) in connection with renewal theory; Embrechts, Maejima, and Omey (1984) is a recent treatment of generalized renewal measures along these lines]. The approach of the present paper is based on the Wiener-Levy-Gel'fand theorem and has occasionally been called the Banach algebra method. In Gruibel (1983) we gave a new variant of this method for the special case of lattice distributions, showing that by using the appropriate Banach algebras of sequences, arbitrarily fine expansions are possible under certain assumptions on the higher-order differences of (P(X1 = n))fnEN. Here we give a corresponding treatment of nonlattice distributions. We restrict ourselves to an analogue of first-order differences and obtain a number of theorems which perhaps are described best as next-term results. To explain this let us consider a special case in more detail.", "We study the random walk on the symmetric group (S_n ) generated by the conjugacy class of cycles of length k. We show that the convergence to uniform measure of this walk has a cut-off in total variation distance after ( n k n ) steps, uniformly in (k = o(n) ) as (n ). The analysis follows from a new asymptotic estimation of the characters of the symmetric group evaluated at cycles.", "We prove a law of large numbers for a class of ballistic, multidimensional random walks in random environments where the environment satisfies appropriate mixing conditions, which hold when the environment is a weak mixing field in the sense of Dobrushin and Shlosman. Our result holds if the mixing rate balances moments of some random times depending on the path. 
It applies in the nonnestling case, but we also provide examples of nestling walks that satisfy our assumptions. The derivation is based on an adaptation, using coupling, of the regeneration argument of Sznitman and Zerner." ] }
1908.07085
2971040543
We present a learning-based method to estimate the object bounding box from its 2D bird's-eye view (BEV) LiDAR points. Our method, entitled BoxNet, exploits a simple deep neural network that can efficiently handle unordered points. The method takes as input the 2D coordinates of all the points and the output is a vector consisting of both the box pose (position and orientation in LiDAR coordinate system) and its size (width and length). In order to deal with the angle discontinuity problem, we propose to estimate the double-angle sinusoidal values rather than the angle itself. We also predict the center relative to the point cloud mean to boost the performance of estimating the location of the box. The proposed method does not rely on the ordering of points as in many existing approaches, and can accurately predict the actual size of the bounding box based on the prior information that is obtained from the training data. BoxNet is validated using the KITTI 3D object dataset, with significant improvements compared with state-of-the-art non-learning-based methods.
We omit an extensive literature review of 3D object detection methods, in which bounding box regression is considered an essential component. Our proposed method focuses only on the 2D case and does not target an end-to-end solution. Therefore, it is in fact an intermediate step in the pipeline of 3D object detection. The goal is to provide accurate object bounding boxes for subsequent navigation and planning tasks. From this perspective, the problem defined in this paper is closely related to the L-shape fitting of 2D laser scanner data (or BEV LiDAR point clouds) in modeling a vehicle @cite_14 @cite_9 .
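To make the double-angle trick concrete, here is a minimal sketch (our illustration, not the BoxNet implementation) of encoding a box heading as the pair (sin 2θ, cos 2θ) and decoding it back. Because a rectangle is symmetric under rotation by π, two headings that differ by π yield the same regression target, which removes the discontinuity at the angle wrap-around.

```python
import numpy as np

def encode_angle(theta: float) -> np.ndarray:
    # Regression target (sin 2θ, cos 2θ): continuous in θ and invariant
    # under θ -> θ + π, matching the symmetry of a rectangular box.
    return np.array([np.sin(2.0 * theta), np.cos(2.0 * theta)])

def decode_angle(sin2: float, cos2: float) -> float:
    # Recover θ in (-π/2, π/2]; the remaining two-fold ambiguity is
    # harmless for an orientation-symmetric rectangle.
    return 0.5 * np.arctan2(sin2, cos2)

# Headings that differ by π map to the same target.
assert np.allclose(encode_angle(0.3), encode_angle(0.3 + np.pi))
assert np.isclose(decode_angle(*encode_angle(0.3)), 0.3)
```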
{ "cite_N": [ "@cite_9", "@cite_14" ], "mid": [ "1946609740", "2560544142", "2950382845", "2415454270" ], "abstract": [ "Despite the great progress achieved in recognizing objects as 2D bounding boxes in images, it is still very challenging to detect occluded objects and estimate the 3D properties of multiple objects from a single image. In this paper, we propose a novel object representation, 3D Voxel Pattern (3DVP), that jointly encodes the key properties of objects including appearance, 3D shape, viewpoint, occlusion and truncation. We discover 3DVPs in a data-driven way, and train a bank of specialized detectors for a dictionary of 3DVPs. The 3DVP detectors are capable of detecting objects with specific visibility patterns and transferring the meta-data from the 3DVPs to the detected objects, such as 2D segmentation mask, 3D pose as well as occlusion or truncation boundaries. The transferred meta-data allows us to infer the occlusion relationship among objects, which in turn provides improved object recognition results. Experiments are conducted on the KITTI detection benchmark [17] and the outdoor-scene dataset [41]. We improve state-of-the-art results on car detection and pose estimation with notable margins (6 in difficult data of KITTI). We also verify the ability of our method in accurately segmenting objects from the background and localizing them in 3D.", "We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23][24]. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset[26].", "We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. 
The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors and sub-category detection. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset.", "Convolutional network techniques have recently achieved great success in vision based detection tasks. This paper introduces the recent development of our research on transplanting the fully convolutional network technique to the detection tasks on 3D range scan data. Specifically, the scenario is set as the vehicle detection task from the range data of Velodyne 64E lidar. We proposes to present the data in a 2D point map and use a single 2D end-to-end fully convolutional network to predict the objectness confidence and the bounding boxes simultaneously. By carefully design the bounding box encoding, it is able to predict full 3D bounding boxes even using a 2D convolutional network. Experiments on the KITTI dataset shows the state-of-the-art performance of the proposed method." ] }
1908.06550
2964221281
Abstract In two earlier papers we derived congruence formats with regard to transition system specifications for weak semantics on the basis of a decomposition method for modal formulas. The idea is that a congruence format for a semantics must ensure that the formulas in the modal characterisation of this semantics are always decomposed into formulas that are again in this modal characterisation. The stability and divergence requirements that are imposed on many of the known weak semantics have so far been outside the realm of this method. Stability refers to the absence of a τ-transition. We show, using the decomposition method, how congruence formats can be relaxed for weak semantics that are stability-respecting. This relaxation for instance brings the priority operator within the range of the stability-respecting branching bisimulation format. Divergence, which refers to the presence of an infinite sequence of τ-transitions, escapes the inductive decomposition method. We circumvent this problem by proving that a congruence format for a stability-respecting weak semantics is also a congruence format for its divergence-preserving counterpart.
In @cite_30 @cite_0 he employs ordered SOS (OSOS) TSSs @cite_4 . An OSOS TSS allows no negative premises, but includes priorities between rules: @math means that @math can only be applied if @math cannot. An OSOS specification can be seen as, or translated into, a GSOS specification with negative premises. Each rule @math with exactly one higher-priority rule @math is replaced by a number of rules, one for each (positive) premise of @math ; in the copy of @math , this premise is negated. For a rule @math with multiple higher-priority rules @math , this replacement is carried out for each such @math .
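As a toy illustration of this translation (our rendering in standard SOS notation, assuming amsmath/amssymb; the labels a, b, c, d and the operator f are hypothetical): let a rule have premise x --a--> x' and conclusion f(x) --b--> t, and let its single higher-priority rule have the two premises x --c--> y and x --d--> z. The priority is compiled into two GSOS rules, one per premise of the higher-priority rule, with that premise negated:

```latex
% The lower-priority rule may fire exactly when some premise of the
% dominating rule fails:
\[
  \frac{x \xrightarrow{a} x' \qquad x \overset{c}{\nrightarrow}}
       {f(x) \xrightarrow{b} t}
  \qquad\qquad
  \frac{x \xrightarrow{a} x' \qquad x \overset{d}{\nrightarrow}}
       {f(x) \xrightarrow{b} t}
\]
```

Together, the two copies derive f(x) --b--> t precisely when the original rule applies and the dominating rule cannot.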
{ "cite_N": [ "@cite_30", "@cite_0", "@cite_4" ], "mid": [ "2114575345", "2952621790", "2943842266", "2514431540" ], "abstract": [ "The focus of this paper is on the public communication required for generating a maximal-rate secret key (SK) within the multiterminal source model of Csiszar and Narayan. Building on the prior work of Tyagi for the two-terminal scenario, we derive a lower bound on the communication complexity, @math , defined to be the minimum rate of public communication needed to generate a maximal-rate SK. It is well known that the minimum rate of communication for omniscience, denoted by @math , is an upper bound on @math . For the class of pairwise independent network (PIN) models defined on uniform hypergraphs, we show that a certain Type @math condition, which is verifiable in polynomial time, guarantees that our lower bound on @math meets the @math upper bound. Thus, the PIN models satisfying our condition are @math -maximal, indicating that the upper bound @math holds with equality. This allows us to explicitly evaluate @math for such PIN models. We also give several examples of PIN models that satisfy our Type @math condition. Finally, we prove that for an arbitrary multiterminal source model, a stricter version of our Type @math condition implies that communication from all terminals (omnivocality) is needed for establishing an SK of maximum rate. For three-terminal source models, the converse is also true: omnivocality is needed for generating a maximal-rate SK only if the strict Type @math condition is satisfied. However, for the source models with four or more terminals, counterexamples exist showing that the converse does not hold in general.", "Let @math be a nontrivial @math -ary predicate. Consider a random instance of the constraint satisfaction problem @math on @math variables with @math constraints, each being @math applied to @math randomly chosen literals. Provided the constraint density satisfies @math , such an instance is unsatisfiable with high probability. The problem is to efficiently find a proof of unsatisfiability. We show that whenever the predicate @math supports a @math - probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree @math (which runs in time @math ) refute a random instance of @math . In particular, the polynomial-time SOS algorithm requires @math constraints to refute random instances of CSP @math when @math supports a @math -wise uniform distribution on its satisfying assignments. Together with recent work of [LRS15], our result also implies that polynomial-size semidefinite programming relaxation for refutation requires at least @math constraints. Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. For every constraint predicate @math , they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of [AOW15] and [RRS16], this full three-way tradeoff is , up to lower-order factors.", "Given a graph @math and a subset @math of terminals, a of @math is a tree that spans @math . In the vertex-weighted Steiner tree (VST) problem, each vertex is assigned a non-negative weight, and the goal is to compute a minimum weight Steiner tree of @math . 
We study a natural generalization of the VST problem motivated by multi-level graph construction, the (V-GSST), which can be stated as follows: given a graph @math and terminals @math , where each terminal @math requires a facility of a minimum grade of service @math , compute a Steiner tree @math by installing facilities on a subset of vertices, such that any two vertices requiring a certain grade of service are connected by a path in @math with the minimum grade of service or better. Facilities of higher grade are more costly than facilities of lower grade. Multi-level variants such as this one can be useful in network design problems where vertices may require facilities of varying priority. While similar problems have been studied in the edge-weighted case, they have not been studied as well in the more general vertex-weighted case. We first describe a simple heuristic for the V-GSST problem whose approximation ratio depends on @math , the number of grades of service. We then generalize the greedy algorithm of [Klein & Ravi, 1995] to show that the V-GSST problem admits a @math -approximation, where @math is the set of terminals requiring some facility. This result is surprising, as it shows that the (seemingly harder) multi-grade problem can be approximated as well as the VST problem, and that the approximation ratio does not depend on the number of grades of service.", "We present @math , a decision support framework to provision edge servers for online services providers (OSPs). @math takes advantage of the increasingly flexible edge server placement, which is enabled by new technologies such as edge computing platforms, cloudlets and network function virtualization, to optimize the overall performance and cost of edge infrastructures. The key difference between @math and traditional server placement approaches lies on that @math can discover proper unforeseen edge locations which significantly improve the efficiency and reduce the cost of edge provisioning. We show how @math effectively identifies promising edge locations which are close to a collection of users merely with inaccurate network distance estimation methods, e.g., geographic coordinate (GC) and network coordinate systems (NC). We also show how @math comprehensively considers various pragmatic concerns in edge provisioning, such as traffic limits by law or ISP policy, edge site deployment and resource usage cost, over-provisioning for fault tolerance, etc., with a simple optimization model. We simulate @math using real network data at global and county-wide scales. Measurement-driven simulations show that with a given cost budget @math can improve user performance by around 10-45 percent at global scale networks and 15-35 percent at a country-wide scale network." ] }
1908.06550
2964221281
Abstract In two earlier papers we derived congruence formats with regard to transition system specifications for weak semantics on the basis of a decomposition method for modal formulas. The idea is that a congruence format for a semantics must ensure that the formulas in the modal characterisation of this semantics are always decomposed into formulas that are again in this modal characterisation. The stability and divergence requirements that are imposed on many of the known weak semantics have so far been outside the realm of this method. Stability refers to the absence of a τ-transition. We show, using the decomposition method, how congruence formats can be relaxed for weak semantics that are stability-respecting. This relaxation for instance brings the priority operator within the range of the stability-respecting branching bisimulation format. Divergence, which refers to the presence of an infinite sequence of τ-transitions, escapes the inductive decomposition method. We circumvent this problem by proving that a congruence format for a stability-respecting weak semantics is also a congruence format for its divergence-preserving counterpart.
If patience rules are not allowed to have a lower priority than other rules, then the (r)bbo format, upon translation from OSOS to GSOS, can be seen as a subformat of our (rooted) stability-respecting branching bisimulation format. The basic idea is that in the rbbo format all arguments of so-called @math -preserving function symbols @cite_0 , which are the only ones allowed to occur in targets, are declared @math -liquid; in the bbo format, all arguments of function symbols are declared @math -liquid. Moreover, all arguments of function symbols that occur as the left-hand side of a positive premise are declared @math -liquid. In the (r)bbo format, however, patience rules are, under strict conditions, allowed to be dominated by other rules, which in our setting gives rise to patience rules with negative premises. This is outside the realm of our rooted stability-respecting branching bisimulation format. On the other hand, the TSSs of the process algebra BPA @math , the binary Kleene star and deadlock testing (see @cite_19 @cite_5 ), for which rooted convergent branching bisimulation equivalence is a congruence, are outside the rbbo format but within the rooted stability-respecting branching bisimulation format.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_5" ], "mid": [ "2041561080", "1966112122", "1712407203", "1537995368" ], "abstract": [ "In this study, we present rule formats for four main notions of bisimulation with silent moves. Weak bisimulation is a congruence for any process algebra defined by WB cool rules; we have similar results for rooted weak bisimulation (Milner''s observational equivalence''''), branching bisimulation, and rooted branching bisimulation. The theorems stating that, say, observational equivalence is an appropriate notion of equality for CCS are corollaries of the results of this paper. We also give sufficient conditions under which equational axiom systems can be generated from operational rules. Indeed, many equational axiom systems appearing in the literature are instances of this general theory.", "In the concurrent language CCS, two programs are considered the same if they are bisimilar . Several years and many researchers have demonstrated that the theory of bisimulation is mathematically appealing and useful in practice. However, bisimulation makes too many distinctions between programs. We consider the problem of adding operations to CCS to make bisimulation fully abstract. We define the class of GSOS operations, generalizing the style and technical advantages of CCS operations. We characterize GSOS congruence in as a bisimulation-like relation called ready-simulation . Bisimulation is strictly finer than ready simulation, and hence not a congruence for any GSOS language.", "This paper describes two new bisimulation equivalences for the pure untyped call-by-value spl lambda -calculus, called enf bisimilarity and enf bisimilarity up to spl eta . They are based on eager reduction of terms to eager normal form (enf), analogously to co-inductive bisimulation characterizations of Levy-Longo tree equivalence and Bohm tree equivalence (up to spl eta ). We argue that enf bisimilarity is the call-by-value analogue of Levy-Longo tree equivalence. Enf bisimilarity (up to spl eta ) is the congruence on source terms induced by the call-by-value CPS transform and Bohm tree equivalence (up to spl eta ) on target terms. Enf bisimilarity and enf bisimilarity up to spl eta enjoy powerful bisimulation proof principles which, among other things, can be used to establish a retraction theorem for the call-by-value CPS transform.", "Abstract On the basis of an operational bisimulation account of Bohm tree equivalence, a novel operationally-based development of the Bohm tree theory is presented, including an elementary congruence proof for Bohm tree equivalence. The approach is also applied to other sensible and lazy tree theories. Finally, a syntactic proof principle, called bisimulation up to context, is derived from the congruence proofs. It is used to give a simple syntactic proof of the least fixed point property of fixed point combinators. The paper surveys notions of bisimulation and trees for sensible λ-theories based on reduction to head normal forms as well as for lazy λ-theories based on weak head normal forms." ] }
1908.06560
2967118052
Until now, researchers have proposed several novel heterogeneous defect prediction (HDP) methods with promising performance. To the best of our knowledge, whether HDP methods can perform significantly better than unsupervised methods has not yet been thoroughly investigated. In this article, we perform a replication study to take a holistic look at this issue. In particular, we compare five state-of-the-art HDP methods with five unsupervised methods. Final results surprisingly show that these HDP methods do not perform significantly better than some of the unsupervised methods (especially the simple unsupervised methods proposed by ) in terms of two non-effort-aware performance measures and four effort-aware performance measures. Then, we perform diversity analysis on defective modules via McNemar's test and find that the prediction diversity is more obvious when the comparison is performed between the HDP methods and the unsupervised methods than when it is performed only between the HDP methods or only between the unsupervised methods. This shows that the HDP methods and the unsupervised methods are complementary to each other in identifying defective modules to some extent. Finally, we investigate the feasibility of the five HDP methods by considering two satisfactory criteria recommended by previous CPDP studies and find that the satisfactory ratio of these HDP methods is still pessimistic. The above empirical results indicate that there is still a long way for heterogeneous defect prediction to go. More effective HDP methods need to be designed, and the unsupervised methods should be considered as baselines.
Researchers have conducted a set of empirical studies to investigate the feasibility of CPDP on real-world software projects. @cite_75 analyzed 12 real-world projects from open-source communities and the Microsoft corporation. After running 622 cross-project predictions, they found that only 3.4% of them worked. @cite_65 analyzed another 10 open-source projects. After running 160,586 cross-project predictions, they found that only 0.32% of them worked. @cite_1 @cite_42 likewise found unsatisfactory performance of CPDP in the context of just-in-time (change-level) software defect prediction @cite_57 . @cite_48 investigated the feasibility of CPDP in terms of effort-aware performance measures (i.e., taking the limitation of available SQA resources into consideration). They surprisingly found that the performance of CPDP is no worse than WPDP and significantly better than the random method. Turhan @cite_9 summarized the reasons for the poor performance of CPDP via the concept of dataset shift. He classified the different forms of dataset shift into 6 categories, such as covariate shift and prior probability shift. His analysis forms the theoretical support for follow-up studies on CPDP.
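The diversity analysis via McNemar's test mentioned in the abstract can be reproduced along the following lines; this is a minimal sketch with made-up correctness vectors, not the study's actual pipeline:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired outcomes on the same defective modules:
# 1 = the method correctly flags the module as defective.
hdp_correct = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
unsup_correct = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1])

# 2x2 contingency table of (HDP correct?, baseline correct?) counts;
# McNemar's test only uses the discordant off-diagonal cells.
table = np.zeros((2, 2), dtype=int)
for a, b in zip(hdp_correct, unsup_correct):
    table[1 - a, 1 - b] += 1

result = mcnemar(table, exact=True)  # exact binomial variant for small n
print(result.statistic, result.pvalue)
```

A small p-value indicates that the two methods disagree systematically on which modules they identify, i.e., they are diverse and potentially complementary.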
{ "cite_N": [ "@cite_48", "@cite_9", "@cite_42", "@cite_65", "@cite_1", "@cite_57", "@cite_75" ], "mid": [ "2360967250", "2036055923", "2127623179", "2008596407" ], "abstract": [ "Software defect prediction, which predicts defective code regions, can help developers find bugs and prioritize their testing efforts. To build accurate prediction models, previous studies focus on manually designing features that encode the characteristics of programs and exploring different machine learning algorithms. Existing traditional features often fail to capture the semantic differences of programs, and such a capability is needed for building accurate prediction models. To bridge the gap between programs' semantics and defect prediction features, this paper proposes to leverage a powerful representation-learning algorithm, deep learning, to learn semantic representation of programs automatically from source code. Specifically, we leverage Deep Belief Network (DBN) to automatically learn semantic features from token vectors extracted from programs' Abstract Syntax Trees (ASTs). Our evaluation on ten open source projects shows that our automatically learned semantic features significantly improve both within-project defect prediction (WPDP) and cross-project defect prediction (CPDP) compared to traditional features. Our semantic features improve WPDP on average by 14.7 in precision, 11.5 in recall, and 14.2 in F1. For CPDP, our semantic features based approach outperforms the state-of-the-art technique TCA+ with traditional features by 8.9 in F1.", "Software defect prediction has been a popular research topic in recent years and is considered as a means for the optimization of quality assurance activities. Defect prediction can be done in a within-project or a cross-project scenario. The within-project scenario produces results with a very high quality, but requires historic data of the project, which is often not available. For the cross-project prediction, the data availability is not an issue as data from other projects is readily available, e.g., in repositories like PROMISE. However, the quality of the defect prediction results is too low for practical use. Recent research showed that the selection of appropriate training data can improve the quality of cross-project defect predictions. In this paper, we propose distance-based strategies for the selection of training data based on distributional characteristics of the available data. We evaluate the proposed strategies in a large case study with 44 data sets obtained from 14 open source projects. Our results show that our training data selection strategy improves the achieved success rate of cross-project defect predictions significantly. However, the quality of the results still cannot compete with within-project defect prediction.", "There are lots of different software metrics discovered and used for defect prediction in the literature. Instead of dealing with so many metrics, it would be practical and easy if we could determine the set of metrics that are most important and focus on them more to predict defectiveness. We use Bayesian networks to determine the probabilistic influential relationships among software metrics and defect proneness. In addition to the metrics used in Promise data repository, we define two more metrics, i.e. NOD for the number of developers and LOCQ for the source code quality. We extract these metrics by inspecting the source code repositories of the selected Promise data repository data sets. 
At the end of our modeling, we learn the marginal defect proneness probability of the whole software system, the set of most effective metrics, and the influential relationships among metrics and defectiveness. Our experiments on nine open source Promise data repository data sets show that response for class (RFC), lines of code (LOC), and lack of coding quality (LOCQ) are the most effective metrics whereas coupling between objects (CBO), weighted method per class (WMC), and lack of cohesion of methods (LCOM) are less effective metrics on defect proneness. Furthermore, number of children (NOC) and depth of inheritance tree (DIT) have very limited effect and are untrustworthy. On the other hand, based on the experiments on Poi, Tomcat, and Xalan data sets, we observe that there is a positive correlation between the number of developers (NOD) and the level of defectiveness. However, further investigation involving a greater number of projects is needed to confirm our findings.", "Software defect prediction helps to optimize testing resources allocation by identifying defect-prone modules prior to testing. Most existing models build their prediction capability based on a set of historical data, presumably from the same or similar project settings as those under prediction. However, such historical data is not always available in practice. One potential way of predicting defects in projects without historical data is to learn predictors from data of other projects. This paper investigates defect predictions in the cross-project context focusing on the selection of training data. We conduct three large-scale experiments on 34 data sets obtained from 10 open source projects. Major conclusions from our experiments include: (1) in the best cases, training data from other projects can provide better prediction results than training data from the same project; (2) the prediction results obtained using training data from other projects meet our criteria for acceptance on the average level, defects in 18 out of 34 cases were predicted at a Recall greater than 70 and a Precision greater than 50 ; (3) results of cross-project defect predictions are related with the distributional characteristics of data sets which are valuable for training data selection. We further propose an approach to automatically select suitable training data for projects without historical data. Prediction results provided by the training data selected by using our approach are comparable with those provided by training data from the same project." ] }
1908.06709
2968767118
In automatic speech recognition, often little training data is available for specific challenging tasks, but training of state-of-the-art automatic speech recognition systems requires large amounts of annotated speech. To address this issue, we propose a two-staged approach to acoustic modeling that combines noise and reverberation data augmentation with transfer learning to robustly address challenges such as difficult acoustic recording conditions, spontaneous speech, and speech of elderly people. We evaluate our approach using the example of German oral history interviews, where a relative average reduction of the word error rate by 19.3% is achieved.
Applying data augmentation to training data is a common approach to increase the amount of training data in order to improve the robustness of a model. In ASR it can be used, e.g., to apply multi-condition training when no real data in the desired condition is available. Data augmentation is, however, limited to acoustic effects that can be created in a sufficiently realistic manner - such as additive noise and reverberation. The data augmentation of reverberant speech for state-of-the-art LF-MMI models has been studied by @cite_15 . Several speed perturbation techniques to increase the training data variance have been investigated by @cite_20 . The method proposed in that work increases the data three-fold by creating two additional versions of each signal using the constant speed factors @math and @math ---a method that is used in many recent Kaldi training routines by default.
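A minimal sketch of this three-fold speed perturbation (our illustration using polyphase resampling, not taken from the cited work or from Kaldi): resampling a waveform by the inverse of the speed factor and playing it back at the original sample rate makes it faster or slower.

```python
from fractions import Fraction

import numpy as np
from scipy.signal import resample_poly

def perturb_speed(wave: np.ndarray, factor: float) -> np.ndarray:
    """Speed up `wave` by `factor` (e.g. 0.9 or 1.1);
    factor < 1 yields a longer, slower signal."""
    frac = Fraction(factor).limit_denominator(100)  # 0.9 -> 9/10
    # New length is len(wave) / factor samples.
    return resample_poly(wave, up=frac.denominator, down=frac.numerator)

wave = np.random.randn(16000)  # 1 s of placeholder audio at 16 kHz
augmented = [wave] + [perturb_speed(wave, f) for f in (0.9, 1.1)]
```

Note that, unlike pure time-stretching, this resampling shifts the pitch along with the tempo, matching the behavior of sox-based speed perturbation.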
{ "cite_N": [ "@cite_15", "@cite_20" ], "mid": [ "2407080277", "2953450277", "2963552443", "2963543962" ], "abstract": [ "Data augmentation is a common strategy adopted to increase the quantity of training data, avoid overfitting and improve robustness of the models. In this paper, we investigate audio-level speech augmentation methods which directly process the raw signal. The method we particularly recommend is to change the speed of the audio signal, producing 3 versions of the original signal with speed factors of 0.9, 1.0 and 1.1. The proposed technique has a low implementation cost, making it easy to adopt. We present results on 4 different LVCSR tasks with training data ranging from 100 hours to 1000 hours, to examine the effectiveness of audio augmentation in a variety of data scenarios. An average relative improvement of 4.3 was observed across the 4 tasks.", "Data augmentation (DA) is commonly used during model training, as it significantly improves test error and model robustness. DA artificially expands the training set by applying random noise, rotations, crops, or even adversarial perturbations to the input data. Although DA is widely used, its capacity to provably improve robustness is not fully understood. In this work, we analyze the robustness that DA begets by quantifying the margin that DA enforces on empirical risk minimizers. We first focus on linear separators, and then a class of nonlinear models whose labeling is constant within small convex hulls of data points. We present lower bounds on the number of augmented data points required for non-zero margin, and show that commonly used DA techniques may only introduce significant margin after adding exponentially many points to the data set.", "Data augmentation is an essential part of the training process applied to deep learning models. The motivation is that a robust training process for deep learning models depends on large annotated datasets, which are expensive to be acquired, stored and processed. Therefore a reasonable alternative is to be able to automatically generate new annotated training samples using a process known as data augmentation. The dominant data augmentation approach in the field assumes that new training samples can be obtained via random geometric or appearance transformations applied to annotated training samples, but this is a strong assumption because it is unclear if this is a reliable generative model for producing new training samples. In this paper, we provide a novel Bayesian formulation to data augmentation, where new annotated training points are treated as missing variables and generated based on the distribution learned from the training set. For learning, we introduce a theoretically sound algorithm --- generalised Monte Carlo expectation maximisation, and demonstrate one possible implementation via an extension of the Generative Adversarial Network (GAN). Classification results on MNIST, CIFAR-10 and CIFAR-100 show the better performance of our proposed method compared to the current dominant data augmentation approach mentioned above --- the results also show that our approach produces better classification results than similar GAN models.", "Data augmentation is a ubiquitous technique for increasing the size of labeled training sets by leveraging task-specific data transformations that preserve class labels. 
While it is often easy for domain experts to specify individual transformations, constructing and tuning the more sophisticated compositions typically needed to achieve state-of-the-art results is a time-consuming manual task in practice. We propose a method for automating this process by learning a generative sequence model over user-specified transformation functions using a generative adversarial approach. Our method can make use of arbitrary, non-deterministic transformation functions, is robust to misspecified user input, and is trained on unlabeled data. The learned transformation model can then be used to perform data augmentation for any end discriminative model. In our experiments, we show the efficacy of our approach on both image and text datasets, achieving improvements of 4.0 accuracy points on CIFAR-10, 1.4 F1 points on the ACE relation extraction task, and 3.4 accuracy points when using domain-specific transformation operations on a medical imaging dataset as compared to standard heuristic augmentation approaches." ] }
1908.06709
2968767118
In automatic speech recognition, often little training data is available for specific challenging tasks, but training of state-of-the-art automatic speech recognition systems requires large amounts of annotated speech. To address this issue, we propose a two-staged approach to acoustic modeling that combines noise and reverberation data augmentation with transfer learning to robustly address challenges such as difficult acoustic recording conditions, spontaneous speech, and speech of elderly people. We evaluate our approach using the example of German oral history interviews, where a relative average reduction of the word error rate by 19.3 is achieved.
Transfer learning is an approach used to transfer knowledge of a model trained in one scenario to train a model in another, related scenario in order to improve generalization and performance @cite_14 . It is particularly useful in scenarios where only a small amount of training data is available for the main task but a large amount of annotated speech is available for a similar or related task. A detailed overview of transfer learning in speech and language processing is given by @cite_10 . Transfer learning for ASR systems using LF-MMI models has been studied by @cite_3 for many common English speech recognition tasks.
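A minimal two-stage sketch of this kind of weight transfer (our illustration with a hypothetical model and file name, not the paper's setup): pre-train on the large source corpus, then fine-tune the transferred weights on the small target corpus, typically with a reduced learning rate for the transferred layers.

```python
import torch
import torch.nn as nn

class AcousticModel(nn.Module):
    # Stand-in for an acoustic model body plus output layer.
    def __init__(self, feat_dim=40, hidden=512, n_targets=3000):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, n_targets)

    def forward(self, x):
        return self.head(self.body(x))

model = AcousticModel()
# Stage 1: weights assumed pre-trained on the large source corpus.
model.load_state_dict(torch.load("source_model.pt"))  # hypothetical path

# Stage 2: adapt on the small target corpus; the transferred body is
# fine-tuned more gently than the output head.
optimizer = torch.optim.SGD([
    {"params": model.body.parameters(), "lr": 1e-4},
    {"params": model.head.parameters(), "lr": 1e-3},
], momentum=0.9)
```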
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_3" ], "mid": [ "2963522845", "2278264165", "2745420784", "1603035390" ], "abstract": [ "Transfer learning is a vital technique that generalizes models trained for one setting or task to other settings or tasks. For example in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language, with little or no re-training data. Transfer learning is closely related to multi-task learning (cross-lingual vs. multilingual), and is traditionally studied in the name of ‘model adaptation’. Recent advance in deep learning shows that transfer learning becomes much easier and more effective with high-level abstract features learned by deep models, and the ‘transfer’ can be conducted not only between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models). This review paper summarizes some recent prominent research towards this direction, particularly for speech and language processing. We also report some results from our group and highlight the potential of this very interesting research field1.", "Transfer learning is a vital technique that generalizes models trained for one setting or task to other settings or tasks. For example in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language, with little or no re-training data. Transfer learning is closely related to multi-task learning (cross-lingual vs. multilingual), and is traditionally studied in the name of model adaptation'. Recent advance in deep learning shows that transfer learning becomes much easier and more effective with high-level abstract features learned by deep models, and the transfer' can be conducted not only between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models). This review paper summarizes some recent prominent research towards this direction, particularly for speech and language processing. We also report some results from our group and highlight the potential of this very interesting research field.", "Transfer learning borrows knowledge from a source domain to facilitate learning in a target domain. Two primary issues to be addressed in transfer learning are what and how to transfer. For a pair of domains, adopting different transfer learning algorithms results in different knowledge transferred between them. To discover the optimal transfer learning algorithm that maximally improves the learning performance in the target domain, researchers have to exhaustively explore all existing transfer learning algorithms, which is computationally intractable. As a trade-off, a sub-optimal algorithm is selected, which requires considerable expertise in an ad-hoc way. Meanwhile, it is widely accepted in educational psychology that human beings improve transfer learning skills of deciding what to transfer through meta-cognitive reflection on inductive transfer learning practices. Motivated by this, we propose a novel transfer learning framework known as Learning to Transfer (L2T) to automatically determine what and how to transfer are the best by leveraging previous transfer learning experiences. 
We establish the L2T framework in two stages: 1) we first learn a reflection function encrypting transfer learning skills from experiences; and 2) we infer what and how to transfer for a newly arrived pair of domains by optimizing the reflection function. Extensive experiments demonstrate the L2T's superiority over several state-of-the-art transfer learning algorithms and its effectiveness on discovering more transferable knowledge.", "Transfer learning aims at reusing the knowledge in some source tasks to improve the learning of a target task. Many transfer learning methods assume that the source tasks and the target task be related, even though many tasks are not related in reality. However, when two tasks are unrelated, the knowledge extracted from a source task may not help, and even hurt, the performance of a target task. Thus, how to avoid negative transfer and then ensure a \"safe transfer\" of knowledge is crucial in transfer learning. In this paper, we propose an Adaptive Transfer learning algorithm based on Gaussian Processes (AT-GP), which can be used to adapt the transfer learning schemes by automatically estimating the similarity between a source and a target task. The main contribution of our work is that we propose a new semi-parametric transfer kernel for transfer learning from a Bayesian perspective, and propose to learn the model with respect to the target task, rather than all tasks as in multi-task learning. We can formulate the transfer learning problem as a unified Gaussian Process (GP) model. The adaptive transfer ability of our approach is verified on both synthetic and real-world datasets." ] }
1908.06570
2966882610
First, a new perspective on placement delivery array (PDA) design, based on binary matrices, is introduced, by which the PDA design problem can be simplified. From this new perspective, and based on some families of combinatorial designs, new schemes with low subpacketization for the centralized coded caching problem are constructed. We also give a technique for constructing new coded caching schemes from known schemes, based on the direct product of PDAs.
Although optimal in rate, the caching scheme in @cite_22 has a limitation in practical implementations: under this scheme, each file is divided into @math packets (this number is also referred to as the file size or subpacketization in some literature), which grows exponentially with @math @cite_9 . For practical applications, it is important to construct coded caching schemes with a smaller packet number.
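To see the growth rate concretely: in the scheme of @cite_22 , with K users each caching a fraction M/N of the library and t = KM/N an integer, each file is split into binomial(K, t) packets, which is exponential in the number of users for a fixed cache ratio. A quick numeric check (our sketch of this well-known formula):

```python
from math import comb

def subpacketization(K: int, M: int, N: int) -> int:
    """Packets per file in the Maddah-Ali--Niesen scheme (KM/N integral)."""
    assert (K * M) % N == 0, "t = KM/N must be an integer"
    return comb(K, K * M // N)

# Cache ratio M/N = 1/2: the packet number explodes with K.
for K in (8, 16, 32, 64):
    print(K, subpacketization(K, 1, 2))
# 8 -> 70, 16 -> 12870, 32 -> 601080390, 64 -> ~1.8e18
```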
{ "cite_N": [ "@cite_9", "@cite_22" ], "mid": [ "2517585112", "2915582513", "2975151417", "2963935351" ], "abstract": [ "We study a noiseless broadcast link serving @math users whose requests arise from a library of @math files. Every user is equipped with a cache of size @math files each. It has been shown that by splitting all the files into packets and placing individual packets in a random independent manner across all the caches prior to any transmission, at most @math file transmissions are required for any set of demands from the library. The achievable delivery scheme involves linearly combining packets of different files following a greedy clique cover solution to the underlying index coding problem. This remarkable multiplicative gain of random placement and coded delivery has been established in the asymptotic regime when the number of packets per file @math scales to infinity. The asymptotic coding gain obtained is roughly @math . In this paper, we initiate the finite-length analysis of random caching schemes when the number of packets @math is a function of the system parameters @math , and @math . Specifically, we show that the existing random placement and clique cover delivery schemes that achieve optimality in the asymptotic regime can have at most a multiplicative gain of 2 even if the number of packets is exponential in the asymptotic gain @math . Furthermore, for any clique cover-based coded delivery and a large class of random placement schemes that include the existing ones, we show that the number of packets required to get a multiplicative gain of @math is at least @math . We design a new random placement and an efficient clique cover-based delivery scheme that achieves this lower bound approximately. We also provide tight concentration results that show that the average (over the random placement involved) number of transmissions concentrates very well requiring only a polynomial number of packets in the rest of the system parameters.", "Coded caching scheme recently has become quite popular in the wireless network, since the maximum transmission amount @math reduces effectively during the peak-traffic times. To realize a coded caching scheme, each file must be divided into @math packets, which usually increases the computation complexity of a coded caching scheme. So we prefer to design a scheme with @math and @math as small as possible in practice. However, there exists a tradeoff between @math and @math . In this paper, we generalize the schemes constructed by (IEEE Transactions on Information Theory, 64, 5755–5766, 2018) and (IEEE Transactions on Information Theory 63, 5821–5833, 2017), respectively. These two classes of schemes have a wider range of application due to the more flexible memory size than the original ones. By comparing with the previous known deterministic schemes, our new schemes have advantages on @math or @math .", "Caching is a promising solution to satisfy the ever-increasing demands for the multi-media traffics. In caching networks, coded caching is a recently proposed technique that achieves significant performance gains over the uncoded caching schemes. However, to implement the coded caching schemes, each file has to be split into @math packets, which usually increases exponentially with the number of users @math . Thus, designing caching schemes that decrease the order of @math is meaningful for practical implementations. 
In this paper, by reviewing the Ali-Niesen caching scheme, the placement delivery array (PDA) design problem is first formulated to characterize the placement issue and the delivery issue with a single array. Moreover, we show that, through designing appropriate PDA, new centralized coded caching schemes can be discovered. Second, it is shown that the Ali-Niesen scheme corresponds to a special class of PDA, which realizes the best coding gain with the least @math . Third, we present a new construction of PDA for the centralized coded caching system, wherein the cache size @math at each user (identical cache size is assumed at all users) and the number of files @math satisfies @math or @math ( @math is an integer, such that @math ). The new construction can decrease the required @math from the order @math of Ali-Niesen scheme to @math or @math , respectively, while the coding gain loss is only 1.", "We consider a cache-aided interference network which consists of a library of N files, K<sub>T</sub> transmitters and K<sub>R</sub> receivers (users), each equipped with a local cache of size M<sub>T</sub> and M<sub>R</sub> files respectively, and connected via a discrete-time additive white Gaussian noise channel. Each receiver requests an arbitrary file from the library. The objective is to design a cache placement without knowing the receivers' requests and a communication scheme such that the sum Degrees of Freedom (sum-DoF) of the delivery is maximized. This network model has been investigated by , who proposed a prefetching and a delivery scheme that achieve a sum-DoF of min M<sub>T</sub> K<sub>T</sub> + K<sub>R</sub> M<sub>R</sub>/N, K<sub>R</sub> . One of the biggest limitations of this scheme is the requirement of high subpacketization level. This paper attempts to design new algorithms to reduce the file subpacketization in such a network. In particular, we propose a new approach for both prefetching and linear delivery based on a combinatorial design called hypercube. We show that the required number of packets per file can be exponentially reduced compared to the state-of-the-art scheme proposed by , or the NMA scheme. When M<sub>T</sub> K<sub>T</sub> + K<sub>R</sub> M<sub>R</sub> ≤ K<sub>R</sub>, the achievable one-shot sum-DoF using this approach is M<sub>T</sub> K<sub>T</sub> + K<sub>R</sub> M<sub>R</sub>/N, which shows that 1) the one-shot sum-DoF scales linearly with the aggregate cache size in the network and 2) it is within a factor of 2 to the information-theoretic optimum. Surprisingly, the identical and near optimal sum-DoF performance can be achieved using the hypercube approach with a much less file subpacketization." ] }
1908.06570
2966882610
First, a new perspective on placement delivery array (PDA) design, based on binary matrices, is introduced, by which the PDA design problem can be simplified. From this new perspective, and based on some families of combinatorial designs, new schemes with low subpacketization for the centralized coded caching problem are constructed. We also give a technique for constructing new coded caching schemes from known schemes, based on the direct product of PDAs.
So far, several coded caching schemes with reduced file size have been constructed, all at the cost of an increased rate. In @cite_2 , a class of coded caching schemes with linear file size @math (i.e., @math ) were constructed from Ruzsa-Szemerédi graphs. A very interesting framework for constructing centralized coded caching schemes, named placement delivery array design (or PDA design for simplicity), was introduced in @cite_14 , and some new classes of coded caching schemes were constructed via PDA design in references @cite_14 and @cite_6 . In @cite_5 , two classes of constant-rate caching schemes with sub-exponential subpacketization were obtained from @math -free @math -partite hypergraphs. A more general class of coded caching schemes was constructed in @cite_7 from strong edge-colored bipartite graphs. References @cite_1 and @cite_0 constructed some coded caching schemes using projective geometries over finite fields, and reference @cite_18 constructed some schemes based on resolvable combinatorial designs from certain linear block codes. Some classes of coded caching schemes from other block designs, including balanced incomplete block designs (BIBDs), @math -designs and transversal designs (TDs), were obtained in @cite_17 . Summaries of known centralized coded caching schemes can be found in @cite_6 @cite_5 and @cite_1 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_7", "@cite_1", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_17" ], "mid": [ "2975151417", "2963720294", "2539811847", "2946479852" ], "abstract": [ "Caching is a promising solution to satisfy the ever-increasing demands for the multi-media traffics. In caching networks, coded caching is a recently proposed technique that achieves significant performance gains over the uncoded caching schemes. However, to implement the coded caching schemes, each file has to be split into @math packets, which usually increases exponentially with the number of users @math . Thus, designing caching schemes that decrease the order of @math is meaningful for practical implementations. In this paper, by reviewing the Ali-Niesen caching scheme, the placement delivery array (PDA) design problem is first formulated to characterize the placement issue and the delivery issue with a single array. Moreover, we show that, through designing appropriate PDA, new centralized coded caching schemes can be discovered. Second, it is shown that the Ali-Niesen scheme corresponds to a special class of PDA, which realizes the best coding gain with the least @math . Third, we present a new construction of PDA for the centralized coded caching system, wherein the cache size @math at each user (identical cache size is assumed at all users) and the number of files @math satisfies @math or @math ( @math is an integer, such that @math ). The new construction can decrease the required @math from the order @math of Ali-Niesen scheme to @math or @math , respectively, while the coding gain loss is only 1.", "The centralized coded caching scheme is a technique proposed by Maddah-Ali and Niesen as a method to reduce the network burden in peak times in a wireless network system. reformulate the problem as designing a corresponding placement delivery array and propose two new schemes from this perspective. These schemes significantly reduce the rate compared with the uncoded caching schemes. However, to implement these schemes, each file should be cut into @math pieces, where @math grows exponentially with the number of users @math . Such a constraint is obviously infeasible in the practical setting, especially when @math is large. Thus, it is desirable to design caching schemes with constant rate @math (independent of @math ) as well as smaller @math . In this paper, we view the centralized coded caching problem in a hypergraph perspective and show that designing a feasible placement delivery array is equivalent to constructing a linear and (6,3)-free 3-uniform 3-partite hypergraph. Several new results and constructions arise from our novel point of view. First, by using the famous (6,3)-theorem in extremal graph theory, we show that constant rate placement delivery arrays with @math growing linearly with @math do not exist. Second, we present two infinite classes of placement delivery arrays to show that constant rate caching schemes with @math growing sub-exponentially with @math do exist.", "The technique of coded caching proposed by Madddah-Ali and Niesen is a promising approach to alleviate the load of networks during peak-traffic times. Recently, placement delivery array (PDA) was presented to characterize both the placement and delivery phase in a single array for the centralized coded caching algorithm. In this letter, we reinterpret PDA from a new perspective, i.e., the strong edge coloring of bipartite graph (bigraph). 
We prove that, a PDA is equivalent to a strong edge colored bigraph. Thus, we can construct a class of PDAs from existing structures in bigraphs. The class subsumes the scheme proposed by Maddah- and a more general class of PDAs proposed by as special cases.", "Coded caching scheme, which is an effective technique to increase the transmission efficiency during peak traffic times, has recently become quite popular among the coding community. Generally rate can be measured to the transmission in the peak traffic times, i.e., this efficiency increases with the decreasing of rate. In order to implement a coded caching scheme, each file in the library must be split in a certain number of packets. And this number directly reflects the complexity of a coded caching scheme, i.e., the complexity increases with the increasing of the packet number. However there exists a tradeoff between the rate and packet number. So it is meaningful to characterize this tradeoff and design the related Pareto-optimal coded caching schemes with respect to both parameters. Recently, a new concept called placement delivery array (PDA) was proposed to characterize the coded caching scheme. However as far as we know no one has yet proved that one of the previously known PDAs is Pareto-optimal. In this paper, we first derive two lower bounds on the rate under the framework of PDA. Consequently, the PDA proposed by Maddah-Ali and Niesen is Pareto-optimal, and a tradeoff between rate and packet number is obtained for some parameters. Then, from the above observations and the view point of combinatorial design, two new classes of Pareto-optimal PDAs are obtained. Based on these PDAs, the schemes with low rate and packet number are obtained. Finally the performance of some previously known PDAs are estimated by comparing with these two classes of schemes." ] }
1908.06209
2968708664
Energy minimization methods are a classical tool in a multitude of computer vision applications. While they are interpretable and well-studied, their regularity assumptions are difficult to design by hand. Deep learning techniques, on the other hand, are purely data-driven, often provide excellent results, but are very difficult to constrain to predefined physical or safety-critical models. A possible combination of the two approaches is to design a parametric energy and train the free parameters in such a way that minimizers of the energy correspond to desired solutions on a set of training examples. Unfortunately, such formulations typically lead to bi-level optimization problems, for which common optimization algorithms are difficult to scale to modern requirements in data processing and efficiency. In this work, we present a new strategy to optimize these bi-level problems. We investigate surrogate single-level problems that majorize the target problems and can be implemented with existing tools, leading to efficient algorithms without collapse of the energy function. This framework of strategies opens new avenues for the training of parameterized energy minimization models from large data.
The straightforward way of optimizing bi-level problems is to consider @cite_49 @cite_2 @cite_57 . These methods directly differentiate the higher-level loss function with respect to the minimizing argument and descend in the direction of this gradient. An incomplete list of examples in image processing includes @cite_29 @cite_23 @cite_7 @cite_94 @cite_100 @cite_76 @cite_64 @cite_86 @cite_58 . This strategy requires both the higher- and lower-level problems to be smooth and the minimizing map to be invertible. This is usually facilitated by implicit differentiation, as discussed in @cite_69 @cite_60 @cite_94 @cite_23 . More generally, directly minimizing @math without assuming smoothness in @math leads to optimization problems with equilibrium constraints (MPECs); see @cite_80 for a discussion in terms of machine learning, or @cite_85 @cite_41 @cite_40 and @cite_57 . This approach also applies to the optimization layers of @cite_79 , which lend themselves well to a reformulation as a bi-level optimization problem.
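To make the implicit-differentiation step concrete, the following minimal sketch (our own illustration, not code from any of the cited works) differentiates the upper-level loss of a toy bi-level problem with a smooth quadratic lower level; the matrix A, vector b, target x_target and the weight theta are all made-up placeholders.

```python
# Sketch: implicit differentiation for a bi-level problem with a smooth
# quadratic lower level. All problem data are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 5))
b = rng.standard_normal(10)
x_target = rng.standard_normal(5)

def lower_level_solution(theta):
    # x*(theta) = argmin_x 0.5*||Ax - b||^2 + 0.5*theta*||x||^2
    H = A.T @ A + theta * np.eye(5)          # lower-level Hessian
    return np.linalg.solve(H, A.T @ b), H

def upper_loss(theta):
    x_star, _ = lower_level_solution(theta)
    return 0.5 * np.sum((x_star - x_target) ** 2)

def implicit_gradient(theta):
    # Optimality: H(theta) x* = A^T b, hence dx*/dtheta = -H^{-1} x*,
    # and dL/dtheta = (x* - x_target)^T dx*/dtheta by the chain rule.
    x_star, H = lower_level_solution(theta)
    dx_dtheta = -np.linalg.solve(H, x_star)
    return (x_star - x_target) @ dx_dtheta

theta, eps = 0.5, 1e-6
fd = (upper_loss(theta + eps) - upper_loss(theta - eps)) / (2 * eps)
print(implicit_gradient(theta), fd)   # the two gradients should agree
```

The invertibility of the lower-level Hessian H is exactly the requirement noted above; if H were singular, the implicit gradient would not be well-defined.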
{ "cite_N": [ "@cite_64", "@cite_41", "@cite_29", "@cite_85", "@cite_2", "@cite_58", "@cite_69", "@cite_60", "@cite_23", "@cite_49", "@cite_80", "@cite_7", "@cite_57", "@cite_79", "@cite_40", "@cite_86", "@cite_76", "@cite_94", "@cite_100" ], "mid": [ "2067448964", "2164571150", "2086953401", "2059783351" ], "abstract": [ "Maximization of mutual information of voxel intensities has been demonstrated to be a very powerful criterion for three-dimensional medical image registration, allowing robust and accurate fully automated affine registration of multimodal images in a variety of applications, without the need for segmentation or other preprocessing of the images. In this paper, we investigate the performance of various optimization methods and multiresolution strategies for maximization of mutual information, aiming at increasing registration speed when matching large high-resolution images. We show that mutual information is a continuous function of the affine registration parameters when appropriate interpolation is used and we derive analytic expressions of its derivatives that allow numerically exact evaluation of its gradient. Various multiresolution gradient- and non-gradient-based optimization strategies, such as Powell, simplex, steepest-descent, conjugate-gradient, quasi-Newton and Levenberg—Marquardt methods, are evaluated for registration of computed tomography (CT) and magnetic resonance images of the brain. Speed-ups of a factor of 3 on average compared to Powell's method at full resolution are achieved with similar precision and without a loss of robustness with the simplex, conjugate-gradient and Levenberg—Marquardt method using a two-level multiresolution scheme. Large data sets such as 2562 × 128 MR and 5122 × 48 CT images can be registered with subvoxel precision in < 5 min CPU time on current workstations.", "One central issue in practically deploying network coding is the adaptive and economic allocation of network resource. We cast this as an optimization, where the net-utility-the difference between a utility derived from the attainable multicast throughput and the total cost of resource provisioning-is maximized. By employing the MAX of flows characterization of the admissible rate region for multicasting, this paper gives a novel reformulation of the optimization problem, which has a separable structure. The Lagrangian relaxation method is applied to decompose the problem into subproblems involving one destination each. Our specific formulation of the primal problem results in two key properties. First, the resulting subproblem after decomposition amounts to the problem of finding a shortest path from the source to each destination. Second, assuming the net-utility function is strictly concave, our proposed method enables a near-optimal primal variable to be uniquely recovered from a near-optimal dual variable. A numerical robustness analysis of the primal recovery method is also conducted. For ill-conditioned problems that arise, for instance, when the cost functions are linear, we propose to use the proximal method, which solves a sequence of well-conditioned problems obtained from the original problem by adding quadratic regularization terms. Furthermore, the simulation results confirm the numerical robustness of the proposed algorithms. 
Finally, the proximal method and the dual subgradient method can be naturally extended to provide an effective solution for applications with multiple multicast sessions", "Minimization with orthogonality constraints (e.g., (X^ X = I )) and or spherical constraints (e.g., ( x _2 = 1 )) has wide applications in polynomial optimization, combinatorial optimization, eigenvalue problems, sparse PCA, p-harmonic flows, 1-bit compressive sensing, matrix rank minimization, etc. These problems are difficult because the constraints are not only non-convex but numerically expensive to preserve during iterations. To deal with these difficulties, we apply the Cayley transform—a Crank-Nicolson-like update scheme—to preserve the constraints and based on it, develop curvilinear search algorithms with lower flops compared to those based on projections and geodesics. The efficiency of the proposed algorithms is demonstrated on a variety of test problems. In particular, for the maxcut problem, it exactly solves a decomposition formulation for the SDP relaxation. For polynomial optimization, nearest correlation matrix estimation and extreme eigenvalue problems, the proposed algorithms run very fast and return solutions no worse than those from their state-of-the-art algorithms. For the quadratic assignment problem, a gap 0.842 to the best known solution on the largest problem “tai256c” in QAPLIB can be reached in 5 min on a typical laptop.", "Many database management systems support whole-image matching. However, users may only remember certain subregions of the images. In this paper, we develop Padding and Reduction Algorithms to support subimage queries of arbitrary size based on local color information. The idea is to estimate the best- case lower bound to the dissimilarity measure between the query and the image. By making use of multiresolution representation, this lower bound becomes tighter as the scale becomes finer. Because image contents are usually pre- extracted and stored, a key issue is how to determine the number of levels used in the representation. We address this issue analytically by estimating the CPU and I O costs, and experimentally by comparing the performance and accuracy of the outcomes of various filtering schemes. Our findings suggest that a 3-level hierarchy is preferred. We also study three strategies for searching multiple resolutions. Our studies indicate that the hybrid strategy with horizontal filtering on the coarse level and vertical filtering on remaining levels is the best choice when using Padding and Reduction Algorithms in the preferred 3-level multiresolution representation. The best 10 desired images can be retrieved efficiently and effectively from a collection of a thousand images in about 3.5 seconds.© (1997) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only." ] }
1908.06209
2968708664
Energy minimization methods are a classical tool in a multitude of computer vision applications. While they are interpretable and well-studied, their regularity assumptions are difficult to design by hand. Deep learning techniques, on the other hand, are purely data-driven, often provide excellent results, but are very difficult to constrain to predefined physical or safety-critical models. A possible combination of the two approaches is to design a parametric energy and train the free parameters in such a way that minimizers of the energy correspond to desired solutions on a set of training examples. Unfortunately, such formulations typically lead to bi-level optimization problems, for which common optimization algorithms are difficult to scale to modern requirements in data processing and efficiency. In this work, we present a new strategy to optimize these bi-level problems. We investigate surrogate single-level problems that majorize the target problems and can be implemented with existing tools, leading to efficient algorithms without collapse of the energy function. This framework of strategies enables new avenues for the training of parameterized energy minimization models from large data.
Unrolling is a prominent strategy in applied bi-level optimization across fields, e.g. in the MRF literature @cite_59 @cite_10 , in deep learning @cite_11 @cite_28 @cite_14 @cite_87 , and in variational settings @cite_71 @cite_97 @cite_54 @cite_26 @cite_75 @cite_39 . The problem is transformed into a single-level problem by choosing an optimization algorithm @math that produces an approximate solution to the lower-level problem after a fixed number of iterations. @math is then replaced by @math . Automatic differentiation @cite_61 allows for an efficient evaluation of the gradient of the upper-level loss w.r.t. this reduced objective. In general, these strategies are very successful in practice: they combine the model and its optimization method into a single feed-forward process, where the model is again only implicitly present. Later works @cite_73 @cite_89 @cite_102 @cite_75 allow the lower-level parameters to change in between the fixed number of iterations, leading to structures that model differential equations and stray further from the underlying modelling. As pointed out in @cite_17 , these strategies are more aptly considered as a set of nested quadratic lower-level problems.
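The following sketch (illustrative only; the quadratic energy and all data are invented) unrolls a fixed number of gradient steps on a lower-level energy and propagates the derivative w.r.t. the parameter alongside the iterates, a hand-written stand-in for what automatic differentiation frameworks do when differentiating through the algorithm.

```python
# Sketch: unroll K gradient steps on a quadratic lower-level energy and carry
# d/dtheta through the iterations, mimicking autodiff through A(theta).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 5))
b = rng.standard_normal(10)
x_target = rng.standard_normal(5)

def unrolled_loss_and_grad(theta, K=50, tau=0.05):
    # Lower level: E(x; theta) = 0.5*||Ax - b||^2 + 0.5*theta*||x||^2
    Q = A.T @ A + theta * np.eye(5)     # grad E(x) = Q x - A^T b
    c = A.T @ b
    x = np.zeros(5)
    dx = np.zeros(5)                    # dx/dtheta, propagated alongside x
    for _ in range(K):
        # x_{k+1} = x_k - tau*(Q x_k - c); differentiating w.r.t. theta,
        # dQ/dtheta = I contributes an extra x_k term:
        dx = dx - tau * (Q @ dx + x)
        x = x - tau * (Q @ x - c)
    loss = 0.5 * np.sum((x - x_target) ** 2)
    return loss, (x - x_target) @ dx

theta, eps = 0.5, 1e-6
_, g = unrolled_loss_and_grad(theta)
fd = (unrolled_loss_and_grad(theta + eps)[0]
      - unrolled_loss_and_grad(theta - eps)[0]) / (2 * eps)
print(g, fd)   # gradient of the unrolled objective vs. finite differences
```

Note that the gradient computed here is that of the unrolled objective, not of the true minimizer, which is exactly the distinction the paragraph above draws.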
{ "cite_N": [ "@cite_61", "@cite_14", "@cite_26", "@cite_75", "@cite_87", "@cite_28", "@cite_97", "@cite_54", "@cite_102", "@cite_17", "@cite_39", "@cite_89", "@cite_59", "@cite_71", "@cite_73", "@cite_10", "@cite_11" ], "mid": [ "2198630576", "2277973662", "1548134621", "2164571150" ], "abstract": [ "We consider a bilevel optimization approach for parameter learning in nonsmooth variational models. Existing approaches solve this problem by applying implicit differentiation to a sufficiently smooth approximation of the nondifferentiable lower level problem. We propose an alternative method based on differentiating the iterations of a nonlinear primal–dual algorithm. Our method computes exact (sub)gradients and can be applied also in the nonsmooth setting. We show preliminary results for the case of multi-label image segmentation.", "This paper introduces a parallel and distributed algorithm for solving the following minimization problem with linear constraints: @math minimizef1(x1)+ź+fN(xN)subject toA1x1+ź+ANxN=c,x1źX1,ź,xNźXN,where @math Nź2, @math fi are convex functions, @math Ai are matrices, and @math Xi are feasible sets for variable @math xi. Our algorithm extends the alternating direction method of multipliers (ADMM) and decomposes the original problem into N smaller subproblems and solves them in parallel at each iteration. This paper shows that the classic ADMM can be extended to the N-block Jacobi fashion and preserve convergence in the following two cases: (i) matrices @math Ai are mutually near-orthogonal and have full column-rank, or (ii) proximal terms are added to the N subproblems (but without any assumption on matrices @math Ai). In the latter case, certain proximal terms can let the subproblem be solved in more flexible and efficient ways. We show that @math źxk+1-xkźM2 converges at a rate of o(1 k) where M is a symmetric positive semi-definte matrix. Since the parameters used in the convergence analysis are conservative, we introduce a strategy for automatically tuning the parameters to substantially accelerate our algorithm in practice. We implemented our algorithm (for the case ii above) on Amazon EC2 and tested it on basis pursuit problems with >300 GB of distributed data. This is the first time that successfully solving a compressive sensing problem of such a large scale is reported.", "We critically evaluate the current state of research in multiple query opGrnization, synthesize the requirements for a modular opCrnizer, and propose an architecture. Our objective is to facilitate future research by providing modular subproblems and a good general-purpose data structure. In rhe context of this archiuzcture. we provide an improved subsumption algorithm. and discuss migration paths from single-query to multiple-query oplimizers. The architecture has three key ingredients. First. each type of work is performed at an appropriate level of abstraction. Segond, a uniform and very compact representation stores all candidate strategies. Finally, search is handled as a discrete optimization problem separable horn the query processing tasks. 1. Problem Definition and Objectives A multiple query optimizer (h4QO) takes several queries as input and seeks to generate a good multi-strategy, an executable operator gaph that simultaneously computes answers to all the queries. The idea is to save by evaluating common subexpressions only once. 
The commonalities to be exploited include identical selections and joins, predicates that subsume other predicates, and also costly physical operators such as relation scans and SOULS. The multiple query optimization problem is to find a multi-strategy that minimizes the total cost (with overlap exploited). Figure 1 .l shows a multi-strategy generated exploiting commonalities among queries Ql-Q3 at both the logical and physical level. To be really satisfactory, a multi-query optimization algorithm must offer solution quality, ejjiciency, and ease of Permission to copy without fee all a part of this mataial is granted provided that the copies are nut made a diitributed for direct commercial advantage, the VIDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Da Blse Endowment. To copy otherwise. urturepublim identify many kinds of commonalities (e.g., by predicate splitting, sharing relation scans); and search effectively to choose a good combination of l-strategies. Efficiency requires that the optimization avoid a combinatorial explosion of possibilities, and that within those it considers, redundant work on common subexpressions be minimized. ‘Finally, ease of implementation is crucial an algorithm will be practically useful only if it is conceptually simple, easy to attach to an optimizer, and requires relatively little additional soft-", "One central issue in practically deploying network coding is the adaptive and economic allocation of network resource. We cast this as an optimization, where the net-utility-the difference between a utility derived from the attainable multicast throughput and the total cost of resource provisioning-is maximized. By employing the MAX of flows characterization of the admissible rate region for multicasting, this paper gives a novel reformulation of the optimization problem, which has a separable structure. The Lagrangian relaxation method is applied to decompose the problem into subproblems involving one destination each. Our specific formulation of the primal problem results in two key properties. First, the resulting subproblem after decomposition amounts to the problem of finding a shortest path from the source to each destination. Second, assuming the net-utility function is strictly concave, our proposed method enables a near-optimal primal variable to be uniquely recovered from a near-optimal dual variable. A numerical robustness analysis of the primal recovery method is also conducted. For ill-conditioned problems that arise, for instance, when the cost functions are linear, we propose to use the proximal method, which solves a sequence of well-conditioned problems obtained from the original problem by adding quadratic regularization terms. Furthermore, the simulation results confirm the numerical robustness of the proposed algorithms. Finally, the proximal method and the dual subgradient method can be naturally extended to provide an effective solution for applications with multiple multicast sessions" ] }
1908.06459
2969194430
Let @math be a discrete time Markov chain on a general state space. It is well-known that if @math is aperiodic and satisfies a drift and minorization condition, then it converges to its stationary distribution @math at an exponential rate. We consider the problem of computing upper bounds for the distance from stationarity in terms of the drift and minorization data. Baxendale showed that these bounds improve significantly if one assumes that @math is reversible with nonnegative eigenvalues (i.e. its transition kernel is a self-adjoint operator on @math with spectrum contained in @math ). We identify this phenomenon as a special case of a general principle: for a reversible chain with nonnegative eigenvalues, any strong random time gives direct control over the convergence rate. We formulate this principle precisely and deduce from it a stronger version of Baxendale's result. Our approach is fully quantitative and allows us to convert drift and minorization data into explicit convergence bounds. We show that these bounds are tighter than those of Rosenthal and Baxendale when applied to a well-studied example.
In the case @math , the decay rate @math of the law of @math was identified by Roberts and Tweedie @cite_18 (who use the notation @math ). Theorem 4.1(i) of @cite_18 is equivalent to a bound of the form @math . Theorem slightly improves this result by removing the factor of @math and generalizing to the case @math .
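As a purely illustrative check of this kind of geometric tail bound (a made-up toy chain, not the example from the paper), one can estimate the tail of a return time by Monte Carlo and observe the roughly geometric decay.

```python
# Illustrative only: a made-up lazy birth-death chain on {0,...,20} with drift
# toward the small set {0}. The tail P(T > t) of the return time T decays
# roughly geometrically, which is the behaviour the bound above quantifies.
import numpy as np

rng = np.random.default_rng(2)

def return_time_to_zero():
    x, t = 1, 1                           # excursion starts one step from {0}
    while x != 0:
        u = rng.random()
        if u < 0.5:
            pass                          # holding prob. 1/2 (lazy chain)
        elif u < 0.8:
            x = max(x - 1, 0)             # move toward 0 w.p. 0.3
        else:
            x = min(x + 1, 20)            # move away w.p. 0.2
        t += 1
    return t

samples = np.array([return_time_to_zero() for _ in range(20000)])
for t in (5, 10, 20, 40):
    print(t, (samples > t).mean())        # successive ratios reveal the rate
```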
{ "cite_N": [ "@cite_18" ], "mid": [ "2046838574", "2048123872", "2102318662", "2100781154" ], "abstract": [ "We prove sharp pointwise t−3 decay for scalar linear perturbations of a Schwarzschild black hole without symmetry assumptions on the data. We also consider electromagnetic and gravitational perturbations for which we obtain decay rates t−4, and t−6, respectively. We proceed by decomposition into angular momentum l and summation of the decay estimates on the Regge-Wheeler equation for fixed l. We encounter a dichotomy: the decay law in time is entirely determined by the asymptotic behavior of the Regge-Wheeler potential in the far field, whereas the growth of the constants in l is dictated by the behavior of the Regge-Wheeler potential in a small neighborhood around its maximum. In other words, the tails are controlled by small energies, whereas the number of angular derivatives needed on the data is determined by energies close to the top of the Regge-Wheeler potential. This dichotomy corresponds to the well-known principle that for initial times the decay reflects the presence of complex resonances generated by the potential maximum, whereas for later times the tails are determined by the far field. However, we do not invoke complex resonances at all, but rely instead on semiclassical Sigal-Soffer type propagation estimates based on a Mourre bound near the top energy.", "Author(s): Tataru, D | Abstract: In this article we study the pointwise decay properties of solutions to the wave equation on a class of stationary asymptotically flat backgrounds in three space dimensions. Under the assumption that uniform energy bounds and a weak form of local energy decay hold forward in time we establish a @math local uniform decay rate for linear waves. This work was motivated by open problems concerning decay rates for linear waves on Schwarzschild and Kerr backgrounds, where such a decay rate has been conjectured by R. Price. Our results apply to both of these cases.", "We study the maximum of a Gaussian field on @math ( @math ) whose correlations decay logarithmically with the distance. Kahane Kah85 introduced this model to construct mathematically the Gaussian multiplicative chaos in the subcritical case. Duplantier, Rhodes, Sheffield and Vargas DRSV12a DRSV12b extended Kahane's construction to the critical case and established the KPZ formula at criticality. Moreover, they made in DRSV12a several conjectures on the supercritical case and on the maximum of this Gaussian field. In this paper we resolve Conjecture 12 in DRSV12a : we establish the convergence in law of the maximum and show that the limit law is the Gumbel distribution convoluted by the limit of the derivative martingale.", "The spatial decay properties of Wannier functions and related quantities have been investigated using analytical and numerical methods. We find that the form of the decay is a power law times an exponential, with a particular power-law exponent that is universal for each kind of quantity. In one dimension we find an exponent of @math for Wannier functions, @math for the density matrix and for energy matrix elements, and @math or @math for different constructions of nonorthonormal Wannier-like functions." ] }
1908.06459
2969194430
Let @math be a discrete time Markov chain on a general state space. It is well-known that if @math is aperiodic and satisfies a drift and minorization condition, then it converges to its stationary distribution @math at an exponential rate. We consider the problem of computing upper bounds for the distance from stationarity in terms of the drift and minorization data. Baxendale showed that these bounds improve significantly if one assumes that @math is reversible with nonnegative eigenvalues (i.e. its transition kernel is a self-adjoint operator on @math with spectrum contained in @math ). We identify this phenomenon as a special case of a general principle: for a reversible chain with nonnegative eigenvalues, any strong random time gives direct control over the convergence rate. We formulate this principle precisely and deduce from it a stronger version of Baxendale's result. Our approach is fully quantitative and allows us to convert drift and minorization data into explicit convergence bounds. We show that these bounds are tighter than those of Rosenthal and Baxendale when applied to a well-studied example.
The most important feature of Theorem and its @math -norm version, Theorem , is that the exponential rate @math is the same as the decay rate in Theorem . As we will see in Section , this conclusion can only be drawn for reversible Markov chains with nonnegative eigenvalues and does not hold in general. Baxendale @cite_11 was the first to observe this consequence of reversibility. For a key argument, he credits a comment of Meyn on a previous draft of @cite_11 . In the case @math , Theorem is very similar to Theorem 1.3 of @cite_11 : both theorems have the same hypotheses, and both prove @math -norm convergence of the chain with the same exponential rate @math .
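The following small sketch (an assumed toy example, not taken from the cited works) illustrates the reversible, nonnegative-eigenvalue setting: a lazy random walk on a weighted graph is self-adjoint on L^2(pi), its eigenvalues lie in [0, 1], and the total-variation distance decays at a rate governed by the second-largest eigenvalue.

```python
# Sketch: reversible lazy chain with nonnegative spectrum; TV distance to
# stationarity decays like lambda_2^t. All data are illustrative.
import numpy as np

rng = np.random.default_rng(3)
W = rng.random((6, 6))
W = W + W.T                                   # symmetric edge weights
P = W / W.sum(axis=1, keepdims=True)          # reversible w.r.t. pi ~ row sums
P = 0.5 * (np.eye(6) + P)                     # lazy chain: eigenvalues in [0, 1]
pi = W.sum(axis=1) / W.sum()

# Reversibility makes D^{1/2} P D^{-1/2} symmetric (self-adjoint on L^2(pi)).
D = np.diag(np.sqrt(pi))
S = D @ P @ np.linalg.inv(D)
eigs = np.sort(np.linalg.eigvalsh(S))
print("eigenvalues:", eigs)

# TV distance from stationarity, starting in state 0, vs. lambda_2^t.
for t in (1, 5, 10, 20):
    tv = 0.5 * np.abs(np.linalg.matrix_power(P, t)[0] - pi).sum()
    print(t, tv, eigs[-2] ** t)
```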
{ "cite_N": [ "@cite_11" ], "mid": [ "2122337269", "1547388036", "2046434141", "1593631888" ], "abstract": [ "We give computable bounds on the rate of convergence of the transition probabilities to the stationary distribution for a certain class of geometrically ergodic Markov chains. Our results are dierent from earlier estimates of Meyn and Tweedie, and from estimates using coupling, although we start from essentially the same assumptions of a drift condition towards a “small set”. The estimates show a noticeable improvement on existing results if the Markov chain is reversible with respect to its stationary distribution, and especially so if the chain is also positive. The method of proof uses the first-entrance last-exit decomposition, together with new quantitative versions of a result of Kendall from discrete renewal theory.", "In many applications of Markov chains, and especially in Markov chain Monte Carlo algorithms, the rate of convergence of the chain is of critical importance. Most techniques to establish such rates require bounds on the distribution of the random regeneration time T that can be constructed, via splitting techniques, at times of return to a \"small set\" C satisfying a minorisation condition P(x,·)[greater-or-equal, slanted][var epsilon][phi](·), x[set membership, variant]C. Typically, however, it is much easier to get bounds on the time [tau]C of return to the small set itself, usually based on a geometric drift function , where . We develop a new relationship between T and [tau]C, and this gives a bound on the tail of T, based on [var epsilon],[lambda] and b, which is a strict improvement on existing results. When evaluating rates of convergence we see that our bound usually gives considerable numerical improvement on previous expressions.", "Consider a graph, G, for which the vertices can have two modes, 0 or 1. Suppose that a particle moves around on G according to a discrete time Markov chain with the following rules. With (strictly positive) probabilities pm, pc and pr it moves to a randomly chosen neighbour, changes the mode of the vertex it is at or just stands still, respectively. We call such a random process a (pm, pc, pr)-lamplighter process on G. Assume that the process starts with the particle in a fixed position and with all vertices having mode 0. The convergence rate to stationarity in terms of the total variation norm is studied for the special cases with G = KN, the complete graph with N vertices, and when G = mod N. In the former case we prove that as N --> [infinity], ((2pc + pm) 4pcpm)N log N is a threshold for the convergence rate. In the latter case we show that the convergence rate is asymptotically determined by the cover time CN in that the total variation norm after aN2 steps is given by P(CN > aN2). The limit of this probability can in turn be calculated by considering a Brownian motion with two absorbing barriers. In particular, this means that there is no threshold for this case.", "Consider a sequence of continuous-time irreducible reversible Markov chains and a sequence of initial distributions, @math . The sequence is said to exhibit @math -cutoff if the convergence to stationarity in total variation distance is abrupt, w.r.t. this sequence of initial distributions. In this work we give a characterization of @math -cutoff for an arbitrary sequence of initial distributions @math (in the above setup). Our characterization is expressed in terms of hitting times of sets which are \"worst\" w.r.t. @math . 
Consider a Markov chain on @math whose stationary distribution in @math . Let @math be the expected hitting time of the worst set of size at least @math . It was recently proved by Peres and Sousi and independently by Oliveira that @math captures the order of the mixing time. In this work we further refine this connection and show that @math -cutoff can be characterized in terms of concentration of hitting times (starting from @math ) of sets which are worst in expectation w.r.t. @math . Conversely, we construct a counter-example which demonstrates that in general cutoff (as opposed to cutoff w.r.t. a certain sequence of initial distributions) cannot be characterized in this manner. Finally, we also prove that there exists an absolute constant @math such that for every Markov chain @math , for all @math , where @math is the inverse of the spectral gap of the chain." ] }
1908.06459
2969194430
Let @math be a discrete time Markov chain on a general state space. It is well-known that if @math is aperiodic and satisfies a drift and minorization condition, then it converges to its stationary distribution @math at an exponential rate. We consider the problem of computing upper bounds for the distance from stationarity in terms of the drift and minorization data. Baxendale showed that these bounds improve significantly if one assumes that @math is reversible with nonnegative eigenvalues (i.e. its transition kernel is a self-adjoint operator on @math with spectrum contained in @math ). We identify this phenomenon as a special case of a general principle: for a reversible chain with nonnegative eigenvalues, any strong random time gives direct control over the convergence rate. We formulate this principle precisely and deduce from it a stronger version of Baxendale's result. Our approach is fully quantitative and allows us to convert drift and minorization data into explicit convergence bounds. We show that these bounds are tighter than those of Rosenthal and Baxendale when applied to a well-studied example.
The method of proof in @cite_11 uses analytic properties of generating functions for renewal sequences. In principle the argument could be extended to the case @math , but the resulting bound on the exponential convergence rate of the chain would be worse than the rate @math from Theorem . Intuitively, this is because the law of @math might introduce artificial periodicity. Our approach using Theorem is probabilistic and puts all cases @math on the same footing.
{ "cite_N": [ "@cite_11" ], "mid": [ "2343345052", "2026924428", "2950313068", "2109426593" ], "abstract": [ "This paper concerns the worst-case complexity of Gauss-Seidel method for solving a positive semi-definite linear system; or equivalently, that of cyclic coordinate descent (C-CD) for minimizing a convex quadratic function. The known provable complexity of C-CD can be @math times slower than gradient descent (GD) and @math times slower than randomized coordinate descent (R-CD). However, these gaps seem rather puzzling since so far they have not been observed in practice; in fact, C-CD usually converges much faster than GD and sometimes comparable to R-CD. Thus some researchers believe the gaps are due to the weakness of the proof, but not that of the C-CD algorithm itself. In this paper we show that the gaps indeed exist. We prove that there exists an example for which C-CD takes at least @math or @math operations, where @math is the condition number, @math is a well-studied quantity that determines the convergence rate of R-CD, and @math hides the dependency on @math . This implies that C-CD can indeed be @math times slower than GD, and @math times slower than R-CD in the worst case. Our result establishes one of the few examples in continuous optimization that demonstrates a large gap between a deterministic algorithm and its randomized counterpart. Based on the example, we establish several almost tight complexity bounds of C-CD for quadratic problems. One difficulty with the analysis of the constructed example is that the spectral radius of a non-symmetric iteration matrix does not necessarily constitutes a lower bound for the convergence rate.", "where (an)n ENo is some sequence of nonnegative numbers, (Sn),nENo is the sequence of partial sums, S0 = 0, Sn = XflXk, of another sequence (Xk)kEN of i.i.d. random variables, and A c R is a fixed Borel set such as [0,1] or [0, oo). Examples of such convolution series are subordinated distributions (f=0Oan = 1) which arise as distributions of random sums, and harmonic and ordinary renewal measures (a0 = 0, an = 1 n for all n C N in the first, an = 1 for all n C NO in the second case). These examples are in turn essential for the analysis of the large time behaviour of diverse applied models such as branching and queueing processes, they are also of interest in connection with representation theorems such as the Levy representation of infinitely divisible distributions. A traditional approach to such problems is via regular variation: If the underlying random variables are nonnegative we can use Laplace transforms and the related Abelian and Tauberian theorems [see, e.g., Stam (1973) in the context of subordination and Feller (1971, XIV.3) in connection with renewal theory; Embrechts, Maejima, and Omey (1984) is a recent treatment of generalized renewal measures along these lines]. The approach of the present paper is based on the Wiener-Levy-Gel'fand theorem and has occasionally been called the Banach algebra method. In Gruibel (1983) we gave a new variant of this method for the special case of lattice distributions, showing that by using the appropriate Banach algebras of sequences, arbitrarily fine expansions are possible under certain assumptions on the higher-order differences of (P(X1 = n))fnEN. Here we give a corresponding treatment of nonlattice distributions. We restrict ourselves to an analogue of first-order differences and obtain a number of theorems which perhaps are described best as next-term results. 
To explain this let us consider a special case in more detail.", "Using the renewal approach we prove exponential inequalities for additive functionals and empirical processes of ergodic Markov chains, thus obtaining counterparts of inequalities for sums of independent random variables. The inequalities do not require functions of the chain to be bounded and moreover all the involved constants are given by explicit formulas whenever the usual drift condition holds, which may be of interest in practical applications e.g. to MCMC algorithms.", "A proof is provided of a strong law of large numbers for a one-dimensional random walk in a dynamic random environment given by a supercritical contact process in equilibrium. The proof is based on a coupling argument that traces the space-time cones containing the infection clusters generated by single infections and uses that the random walk eventually gets trapped inside the union of these cones. For the case where the local drifts of the random walk are smaller than the speed at which infection clusters grow, the random walk eventually gets trapped inside a single cone. This in turn leads to the existence of regeneration times at which the random walk forgets its past. The latter are used to prove a functional central limit theorem and a large deviation principle. The qualitative dependence of the speed, the volatility and the rate function on the infection parameter is investigated, and some open problems are mentioned." ] }
1908.06381
2967107032
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators and the ability of vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with a duration of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights respectively during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To our best knowledge this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
Several rotorcraft UAS capable of autonomous, long-duration mission execution in benign indoor (VICON) environments have previously appeared in the literature. Focusing on the recharging solution to extend individual platform flight time and on a multi-agent scheme for constant operation, impressive operation times have been demonstrated ( @cite_28 : 24 h single-vehicle experiment; @cite_5 : 9.5 h with multiple vehicles). Recharging methods vary from wireless charging @cite_56 to contact-based charging pads @cite_5 to battery swap systems @cite_27 @cite_38 . While wireless charging offers the most flexibility since no physical contact has to be made, its charging currents are low, resulting in excessively long charging times; it is hence not an option for quick redeployment. However, interesting results have been shown in @cite_56 , demonstrating wireless powering of a 35 g, 10 W micro drone in hover flight. On the other end of the spectrum, battery swap systems offer immediate redeployment of a UAS, but require sophisticated mechanisms to hot-swap a battery as well as a pool of charged batteries that are readily available. This makes such systems less attractive for maintenance-free and cost-effective long-term operation.
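A back-of-the-envelope calculation with assumed numbers (all values below are hypothetical, chosen only to illustrate the order of magnitude) shows why low wireless charging currents translate into long downtimes compared to a contact-based pad.

```python
# Back-of-the-envelope only: every number below is an assumed placeholder.
battery_capacity_ah = 5.0      # hypothetical 5 Ah flight battery
chargers = {"contact pad": 5.0, "wireless": 0.5}   # assumed charge currents [A]

for name, current_a in chargers.items():
    hours = battery_capacity_ah / current_a   # idealized, ignores taper/losses
    print(f"{name}: ~{hours:.1f} h to recharge")
```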
{ "cite_N": [ "@cite_38", "@cite_28", "@cite_56", "@cite_27", "@cite_5" ], "mid": [ "2089829108", "2558693562", "2962722232", "2621390870" ], "abstract": [ "The past decade has seen an increased interest towards research involving Autonomous Micro Aerial Vehicles (MAVs). The predominant reason for this is their agility and ability to perform tasks too difficult or dangerous for their human counterparts and to navigate into places where ground robots cannot reach. Among MAVs, rotary wing aircraft such as quadrotors have the ability to operate in confined spaces, hover at a given point in space and perch1 or land on a flat surface. This makes the quadrotor a very attractive aerial platform giving rise to a myriad of research opportunities. The potential of these aerial platforms is severely limited by the constraints on the flight time due to limited battery capacity. This in turn arises from limits on the payload of these rotorcraft. By automating the battery recharging process, creating autonomous MAVs that can recharge their on-board batteries without any human intervention and by employing a team of such agents, the overall mission time can be greatly increased. This paper describes the development, testing, and implementation of a system of autonomous charging stations for a team of Micro Aerial Vehicles. This system was used to perform fully autonomous long-term multi-agent aerial surveillance experiments with persistent station keeping. The scalability of the algorithm used in the experiments described in this paper was also tested by simulating a persistence surveillance scenario for 10 MAVs and charging stations. Finally, this system was successfully implemented to perform a 9½ hour multi-agent persistent flight test. Preliminary implementation of this charging system in experiments involving construction of cubic structures with quadrotors showed a three-fold increase in effective mission time.", "With the goal of extending unmanned aerial vehicles mission duration, a solar recharge strategy is envisioned with lakes as preferred charging and standby areas. The Sherbrooke University Water-Air VEhicle (SUWAVE) concept developed is able to takeoff and land vertically on water. The physical prototype consists of a wing coupled to a rotating center body that minimizes the added components with a passive takeoff maneuver. A dynamic model of takeoff, validated with experimental results, serves as a design tool. The landing is executed by diving, without requiring complex control or wing folding. Structural integrity of the wing is confirmed by investigating the accelerations at impact. A predictive model is developed for various impact velocities. The final prototype has executed multiple repeatable takeoffs and has succeeded in completing full operation cycles of flying, diving, floating, and taking off.", "Many unmanned aerial vehicle surveillance and monitoring applications require observations at precise locations over long periods of time, ideally days or weeks at a time (e.g. ecosystem monitoring), which has been impractical due to limited endurance and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous small rotorcraft UAS that is capable of performing repeated sorties for long-term observation missions without any human intervention. 
We address two key technologies that are critical for such a system: full platform autonomy including emergency response to enable mission execution independently from human operators, and the ability of vision-based precision landing on a recharging station for automated energy replenishment. Experimental results of up to 11 hours of fully autonomous operation in indoor and outdoor environments illustrate the capability of our system.", "Recent developments in high frequency inductive wireless power transfer (WPT) mean that the technology has reached a point where powering small autonomous drones has become feasible. Fundamentally, drones can only carry very limited payloads and thus require light-weight WPT receiver solutions. The key to achieving light weight is operating the WPT system at high frequency: this allows both the coils and the electronics to achieve very high gravametric power densities. When operated in the MHz region, the WPT coils can be manufactured without the need for ferrite, because the low coupling factor can be offset by very high coil Q factors. To make efficient MHz power conversion circuits, wide band-gap semiconductors, including Silicon Carbide (SiC) and Gallium Nitride (GaN) have provided a step change. For powering a drone, these devices are integrated into soft-switching resonant inverter and rectifier topologies and are able to operate efficiently at tens of MHz and hundreds of watts. In addition, recently discovered designs that make these inverters tolerant to load variations and have inherent voltage or current regulation features enable an WPT system to operate effectively as the separation distance and alignment between transmitter and receiver changes, which is critical when charging a flying drone. It will be shown that combining all these individual developments has enabled the charging of a 10 W micro-drone whilst hovering in the vicinity of a charging pad." ] }
1908.06381
2967107032
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators and the ability of vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with a duration of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights respectively during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To our best knowledge this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
Our work uses a downward-facing monocular camera to estimate the pose of the landing pad in the world frame using AprilTag visual fiducial markers @cite_60 . We believe that a monocular camera is the cheapest, most lightweight, and most power-efficient sensor choice. By contrast, GPS and RTK-GPS systems suffer from precision degradation and signal loss in occluded urban or canyon environments @cite_42 . Laser range finder systems are heavy and consume considerable amounts of energy. Last but not least, stereo camera setups have limited range, determined by their baseline relative to the vehicle-to-pad distance.
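The stereo limitation can be quantified with the standard depth-error relation dZ ≈ Z^2 · d_disp / (f · B); the sketch below uses assumed camera parameters (focal length, baseline, and disparity uncertainty are all hypothetical) to show how quickly depth uncertainty grows with vehicle-to-pad distance.

```python
# Standard stereo depth-error relation with assumed (hypothetical) parameters:
# dZ ~ Z^2 * d_disp / (f * B), so uncertainty grows quadratically with range.
f_px = 600.0        # assumed focal length [px]
baseline_m = 0.10   # assumed 10 cm stereo baseline
disp_err_px = 0.25  # assumed sub-pixel disparity uncertainty [px]

for z_m in (2.0, 5.0, 10.0, 20.0):   # vehicle-to-pad distance [m]
    dz_m = z_m ** 2 * disp_err_px / (f_px * baseline_m)
    print(f"range {z_m:4.1f} m -> depth uncertainty ~{dz_m:.2f} m")
```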
{ "cite_N": [ "@cite_42", "@cite_60" ], "mid": [ "2114594485", "1982447260", "926607442", "2745859992" ], "abstract": [ "Monocular SLAM has the potential to turn inexpensive cameras into powerful pose sensors for applications such as robotics and augmented reality. We present a relocalization module for such systems which solves some of the problems encountered by previous monocular SLAM systems-tracking failure, map merging, and loop closure detection. This module extends recent advances in keypoint recognition to determine the camera pose relative to the landmarks within a single frame time of 33 ms. We first show how this module can be used to improve the robustness of these systems. Blur, sudden motion, and occlusion can all cause tracking to fail, leading to a corrupted map. Using the relocalization module, the system can automatically detect and recover from tracking failure while preserving map integrity. Extensive tests show that the system can then reliably generate maps for long sequences even in the presence of frequent tracking failure. We then show that the relocalization module can be used to recognize overlap in maps, i.e., when the camera has returned to a previously mapped area. Having established an overlap, we determine the relative pose of the maps using trajectory alignment so that independent maps can be merged and loop closure events can be recognized. The system combining all of these abilities is able to map larger environments and for significantly longer periods than previous systems.", "The main contribution of this paper is to bridge the gap between passive monocular SLAM and autonomous robotic systems. While passive monocular SLAM strives to reconstruct the scene and determine the current camera pose for any given camera motion, not every camera motion is equally suited for these tasks. In this work we propose methods to evaluate the quality of camera motions with respect to the generation of new useful map points and localization maintenance. In our experiments, we demonstrate the effectiveness of our measures using a low-cost quadrocopter. The proposed system only requires a single passive camera as exteroceptive sensor. Due to its explorative nature, the system achieves autonomous way-point navigation in challenging, unknown, GPS-denied environments.", "Cameras provide a rich source of information while being passive, cheap and lightweight for small Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, which is widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. Two key contributions make this possible: novel coupling of perception and control via relevant and diverse, multiple interpretations of the scene around the robot, leveraging recent advances in machine learning to showcase anytime budgeted cost-sensitive feature selection, and fast non-linear regression for monocular depth prediction. We empirically demonstrate the efficacy of our novel pipeline via real world experiments of more than 2 kms through dense trees with an off-the-shelf quadrotor. Moreover our pipeline is designed to combine information from other modalities like stereo and lidar.", "One camera and one low-cost inertial measurement unit (IMU) form a monocular visual-inertial system (VINS), which is the minimum sensor suite (in size, weight, and power) for the metric six degrees-of-freedom (DOF) state estimation. 
In this paper, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization. A tightly coupled, nonlinear optimization-based method is used to obtain highly accurate visual-inertial odometry by fusing preintegrated IMU measurements and feature observations. A loop detection module, in combination with our tightly coupled formulation, enables relocalization with minimum computation. We additionally perform 4-DOF pose graph optimization to enforce the global consistency. Furthermore, the proposed system can reuse a map by saving and loading it in an efficient way. The current and previous maps can be merged together by the global pose graph optimization. We validate the performance of our system on public datasets and real-world experiments and compare against other state-of-the-art algorithms. We also perform an onboard closed-loop autonomous flight on the microaerial-vehicle platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system that is applicable for different applications that require high accuracy in localization. We open source our implementations for both PCs ( https: github.com HKUST-Aerial-Robotics VINS-Mono ) and iOS mobile devices ( https: github.com HKUST-Aerial-Robotics VINS-Mobile )." ] }
1908.06381
2967107032
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators and the ability of vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with a duration of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights respectively during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To our best knowledge this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
Several landing approaches for labeled and unlabeled landing sites exist in the literature. @cite_34 present a monocular visual landing method based on estimating the 6 DOF pose of a circled H marker. The same authors extend this work to enable autonomous landing site search by using a scale-corrected PTAM algorithm @cite_58 , and relax the landing site structure to an arbitrary but feature-rich image that is matched using the ORB algorithm @cite_8 . @cite_9 @cite_7 use SVO to estimate motion from a downward-facing monocular camera, to build a probabilistic two-dimensional elevation map of unstructured terrain, and to detect safe landing spots based on a score function. @cite_22 @cite_59 develop a fully self-contained visual landing navigation pipeline using a single camera. With application to landing in urban environments, they leverage the planar character of a rooftop to perform landing site detection via a planar homography decomposition, using RANSAC to distinguish the ground plane from the elevated landing surface plane.
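As a hedged sketch of the homography-based idea (not the cited authors' implementation), the snippet below fits a homography to noisy synthetic correspondences with RANSAC and decomposes it into candidate plane poses using OpenCV; in a real pipeline the point matches would come from tracked features, and the physically consistent decomposition would be selected by cheirality checks.

```python
# Sketch: RANSAC homography fit + decomposition into candidate plane poses.
# Correspondences are synthetic; intrinsics and H_true are made-up values.
import numpy as np
import cv2

K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
rng = np.random.default_rng(4)

# Synthetic ground-truth homography (a plane seen from two views).
H_true = np.array([[1.05, 0.02, 15.0], [-0.01, 0.98, -8.0], [1e-5, 2e-5, 1.0]])
pts1 = rng.uniform(50, 500, size=(60, 2)).astype(np.float32)
pts1_h = np.hstack([pts1, np.ones((60, 1), np.float32)])
pts2 = pts1_h @ H_true.T
pts2 = (pts2[:, :2] / pts2[:, 2:]).astype(np.float32)
pts2[:10] += rng.uniform(-40, 40, size=(10, 2)).astype(np.float32)  # outliers

H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
print("inliers:", int(mask.sum()), "of", len(pts1))

# Decompose into up to four (R, t, n) hypotheses for downstream selection.
n_sol, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
print("candidate decompositions:", n_sol)
```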
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_8", "@cite_9", "@cite_58", "@cite_59", "@cite_34" ], "mid": [ "2100805615", "1997522477", "2086123859", "1984093092" ], "abstract": [ "This paper presents an approach to detect safe landing areas for a flying robot, on the basis of a sequence of monocular images. The approach does not require precise position and attitude sensors: it exploits the relations between 2D image homographies and 3D planes. The combination of a robust homography estimation and of an adaptive thresholding of correlation scores between registered images yields the update of a stochastic grid, that exhibits the horizontal planar areas perceived. This grid allows the integration of data gathered at various altitudes. Results are presented throughout the article.", "In this paper, we present an onboard monocular vision system for autonomous takeoff, hovering and landing of a Micro Aerial Vehicle (MAV). Since pose information with metric scale is critical for autonomous flight of a MAV, we present a novel solution to six degrees of freedom (DOF) pose estimation. It is based on a single image of a typical landing pad which consists of the letter \"H\" surrounded by a circle. A vision algorithm for robust and real-time landing pad recognition is implemented. Then the 5 DOF pose is estimated from the elliptic projection of the circle by using projective geometry. The remaining geometric ambiguity is resolved by incorporating the gravity vector estimated by the inertial measurement unit (IMU). The last degree of freedom pose, yaw angle of the MAV, is estimated from the ellipse fitted from the letter \"H\". The efficiency of the presented vision system is demonstrated comprehensively by comparing it to ground truth data provided by a tracking system and by using its pose estimates as control inputs to autonomous flights of a quadrotor.", "Direct-lift micro air vehicles have important applications in reconnaissance. In order to conduct persistent surveillance in urban environments, it is essential that these systems can perform autonomous landing maneuvers on elevated surfaces that provide high vantage points without the help of any external sensor and with a fully contained on-board software solution. In this paper, we present a micro air vehicle that uses vision feedback from a single down looking camera to navigate autonomously and detect an elevated landing platform as a surrogate for a roof top. Our method requires no special preparation (labels or markers) of the landing location. Rather, leveraging the planar character of urban structure, the landing platform detection system uses a planar homography decomposition to detect landing targets and produce approach waypoints for autonomous landing. The vehicle control algorithm uses a Kalman filter based approach for pose estimation to fuse visual SLAM (PTAM) position estimates with IMU data to correct for high latency SLAM inputs and to increase the position estimate update rate in order to improve control stability. Scale recovery is achieved using inputs from a sonar altimeter. In experimental runs, we demonstrate a real-time implementation running on-board a micro aerial vehicle that is fully self-contained and independent from any external sensor information. With this method, the vehicle is able to search autonomously for a landing location and perform precision landing maneuvers on the detected targets.", "We present a new method for the robust detection and matching of multiple planes in pairs of images. 
Such planes can serve as stable landmarks for vision-based urban navigation. Our approach starts from SIFT matches and generates multiple local homography hypotheses using the recent J-linkage technique by Toldo and Fusiello, a robust randomized multi-model estimation algorithm. These hypotheses are then globally merged, spatially analyzed, robustly fitted, and checked for stability. When tested on more than 30,000 image pairs taken from panoramic views of a college campus, our method yields no false positives and recovers 72 of the matchable building walls identified by a human, despite significant occlusions and viewpoint changes." ] }
1908.06381
2967107032
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators and the ability of vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with a duration of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights respectively during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To our best knowledge this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
Currently one of the most popular visual fiducial detectors and patterns is the AprilTag algorithm @cite_15 , which is renowned for its speed, robustness and extremely low false positive detection rates. The algorithm was updated by @cite_60 to further improve computational efficiency and to enable the detection of smaller tags. AprilTag has been applied multiple times to MAV landing @cite_31 @cite_0 @cite_46 @cite_48 @cite_47 . Similar visual fiducials are seen around the landing zones of both Amazon and Google delivery drones @cite_45 @cite_29 , as well as those of some surveying drones @cite_57 . Our approach is the same as that of @cite_31 . We improve over @cite_0 @cite_46 by using a bundle of several tags, which increases the landing pad pose measurement accuracy. @cite_48 @cite_47 appear to also use a tag bundle; however, they deal with a moving landing platform and feed individual tag pose detections into a Kalman filter. Our approach is to use a perspective- @math -point solution to obtain a single pose measurement using all tags at once.
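As an illustration of this single-measurement strategy, the sketch below stacks the detected corners of every visible tag in the bundle and solves one perspective-n-point problem with OpenCV. It is a minimal sketch, not the authors' implementation: the layout and intrinsics arguments are hypothetical placeholders for the known pad geometry and camera calibration.

    import numpy as np
    import cv2

    def bundle_pose(detections, layout, K, dist):
        """One landing-pad pose from all detected tags at once.

        detections: {tag_id: (4, 2) array of detected corner pixels}
        layout:     {tag_id: (4, 3) array of corner coordinates in the pad frame}
        K, dist:    camera matrix and distortion coefficients from calibration
        """
        obj_pts, img_pts = [], []
        for tag_id, corners in detections.items():
            if tag_id in layout:              # skip detections not in this bundle
                obj_pts.append(layout[tag_id])
                img_pts.append(corners)
        obj_pts = np.concatenate(obj_pts).astype(np.float64)
        img_pts = np.concatenate(img_pts).astype(np.float64)
        # A single perspective-n-point solve over the stacked corners of all tags
        ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
        return ok, rvec, tvec  # pad pose in the camera frame (Rodrigues rotation)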
{ "cite_N": [ "@cite_31", "@cite_47", "@cite_60", "@cite_46", "@cite_48", "@cite_29", "@cite_0", "@cite_57", "@cite_45", "@cite_15" ], "mid": [ "2771387334", "2565233142", "2132512702", "1864464506" ], "abstract": [ "Although there is an abundance of planar fiducial-marker systems proposed for augmented reality and computer-vision purposes, using them to estimate the pose accurately in robotic applications where collected data are noisy remains a challenge. This is inherently a difficult problem because these fiducial marker systems work solely within the RGB image space and the resolution of cameras on robots is often constrained. As a result, small noise in the image would cause the tag's detection process to produce large pose estimation errors. This paper describes an algorithm that improves the pose estimation accuracy of square fiducial markers in difficult scenes by fusing information from RGB and depth sensors. The algorithm retains the high detection rate and low false positive rate characteristics of fiducial systems while making them much more robust to size, lighting and sensory noise for pose estimation. The improvements make the fiducial tags suitable for robotic tasks requiring high pose accuracy in the real world environment.", "AprilTags and other passive fiducial markers require specialized algorithms to detect markers among other features in a natural scene. The vision processing steps generally dominate the computation time of a tag detection pipeline, so even small improvements in marker detection can translate to a faster tag detection system. We incorporated lessons learned from implementing and supporting the AprilTag system into this improved system. This work describes AprilTag 2, a completely redesigned tag detector that improves robustness and efficiency compared to the original AprilTag system. The tag coding scheme is unchanged, retaining the same robustness to false positives inherent to the coding system. The new detector improves performance with higher detection rates, fewer false positives, and lower computational time. Improved performance on small images allows the use of decimated input images, resulting in dramatic gains in detection speed.", "We present a robot-pose-registration algorithm, which is entirely based on large planar-surface patches extracted from point clouds sampled from a three-dimensional (3-D) sensor. This approach offers an alternative to the traditional point-to-point iterative-closest-point (ICP) algorithm, its point-to-plane variant, as well as newer grid-based algorithms, such as the 3-D normal distribution transform (NDT). The simpler case of known plane correspondences is tackled first by deriving expressions for least-squares pose estimation considering plane-parameter uncertainty computed during plane extraction. Closed-form expressions for covariances are also derived. To round-off the solution, we present a new algorithm, which is called minimally uncertain maximal consensus (MUMC), to determine the unknown plane correspondences by maximizing geometric consistency by minimizing the uncertainty volume in configuration space. Experimental results from three 3-D sensors, viz., Swiss-Ranger, University of South Florida Odetics Laser Detection and Ranging, and an actuated SICK S300, are given. The first two have low fields of view (FOV) and moderate ranges, while the third has a much bigger FOV and range. 
Experimental results show that this approach is not only more robust than point- or grid-based approaches in plane-rich environments, but it is also faster, requires significantly less memory, and offers a less-cluttered planar-patches-based visualization.", "Bourdev and Malik (ICCV 09) introduced a new notion of parts, poselets, constructed to be tightly clustered both in the configuration space of keypoints, as well as in the appearance space of image patches. In this paper we develop a new algorithm for detecting people using poselets. Unlike that work which used 3D annotations of keypoints, we use only 2D annotations which are much easier for naive human annotators. The main algorithmic contribution is in how we use the pattern of poselet activations. Individual poselet activations are noisy, but considering the spatial context of each can provide vital disambiguating information, just as object detection can be improved by considering the detection scores of nearby objects in the scene. This can be done by training a two-layer feed-forward network with weights set using a max margin technique. The refined poselet activations are then clustered into mutually consistent hypotheses where consistency is based on empirically determined spatial keypoint distributions. Finally, bounding boxes are predicted for each person hypothesis and shape masks are aligned to edges in the image to provide a segmentation. To the best of our knowledge, the resulting system is the current best performer on the task of people detection and segmentation with an average precision of 47.8 and 40.5 respectively on PASCAL VOC 2009." ] }
1908.06381
2967107032
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators and the ability of vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with a duration of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights respectively during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To our best knowledge this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
After obtaining raw tag pose estimates from the AprilTag detector, our approach uses a recursive least squares (RLS) filter to obtain a common tag bundle pose estimate. A relevant previous work on this topic is @cite_55 , in which a particle filter is applied to raw tag detections and RLS is compared to RANSAC for bundle pose estimation. Because the demonstrated accuracy improvement is marginal (about 1 cm in mean position and negligible in mean attitude) and RANSAC has disadvantages such as ad hoc threshold settings and a non-deterministic runtime, we prefer RLS for its simplicity. Nevertheless, RANSAC and its more advanced variations @cite_17 can be substituted into our implementation. Other work @cite_25 investigated fusing tag measurements in RGB space with a depth component, with impressive gains in accuracy. One can imagine this approach benefiting landing accuracy at low altitudes; however, mounting a downward-facing stereo camera on a drone raises several concerns, such as weight, vibration effects and space availability.
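For concreteness, the following is a minimal RLS sketch for the position part of the pad pose only; the attitude filtering is omitted, and the noise parameters and forgetting factor are illustrative rather than taken from the paper.

    import numpy as np

    class RLSPosition:
        """Recursive least squares estimate of a static 3-D pad position.

        Each update() fuses one raw bundle position measurement z; R models
        the measurement noise and lam is an optional forgetting factor.
        """
        def __init__(self, x0, P0=None, R=None, lam=1.0):
            self.x = np.asarray(x0, dtype=float)
            self.P = np.eye(3) if P0 is None else P0
            self.R = 0.01 * np.eye(3) if R is None else R
            self.lam = lam

        def update(self, z):
            # Gain trades the new measurement against accumulated confidence
            K = self.P @ np.linalg.inv(self.P + self.R)
            self.x = self.x + K @ (np.asarray(z, dtype=float) - self.x)
            self.P = (np.eye(3) - K) @ self.P / self.lam
            return self.x

    rls = RLSPosition(x0=[0.0, 0.0, 0.0])
    print(rls.update([0.12, -0.05, 2.0]))  # filtered pad position after one fix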
{ "cite_N": [ "@cite_55", "@cite_25", "@cite_17" ], "mid": [ "2771387334", "1989476314", "2739492061", "2964175348" ], "abstract": [ "Although there is an abundance of planar fiducial-marker systems proposed for augmented reality and computer-vision purposes, using them to estimate the pose accurately in robotic applications where collected data are noisy remains a challenge. This is inherently a difficult problem because these fiducial marker systems work solely within the RGB image space and the resolution of cameras on robots is often constrained. As a result, small noise in the image would cause the tag's detection process to produce large pose estimation errors. This paper describes an algorithm that improves the pose estimation accuracy of square fiducial markers in difficult scenes by fusing information from RGB and depth sensors. The algorithm retains the high detection rate and low false positive rate characteristics of fiducial systems while making them much more robust to size, lighting and sensory noise for pose estimation. The improvements make the fiducial tags suitable for robotic tasks requiring high pose accuracy in the real world environment.", "We address the problem of inferring the pose of an RGB-D camera relative to a known 3D scene, given only a single acquired image. Our approach employs a regression forest that is capable of inferring an estimate of each pixel's correspondence to 3D points in the scene's world coordinate frame. The forest uses only simple depth and RGB pixel comparison features, and does not require the computation of feature descriptors. The forest is trained to be capable of predicting correspondences at any pixel, so no interest point detectors are required. The camera pose is inferred using a robust optimization scheme. This starts with an initial set of hypothesized camera poses, constructed by applying the forest at a small fraction of image pixels. Preemptive RANSAC then iterates sampling more pixels at which to evaluate the forest, counting inliers, and refining the hypothesized poses. We evaluate on several varied scenes captured with an RGB-D camera and observe that the proposed technique achieves highly accurate relocalization and substantially out-performs two state of the art baselines.", "We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. 
The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods.", "We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods." ] }
1908.06381
2967107032
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators and the ability of vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with a duration of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights respectively during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To our best knowledge this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
Our approach, however, is not limited to AprilTags, which can be substituted or combined with other markers for specific applications. A vast number of markers is available targeting different use-cases @cite_42 @cite_34 @cite_12 @cite_15 . Patterns include letters, circles, concentric circles and/or polygons, two-dimensional barcodes, and special patterns based on topological @cite_41 , detection-range-maximizing @cite_14 , and blurring and occlusion robustness considerations @cite_53 .
{ "cite_N": [ "@cite_14", "@cite_41", "@cite_53", "@cite_42", "@cite_15", "@cite_34", "@cite_12" ], "mid": [ "2565233142", "2344366145", "2138255641", "2151049637" ], "abstract": [ "AprilTags and other passive fiducial markers require specialized algorithms to detect markers among other features in a natural scene. The vision processing steps generally dominate the computation time of a tag detection pipeline, so even small improvements in marker detection can translate to a faster tag detection system. We incorporated lessons learned from implementing and supporting the AprilTag system into this improved system. This work describes AprilTag 2, a completely redesigned tag detector that improves robustness and efficiency compared to the original AprilTag system. The tag coding scheme is unchanged, retaining the same robustness to false positives inherent to the coding system. The new detector improves performance with higher detection rates, fewer false positives, and lower computational time. Improved performance on small images allows the use of decimated input images, resulting in dramatic gains in detection speed.", "Artificial markers are successfully adopted to solve several vision tasks, ranging from tracking to calibration. While most designs share the same working principles, many specialized approaches exist to address specific application domains. Some are specially crafted to boost pose recovery accuracy. Others are made robust to occlusion or easy to detect with minimal computational resources. The sheer amount of approaches available in recent literature is indeed a statement to the fact that no silver bullet exists. Furthermore, this is also a hint to the level of scholarly interest that still characterizes this research topic. With this paper we try to add a novel option to the offer, by introducing a general purpose fiducial marker which exhibits many useful properties while being easy to implement and fast to detect. The key ideas underlying our approach are three. The first one is to exploit the projective invariance of conics to jointly find the marker and set a reading frame for it. Moreover, the tag identity is assessed by a redundant cyclic coded sequence implemented using the same circular features used for detection. Finally, the specific design and feature organization of the marker are well suited for several practical tasks, ranging from camera calibration to information payload delivery.", "It is believed that certain contour attributes, specifically orientation, curvature and linear extent, provide essential cues for object (shape) recognition. The present experiment examined this hypothesis by comparing stimulus conditions that differentially provided such cues. A spaced array of dots was used to mark the outside boundary of namable objects, and subsets were chosen that contained either contiguous strings of dots or randomly positioned dots. These subsets were briefly and successively displayed using an MTDC information persistence paradigm. Across the major range of temporal separation of the subsets, it was found that contiguity of boundary dots did not provide more effective shape recognition cues. This is at odds with the concept that encoding and recognition of shapes is predicated on the encoding of contour attributes such as orientation, curvature and linear extent.", "We propose a novel approach to both learning and detecting local contour-based representations for mid-level features. 
Our features, called sketch tokens, are learned using supervised mid-level information in the form of hand drawn contours in images. Patches of human generated contours are clustered to form sketch token classes and a random forest classifier is used for efficient detection in novel images. We demonstrate our approach on both top-down and bottom-up tasks. We show state-of-the-art results on the top-down task of contour detection while being over 200x faster than competing methods. We also achieve large improvements in detection accuracy for the bottom-up tasks of pedestrian and object detection as measured on INRIA and PASCAL, respectively. These gains are due to the complementary information provided by sketch tokens to low-level features such as gradient histograms." ] }
1908.06267
2968913877
Most graph neural networks can be described in terms of message passing, vertex update, and readout functions. In this paper, we represent documents as word co-occurrence networks and propose an application of the message passing framework to NLP, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Experiments conducted on 10 standard text classification datasets show that our architectures are competitive with the state-of-the-art. Ablation studies reveal further insights about the impact of the different components on performance. Code and data are publicly available.
There are significant differences between @cite_51 and our work. First, our approach is inductive (note that other GNNs used in inductive settings can be found in @cite_10 @cite_52 ), not transductive. Indeed, while the node classification approach of @cite_51 requires all test documents at training time, our graph classification model is able to perform inference on new, never-seen documents. The downside of representing documents as separate graphs, however, is that we lose the ability to capture corpus-level dependencies. Also, our directed graphs capture word ordering, which is ignored by @cite_51 . Finally, the approach of @cite_51 requires computing the PMI for every word pair in the vocabulary, which may be prohibitive on datasets with very large vocabularies. On the other hand, the complexity of MPAD does not depend on vocabulary size.
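To make the vocabulary-size argument concrete, here is a naive sketch of the corpus-level PMI computation referred to above: counts are accumulated over sliding windows and a score is produced for every co-occurring word pair, so the number of pairs (and hence memory and time) can grow quadratically with the vocabulary. The window size and toy corpus are illustrative only.

    import math
    from collections import Counter
    from itertools import combinations

    def pmi_scores(docs, window=10):
        """PMI over sliding windows; cost grows with the number of word pairs."""
        word_cnt, pair_cnt, n_windows = Counter(), Counter(), 0
        for doc in docs:
            tokens = doc.split()
            for i in range(max(1, len(tokens) - window + 1)):
                win = set(tokens[i:i + window])
                n_windows += 1
                word_cnt.update(win)
                pair_cnt.update(frozenset(p) for p in combinations(sorted(win), 2))
        pmi = {}
        for pair, c in pair_cnt.items():
            w1, w2 = tuple(pair)
            p_pair = c / n_windows
            p_w1, p_w2 = word_cnt[w1] / n_windows, word_cnt[w2] / n_windows
            pmi[(w1, w2)] = math.log(p_pair / (p_w1 * p_w2))
        return pmi

    print(pmi_scores(["graph neural networks pass messages",
                      "graph networks read out"], window=3))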
{ "cite_N": [ "@cite_10", "@cite_51", "@cite_52" ], "mid": [ "2891761261", "2903014193", "2043004216", "2788667846" ], "abstract": [ "Text classification is an important and classical problem in natural language processing. There have been a number of studies that applied convolutional neural networks (convolution on regular grid, e.g., sequence) to classification. However, only a limited number of studies have explored the more flexible graph convolutional neural networks (convolution on non-grid, e.g., arbitrary graph) for the task. In this work, we propose to use graph convolutional networks for text classification. We build a single text graph for a corpus based on word co-occurrence and document word relations, then learn a Text Graph Convolutional Network (Text GCN) for the corpus. Our Text GCN is initialized with one-hot representation for word and document, it then jointly learns the embeddings for both words and documents, as supervised by the known class labels for documents. Our experimental results on multiple benchmark datasets demonstrate that a vanilla Text GCN without any external word embeddings or knowledge outperforms state-of-the-art methods for text classification. On the other hand, Text GCN also learns predictive word and document embeddings. In addition, experimental results show that the improvement of Text GCN over state-of-the-art comparison methods become more prominent as we lower the percentage of training data, suggesting the robustness of Text GCN to less training data in text classification.", "In several natural language tasks, labeled sequences are available in separate domains (say, languages), but the goal is to label sequences with mixed domain (such as code-switched text). Or, we may have available models for labeling whole passages (say, with sentiments), which we would like to exploit toward better position-specific label inference (say, target-dependent sentiment annotation). A key characteristic shared across such tasks is that different positions in a primary instance can benefit from different experts' trained from auxiliary data, but labeled primary instances are scarce, and labeling the best expert for each position entails unacceptable cognitive burden. We propose GITNet, a unified position-sensitive multi-task recurrent neural network (RNN) architecture for such applications. Auxiliary and primary tasks need not share training instances. Auxiliary RNNs are trained over auxiliary instances. A primary instance is also submitted to each auxiliary RNN, but their state sequences are gated and merged into a novel composite state sequence tailored to the primary inference task. Our approach is in sharp contrast to recent multi-task networks like the cross-stitch and sluice network, which do not control state transfer at such fine granularity. We demonstrate the superiority of GIRNet using three applications: sentiment classification of code-switched passages, part-of-speech tagging of code-switched text, and target position-sensitive annotation of sentiment in monolingual passages. In all cases, we establish new state-of-the-art performance beyond recent competitive baselines.", "In this paper, we introduce and compare between two novel approaches, supervised and unsupervised, for identifying the keywords to be used in extractive summarization of text documents. 
Both our approaches are based on the graph-based syntactic representation of text and web documents, which enhances the traditional vector-space model by taking into account some structural document features. In the supervised approach, we train classification algorithms on a summarized collection of documents with the purpose of inducing a keyword identification model. In the unsupervised approach, we run the HITS algorithm on document graphs under the assumption that the top-ranked nodes should represent the document keywords. Our experiments on a collection of benchmark summaries show that given a set of summarized training documents, the supervised classification provides the highest keyword identification accuracy, while the highest F-measure is reached with a simple degree-based ranking. In addition, it is sufficient to perform only the first iteration of HITS rather than running it to its convergence.", "Text classification to a hierarchical taxonomy of topics is a common and practical problem. Traditional approaches simply use bag-of-words and have achieved good results.However, when there are a lot of labels with different topical granularities, bag-of-words representation may not be enough.Deep learning models have been proven to be effective to automatically learn different levels of representations for image data.It is interesting to study what is the best way to represent texts.In this paper, we propose a graph-CNN based deep learning model to first convert texts to graph-of-words, and then use graph convolution operations to convolve the word graph.Graph-of-words representation of texts has the advantage of capturing non-consecutive and long-distance semantics.CNN models have the advantage of learning different level of semantics.To further leverage the hierarchy of labels, we regularize the deep architecture with the dependency among labels.Our results on both RCV1 and NYTimes datasets show that we can significantly improve large-scale hierarchical text classification over traditional hierarchical text classification and existing deep models." ] }
1908.06203
2967453594
External knowledge is often useful for natural language understanding tasks. We introduce a contextual text representation model called Conceptual-Contextual (CC) embeddings, which incorporates structured knowledge into text representations. Unlike entity embedding methods, our approach encodes a knowledge graph into a context model. CC embeddings can be easily reused for a wide range of tasks just like pre-trained language models. Our model effectively encodes the huge UMLS database by leveraging semantic generalizability. Experiments on electronic health records (EHRs) and medical text processing benchmarks showed our model gives a major boost to the performance of supervised medical NLP tasks.
In recent years a number of KB embedding models have been proposed that aim at learning entity embeddings on a knowledge graph @cite_18 . Some models make use of textual information in KBs to improve entity embeddings, such as using textual descriptions of entities as a complement to triplet modeling @cite_30 @cite_5 , or jointly learning structure-based embeddings and description-based embeddings @cite_6 . The latter approach learns an encoder, which is similar to our work, but the encoder is only used to encode entity descriptions. These approaches are mainly concerned with KB representations rather than text processing. Using text also allows for inductive and zero-shot @cite_9 entity representations, a feature shared by our model.
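As a concrete instance of the triplet modeling mentioned above, the sketch below scores triplets in the TransE style, a standard KB embedding model; it is not the CC embedding method itself, and the dimensions and margin are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    dim, n_entities, n_relations = 50, 1000, 20
    E = rng.normal(size=(n_entities, dim))   # entity embeddings
    R = rng.normal(size=(n_relations, dim))  # relation embeddings

    def transe_score(h, r, t):
        """TransE plausibility of triplet (h, r, t): larger = more plausible."""
        return -np.linalg.norm(E[h] + R[r] - E[t])

    def margin_loss(pos, neg, margin=1.0):
        """Margin-based ranking loss for one positive and one corrupted triplet."""
        return max(0.0, margin - transe_score(*pos) + transe_score(*neg))

    print(margin_loss((0, 1, 2), (0, 1, 7)))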
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_9", "@cite_6", "@cite_5" ], "mid": [ "2951077644", "1533230146", "2584683887", "2250807343" ], "abstract": [ "We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (, 2013) and TransE (, 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2 vs. 54.7 by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as \"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-of-the-art confidence-based rule mining approach in mining Horn rules that involve compositional reasoning.", "Abstract: We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (, 2013) and TransE (, 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2 vs. 54.7 by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as \"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-of-the-art confidence-based rule mining approach in mining Horn rules that involve compositional reasoning.", "Traditional approaches to knowledge base completion have been based on symbolic representations. Lowdimensional vector embedding models proposed recently for this task are attractive since they generalize to possibly unlimited sets of relations. A significant drawback of previous embedding models for KB completion is that they merely support reasoning on individual relations (e.g., bornIn(X, Y ) ⇒ nationality(X, Y )). In this work, we develop models for KB completion that support chains of reasoning on paths of any length using compositional vector space models. We construct compositional vector representations for the paths in the KB graph from the semantic vector representations of the binary relations in that path and perform inference directly in the vector space. 
Unlike previous methods, our approach can generalize to paths that are unseen in training and, in a zero-shot setting, predict target relations without supervised training data for that relation.", "We study the problem of jointly embedding a knowledge base and a text corpus. The key issue is the alignment model making sure the vectors of entities, relations and words are in the same space. (2014a) rely on Wikipedia anchors, making the applicable scope quite limited. In this paper we propose a new alignment model based on text descriptions of entities, without dependency on anchors. We require the embedding vector of an entity not only to fit the structured constraints in KBs but also to be equal to the embedding vector computed from the text description. Extensive experiments show that, the proposed approach consistently performs comparably or even better than the method of (2014a), which is encouraging as we do not use any anchor information." ] }
1908.06203
2967453594
External knowledge is often useful for natural language understanding tasks. We introduce a contextual text representation model called Conceptual-Contextual (CC) embeddings, which incorporates structured knowledge into text representations. Unlike entity embedding methods, our approach encodes a knowledge graph into a context model. CC embeddings can be easily reused for a wide range of tasks just like pre-trained language models. Our model effectively encodes the huge UMLS database by leveraging semantic generalizability. Experiments on electronic health records (EHRs) and medical text processing benchmarks showed our model gives a major boost to the performance of supervised medical NLP tasks.
Recently contextual text representation models like ELMo, BERT @cite_10 and OpenAI GPT @cite_8 @cite_28 have pushed the state of the art on various NLP tasks. Language modeling on a giant corpus learns powerful representations, which provides huge benefits to supervised tasks, especially where labeled data is scarce. These models use sequential or attention networks to generate word representations in context. In the biomedical domain, there is also BioBERT @cite_25 , a BERT model trained on PubMed abstracts and PubMed Central full-text articles, offering results competitive with state-of-the-art models on medical text processing tasks.
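Reusing such a pre-trained contextual model downstream typically takes only a few lines; below is a sketch with the Hugging Face transformers API. The checkpoint name is an assumption about the published BioBERT release and should be replaced with whichever weights are actually used.

    from transformers import AutoTokenizer, AutoModel

    # Checkpoint name is assumed; substitute the BioBERT release you actually use.
    name = "dmis-lab/biobert-base-cased-v1.1"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)

    inputs = tokenizer("Metformin is used to treat type 2 diabetes.",
                       return_tensors="pt")
    outputs = model(**inputs)
    # One contextual vector per subword token, reusable as downstream features
    print(outputs.last_hidden_state.shape)  # (1, num_tokens, hidden_size)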
{ "cite_N": [ "@cite_28", "@cite_10", "@cite_25", "@cite_8" ], "mid": [ "2909544278", "2915981933", "2911489562", "2896457183" ], "abstract": [ "Recently, neural models pretrained on a language modeling task, such as ELMo (, 2017), OpenAI GPT (, 2018), and BERT (, 2018), have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. In this paper, we describe a simple re-implementation of BERT for query-based passage re-ranking. Our system is the state of the art on the TREC-CAR dataset and the top entry in the leaderboard of the MS MARCO passage retrieval task, outperforming the previous state of the art by 27 (relative) in MRR@10. The code to reproduce our results is available at this https URL", "Neural network-based representations (\"embeddings\") have dramatically advanced natural language processing (NLP) tasks, including clinical NLP tasks such as concept extraction. Recently, however, more advanced embedding methods and representations (e.g., ELMo, BERT) have further pushed the state-of-the-art in NLP, yet there are no common best practices for how to integrate these representations into clinical tasks. The purpose of this study, then, is to explore the space of possible options in utilizing these new models for clinical concept extraction, including comparing these to traditional word embedding methods (word2vec, GloVe, fastText). Both off-the-shelf, open-domain embeddings and pre-training clinical embeddings from MIMIC-III are evaluated. We explore a battery of embedding methods consisting of traditional word embeddings and contextual embeddings, and compare these on four concept extraction corpora: i2b2 2010, i2b2 2012, SemEval 2014, and SemEval 2015. We also analyze the impact of the pre-training time of a large language model like ELMo or BERT on the extraction performance. Finally, we present an intuitive way to understand the semantic information encoded by contextual embeddings. Contextual embeddings pre-trained on a large clinical corpus achieves new state-of-the-art performances across all concept extraction tasks. The best-performing model outperforms all state-of-the-art methods with respective F1-measures of 90.25, 93.18 (partial), 80.74, and 81.65. We demonstrate the potential of contextual embeddings through the state-of-the-art performance these methods achieve on clinical concept extraction. Additionally, we demonstrate contextual embeddings encode valuable semantic information not accounted for in traditional word representations.", "Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in machine learning, extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, as deep learning models require a large amount of training data, applying deep learning to biomedical text mining is often unsuccessful due to the lack of training data in biomedical fields. Recent researches on training contextualized language representation models on text corpora shed light on the possibility of leveraging a large number of unannotated biomedical text corpora. We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain specific language representation model pre-trained on large-scale biomedical corpora. 
Based on the BERT architecture, BioBERT effectively transfers the knowledge from a large amount of biomedical texts to biomedical text mining models with minimal task-specific architecture modifications. While BERT shows competitive performances with previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.51 absolute improvement), biomedical relation extraction (3.49 absolute improvement), and biomedical question answering (9.61 absolute improvement). We make the pre-trained weights of BioBERT freely available at this https URL, and the source code for fine-tuning BioBERT available at this https URL.", "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7 (4.6 absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)." ] }
1908.05976
2967964179
Graph drawing and visualisation techniques are important tools for the exploratory analysis of complex systems. While these methods are regularly applied to visualise data on complex networks, we increasingly have access to time series data that can be modelled as temporal networks or dynamic graphs. In such dynamic graphs, the temporal ordering of time-stamped edges determines the causal topology of a system, i.e. which nodes can directly and indirectly influence each other via a so-called causal path. While this causal topology is crucial to understand dynamical processes, the role of nodes, or cluster structures, we lack graph drawing techniques that incorporate this information into static visualisations. Addressing this gap, we present a novel dynamic graph drawing algorithm that utilises higher-order graphical models of causal paths in time series data to compute time-aware static graph visualisations. These visualisations combine the simplicity of static graphs with a time-aware layout algorithm that highlights patterns in the causal topology that result from the temporal dynamics of edges.
Having motivated the effects that are due to the arrow of time in dynamic graphs, we review related works on dynamic graph drawing. Using the taxonomy from @cite_35 , we categorize those works into (i) animation techniques, which map the time dimension of dynamic graphs onto a time dimension of the resulting visualisation, and (ii) time-line representations, which map the temporal evolution of dynamic graphs to a spatial dimension. We present methods only insofar as they are relevant to our work, while referring the reader to @cite_9 @cite_35 for a detailed review.
{ "cite_N": [ "@cite_35", "@cite_9" ], "mid": [ "2255903209", "1831799535", "2287322623", "2116856566" ], "abstract": [ "Dynamic graph visualization focuses on the challenge of representing the evolution of relationships between en- tities in readable, scalable, and effective diagrams. This work surveys the growing number of approaches in this discipline. We derive a hierarchical taxonomy of techniques by systematically categorizing and tagging publica- tions. While static graph visualizations are often divided into node-link and matrix representations, we identify the representation of time as the major distinguishing feature for dynamic graph visualizations: either graphs are represented as animated diagrams or as static charts based on a timeline. Evaluations of animated approaches focus on dynamic stability for preserving the viewer's mental map or, in general, compare animated diagrams to timeline-based ones. Finally, we identify and discuss challenges for future research.", "Graph drawing algorithms have classically addressed the layout of static graphs. However, the need to draw evolving or dynamic graphs has brought into question many of the assumptions, conventions and layout methods designed to date. For example, social scientists studying evolving social networks have created a demand for visual representations of graphs changing over time. Two common approaches to represent temporal information in graphs include animation of the network and use of static snapshots of the network at different points in time. Here, we report on two experiments, one in a laboratory environment and another using an asynchronous remote web-based platform, Mechanical Turk, to compare the efficiency of animated displays versus static displays. Four tasks are studied with each visual representation, where two characterise overview level information presentation, and two characterise micro level analytical tasks. For the tasks studied in these experiments and within the limits of the experimental system, the results of this study indicate that static representations are generally more effective particularly in terms of time performance, when compared to fully animated movie representations of dynamic networks.", "Dynamic graph visualization focuses on the challenge of representing the evolution of relationships between entities in readable, scalable and effective diagrams. This work surveys the growing number of approaches in this discipline. We derive a hierarchical taxonomy of techniques by systematically categorizing and tagging publications. While static graph visualizations are often divided into node-link and matrix representations, we identify the representation of time as the major distinguishing feature for dynamic graph visualizations: either graphs are represented as animated diagrams or as static charts based on a timeline. Evaluations of animated approaches focus on dynamic stability for preserving the viewer's mental map or, in general, compare animated diagrams to timeline-based ones. A bibliographic analysis provides insights into the organization and development of the field and its community. Finally, we identify and discuss challenges for future research. 
We also provide feedback from experts, collected with a questionnaire, which gives a broad perspective of these challenges and the current state of the field.", "In this paper, we present the results of a human-computer interaction experiment that compared the performance of the animation of dynamic graphs to the presentation of small multiples and the effect that mental map preservation had on the two conditions. Questions used in the experiment were selected to test both local and global properties of graph evolution over time. The data sets used in this experiment were derived from standard benchmark data sets of the information visualization community. We found that small multiples gave significantly faster performance than animation overall and for each of our five graph comprehension tasks. In addition, small multiples had significantly more errors than animation for the tasks of determining sets of nodes or edges added to the graph during the same timeslice, although a positive time-error correlation coefficient suggests that, in this case, faster responses did not lead to more errors. This result suggests that, for these two tasks, animation is preferable if accuracy is more important than speed. Preserving the mental map under either the animation or the small multiples condition had little influence in terms of error rate and response time." ] }
1908.05976
2967964179
Graph drawing and visualisation techniques are important tools for the exploratory analysis of complex systems. While these methods are regularly applied to visualise data on complex networks, we increasingly have access to time series data that can be modelled as temporal networks or dynamic graphs. In such dynamic graphs, the temporal ordering of time-stamped edges determines the causal topology of a system, i.e. which nodes can directly and indirectly influence each other via a so-called causal path. While this causal topology is crucial to understand dynamical processes, the role of nodes, or cluster structures, we lack graph drawing techniques that incorporate this information into static visualisations. Addressing this gap, we present a novel dynamic graph drawing algorithm that utilises higher-order graphical models of causal paths in time series data to compute time-aware static graph visualisations. These visualisations combine the simplicity of static graphs with a time-aware layout algorithm that highlights patterns in the causal topology that result from the temporal dynamics of edges.
Apart from the issue that animations are cognitively demanding, additional challenges arise in data with high temporal resolution (e.g. seconds or even milliseconds), where a single vertex or edge is likely to be active at each time stamp. The application of static graph drawing techniques to such data requires a coarse-graining of time into time slices, such that each time slice gives rise to a graph snapshot that can be visualised using, e.g., force-directed layout algorithms. As pointed out in @cite_30 , this coarse-graining of time into time slices leads to a loss of information on causal paths, and few dynamic graph drawing techniques have specifically addressed this issue.
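The coarse-graining step described above amounts to binning time-stamped edges, as in the minimal sketch below; the slice width is arbitrary, and the within-slice loss of edge ordering is precisely where causal-path information disappears.

    from collections import defaultdict

    def time_slices(tedges, width):
        """Bin time-stamped edges (u, v, t) into consecutive slices of given width.

        Returns {slice_index: set of (u, v)}; note that the ordering of edges
        *within* a slice is discarded, which destroys causal-path information.
        """
        slices = defaultdict(set)
        t0 = min(t for _, _, t in tedges)
        for u, v, t in tedges:
            slices[(t - t0) // width].add((u, v))
        return dict(slices)

    print(time_slices([("a", "b", 1), ("b", "c", 2), ("c", "a", 9)], width=5))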
{ "cite_N": [ "@cite_30" ], "mid": [ "1831799535", "2116856566", "2066472867", "2899533224" ], "abstract": [ "Graph drawing algorithms have classically addressed the layout of static graphs. However, the need to draw evolving or dynamic graphs has brought into question many of the assumptions, conventions and layout methods designed to date. For example, social scientists studying evolving social networks have created a demand for visual representations of graphs changing over time. Two common approaches to represent temporal information in graphs include animation of the network and use of static snapshots of the network at different points in time. Here, we report on two experiments, one in a laboratory environment and another using an asynchronous remote web-based platform, Mechanical Turk, to compare the efficiency of animated displays versus static displays. Four tasks are studied with each visual representation, where two characterise overview level information presentation, and two characterise micro level analytical tasks. For the tasks studied in these experiments and within the limits of the experimental system, the results of this study indicate that static representations are generally more effective particularly in terms of time performance, when compared to fully animated movie representations of dynamic networks.", "In this paper, we present the results of a human-computer interaction experiment that compared the performance of the animation of dynamic graphs to the presentation of small multiples and the effect that mental map preservation had on the two conditions. Questions used in the experiment were selected to test both local and global properties of graph evolution over time. The data sets used in this experiment were derived from standard benchmark data sets of the information visualization community. We found that small multiples gave significantly faster performance than animation overall and for each of our five graph comprehension tasks. In addition, small multiples had significantly more errors than animation for the tasks of determining sets of nodes or edges added to the graph during the same timeslice, although a positive time-error correlation coefficient suggests that, in this case, faster responses did not lead to more errors. This result suggests that, for these two tasks, animation is preferable if accuracy is more important than speed. Preserving the mental map under either the animation or the small multiples condition had little influence in terms of error rate and response time.", "In this paper, we propose a new framework to perform motion compression for time-dependent 3D geometric data. Temporal coherence in dynamic geometric models can be used to achieve significant compression, thereby leading to efficient storage and transmission of large volumes of 3D data. The displacement of the vertices in the geometric models is computed using the iterative closest point (ICP) algorithm. This forms the core of our motion prediction technique and is used to estimate the transformation between two successive 3D data sets. The motion between frames is coded in terms of a few affine parameters with some added residues. Our motion segmentation approach separates the vertices into two groups. Within the first group, motion can be encoded with a few affine parameters without the need of residues. In the second group, the vertices need further encoding of residual errors. 
Also in this group, for those vertices associated with large residual errors under affine mapping, we encode their motion effectively using Newtonian motion estimates. This automatic segmentation enables our algorithm to he very effective in compressing time-dependent geometric data. Dynamic range data captured from the real world, as well as complex animations created using commercial tools, can be compressed efficiently using this scheme.", "Graph coloring is one of the most famous computational problems with applications in a wide range of areas such as planning and scheduling, resource allocation, and pattern matching. So far coloring problems are mostly studied on static graphs, which often stand in stark contrast to practice where data is inherently dynamic and subject to discrete changes over time. A temporal graph is a graph whose edges are assigned a set of integer time labels, indicating at which discrete time steps the edge is active. In this paper we present a natural temporal extension of the classical graph coloring problem. Given a temporal graph and a natural number ∆, we ask for a coloring sequence for each vertex such that (i) in every sliding time window of ∆ consecutive time steps, in which an edge is active, this edge is properly colored (i.e. its endpoints are assigned two different colors) at least once during that time window, and (ii) the total number of different colors is minimized. This sliding window temporal coloring problem abstractly captures many realistic graph coloring scenarios in which the underlying network changes over time, such as dynamically assigning communication channels to moving agents. We present a thorough investigation of the computational complexity of this temporal coloring problem. More specifically, we prove strong computational hardness results, complemented by efficient exact and approximation algorithms. Some of our algorithms are linear-time fixed-parameter tractable with respect to appropriate parameters, while others are asymptotically almost optimal under the Exponential Time Hypothesis (ETH)." ] }
1908.05976
2967964179
Graph drawing and visualisation techniques are important tools for the exploratory analysis of complex systems. While these methods are regularly applied to visualise data on complex networks, we increasingly have access to time series data that can be modelled as temporal networks or dynamic graphs. In such dynamic graphs, the temporal ordering of time-stamped edges determines the causal topology of a system, i.e. which nodes can directly and indirectly influence each other via a so-called causal path. While this causal topology is crucial to understand dynamical processes, the role of nodes, or cluster structures, we lack graph drawing techniques that incorporate this information into static visualisations. Addressing this gap, we present a novel dynamic graph drawing algorithm that utilises higher-order graphical models of causal paths in time series data to compute time-aware static graph visualisations. These visualisations combine the simplicity of static graphs with a time-aware layout algorithm that highlights patterns in the causal topology that result from the temporal dynamics of edges.
Despite the advances outlined above, recognizing patterns in animation-based visualisations of dynamic graphs remains a considerable cognitive challenge for users. Moreover, it is difficult to embed dynamic graph animations into scholarly articles, books, or posters, which often limits their use in science and engineering to illustrative supplementary material. Addressing these issues, a second line of research focuses on methods to visualize dynamic graphs in terms of time-line representations, which map the time dimension of dynamic graphs to a spatial dimension that can be embedded into a static visualisation. Examples include widely-used directed acyclic, time-unfolded graph representations of dynamic graphs @cite_7 @cite_27 , related time-line layouts @cite_28 @cite_4 @cite_31 , sequences of layered adjacencies @cite_54 , and stacked 3D representations where consecutive time slices are arranged along a third dimension @cite_2 . While recent works have proposed circular representations that scale to larger time series @cite_51 , the application of visualisations that map time to a spatial dimension is limited to a moderately large number of time stamps. Moreover, to the best of our knowledge, the effect of the chronological order of edges on the causal topology has not been considered in static visualisations of dynamic graphs.
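A time-unfolded representation of the kind cited above can be built directly: each vertex is copied once per time stamp, and a time-stamped edge (u, v, t) becomes a directed edge from the copy of u at time t to the copy of v at time t+1, so all edges point forward in time. The sketch below is minimal; the exact delta and edge conventions vary across the cited works.

    def time_unfolded_dag(tedges):
        """Map time-stamped edges (u, v, t) to a static directed acyclic graph.

        Node copies are (vertex, time) pairs; an edge (u, v, t) links
        (u, t) -> (v, t + 1), so the result is acyclic by construction.
        """
        nodes, edges = set(), set()
        for u, v, t in tedges:
            nodes.update({(u, t), (v, t + 1)})
            edges.add(((u, t), (v, t + 1)))
        return nodes, edges

    nodes, edges = time_unfolded_dag([("a", "b", 1), ("b", "c", 2)])
    print(sorted(edges))  # (a,1)->(b,2), (b,2)->(c,3): a causal path a -> b -> c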
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_28", "@cite_54", "@cite_27", "@cite_2", "@cite_31", "@cite_51" ], "mid": [ "1831799535", "2287322623", "2255903209", "2106268337" ], "abstract": [ "Graph drawing algorithms have classically addressed the layout of static graphs. However, the need to draw evolving or dynamic graphs has brought into question many of the assumptions, conventions and layout methods designed to date. For example, social scientists studying evolving social networks have created a demand for visual representations of graphs changing over time. Two common approaches to represent temporal information in graphs include animation of the network and use of static snapshots of the network at different points in time. Here, we report on two experiments, one in a laboratory environment and another using an asynchronous remote web-based platform, Mechanical Turk, to compare the efficiency of animated displays versus static displays. Four tasks are studied with each visual representation, where two characterise overview level information presentation, and two characterise micro level analytical tasks. For the tasks studied in these experiments and within the limits of the experimental system, the results of this study indicate that static representations are generally more effective particularly in terms of time performance, when compared to fully animated movie representations of dynamic networks.", "Dynamic graph visualization focuses on the challenge of representing the evolution of relationships between entities in readable, scalable and effective diagrams. This work surveys the growing number of approaches in this discipline. We derive a hierarchical taxonomy of techniques by systematically categorizing and tagging publications. While static graph visualizations are often divided into node-link and matrix representations, we identify the representation of time as the major distinguishing feature for dynamic graph visualizations: either graphs are represented as animated diagrams or as static charts based on a timeline. Evaluations of animated approaches focus on dynamic stability for preserving the viewer's mental map or, in general, compare animated diagrams to timeline-based ones. A bibliographic analysis provides insights into the organization and development of the field and its community. Finally, we identify and discuss challenges for future research. We also provide feedback from experts, collected with a questionnaire, which gives a broad perspective of these challenges and the current state of the field.", "Dynamic graph visualization focuses on the challenge of representing the evolution of relationships between en- tities in readable, scalable, and effective diagrams. This work surveys the growing number of approaches in this discipline. We derive a hierarchical taxonomy of techniques by systematically categorizing and tagging publica- tions. While static graph visualizations are often divided into node-link and matrix representations, we identify the representation of time as the major distinguishing feature for dynamic graph visualizations: either graphs are represented as animated diagrams or as static charts based on a timeline. Evaluations of animated approaches focus on dynamic stability for preserving the viewer's mental map or, in general, compare animated diagrams to timeline-based ones. Finally, we identify and discuss challenges for future research.", "We present a novel dynamic graph visualization technique based on node-link diagrams. 
The graphs are drawn side-by-side from left to right as a sequence of narrow stripes that are placed perpendicular to the horizontal time line. The hierarchically organized vertices of the graphs are arranged on vertical, parallel lines that bound the stripes; directed edges connect these vertices from left to right. To address massive overplotting of edges in huge graphs, we employ a splatting approach that transforms the edges to a pixel-based scalar field. This field represents the edge densities in a scalable way and is depicted by non-linear color mapping. The visualization method is complemented by interaction techniques that support data exploration by aggregation, filtering, brushing, and selective data zooming. Furthermore, we formalize graph patterns so that they can be interactively highlighted on demand. A case study on software releases explores the evolution of call graphs extracted from the JUnit open source software project. In a second application, we demonstrate the scalability of our approach by applying it to a bibliography dataset containing more than 1.5 million paper titles from 60 years of research history producing a vast amount of relations between title words." ] }
1908.05947
2968302391
Style is ubiquitous in our daily language use, but what is language style to learning machines? In this paper, by exploiting the second-order statistics of semantic vectors of different corpora, we present a novel perspective on this question via the style matrix, i.e. the covariance matrix of semantic vectors, and explain for the first time how Sequence-to-Sequence models innately encode style information in their semantic vectors. As an application, we devise a learning-free text style transfer algorithm, which explicitly constructs a pair of transfer operators from the style matrices for style transfer. Moreover, our algorithm is observed to be flexible enough to transfer out-of-domain sentences. Extensive experimental evidence justifies the informativeness of the style matrix and the competitive performance of our proposed style transfer algorithm with state-of-the-art methods.
Similarly, image style transfer aims to reconstruct an image with some characteristics of the style image while preserving its content. The groundbreaking works of @cite_0 @cite_18 show that the Gram matrices (or covariance matrices) of the feature maps, extracted by a frozen convolutional neural network trained on a classification task, are able to capture the visual style of an image. Numerous works have since been developed around matching the Gram matrices, and @cite_16 theoretically proves that doing so is equivalent to minimizing the Maximum Mean Discrepancy with a second-order polynomial kernel. Since @cite_18 synthesizes the target image through iterative optimization, which is inefficient and time-consuming, @cite_6 @cite_11 propose to train one feed-forward network per style, significantly speeding up image reconstruction. @cite_2 @cite_14 further improve flexibility and efficiency by incorporating multiple styles into a single network. In order to reconstruct images in arbitrary styles (unseen at the training stage) with a single network forward pass, @cite_1 proposes whitening and coloring transformations that directly match the feature covariance of the content image to that of the style image at intermediate layers of a pre-trained auto-encoder network.
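For concreteness, the second-order statistics at the heart of these methods fit in a few lines of NumPy. The sketch below (shapes and names are illustrative assumptions) computes a Gram-matrix style descriptor and the whitening-coloring step that matches a content feature covariance to a style feature covariance, in the spirit of @cite_1 rather than as a reproduction of it:

```python
import numpy as np

def gram_matrix(features):
    """Style descriptor: C x C Gram matrix of a (C, H, W) feature map."""
    C, H, W = features.shape
    F = features.reshape(C, H * W)
    return F @ F.T / (H * W)

def whitening_coloring(content, style, eps=1e-5):
    """Match the channel covariance of `content` (C, N) to that of `style`."""
    cc = content - content.mean(axis=1, keepdims=True)
    cs = style - style.mean(axis=1, keepdims=True)
    cov_c = cc @ cc.T / (cc.shape[1] - 1) + eps * np.eye(cc.shape[0])
    cov_s = cs @ cs.T / (cs.shape[1] - 1) + eps * np.eye(cs.shape[0])
    dc, Ec = np.linalg.eigh(cov_c)   # whitening: strip the content covariance
    ds, Es = np.linalg.eigh(cov_s)   # coloring: impose the style covariance
    whiten = Ec @ np.diag(dc ** -0.5) @ Ec.T
    color = Es @ np.diag(ds ** 0.5) @ Es.T
    return color @ (whiten @ cc) + style.mean(axis=1, keepdims=True)
```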
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_1", "@cite_6", "@cite_0", "@cite_2", "@cite_16", "@cite_11" ], "mid": [ "2740729727", "2962772087", "2949848065", "2740546229" ], "abstract": [ "Neural style transfer has recently received significant attention and demonstrated amazing results. An efficient solution proposed by trains feed-forward convolutional neural networks by defining and optimizing perceptual loss functions. Such methods are typically based on high-level features extracted from pre-trained neural networks, where the loss functions contain two components: style loss and content loss. However, such pre-trained networks are originally designed for object recognition, and hence the high-level features often focus on the primary target and neglect other details. As a result, when input images contain multiple objects potentially at different depths, the resulting images are often unsatisfactory because image layout is destroyed and the boundary between the foreground and background as well as different objects becomes obscured. We observe that the depth map effectively reflects the spatial distribution in an image and preserving the depth map of the content image after stylization helps produce an image that preserves its semantic content. In this paper, we introduce a novel approach for neural style transfer that integrates depth preservation as additional loss, preserving overall image layout while performing style transfer.", "Universal style transfer aims to transfer arbitrary visual styles to content images. Existing feed-forward based methods, while enjoying the inference efficiency, are mainly limited by inability of generalizing to unseen styles or compromised visual quality. In this paper, we present a simple yet effective method that tackles these limitations without training on any pre-defined styles. The key ingredient of our method is a pair of feature transforms, whitening and coloring, that are embedded to an image reconstruction network. The whitening and coloring transforms reflect direct matching of feature covariance of the content image to a given style image, which shares similar spirits with the optimization of Gram matrix based cost in neural style transfer. We demonstrate the effectiveness of our algorithm by generating high-quality stylized images with comparisons to a number of recent methods. We also analyze our method by visualizing the whitened features and synthesizing textures by simple feature coloring.", "Despite the rapid progress in style transfer, existing approaches using feed-forward generative network for multi-style or arbitrary-style transfer are usually compromised of image quality and model flexibility. We find it is fundamentally difficult to achieve comprehensive style modeling using 1-dimensional style embedding. Motivated by this, we introduce CoMatch Layer that learns to match the second order feature statistics with the target styles. With the CoMatch Layer, we build a Multi-style Generative Network (MSG-Net), which achieves real-time performance. We also employ an specific strategy of upsampled convolution which avoids checkerboard artifacts caused by fractionally-strided convolution. Our method has achieved superior image quality comparing to state-of-the-art approaches. The proposed MSG-Net as a general approach for real-time style transfer is compatible with most existing techniques including content-style interpolation, color-preserving, spatial control and brush stroke size control. 
MSG-Net is the first to achieve real-time brush-size control in a purely feed-forward manner for style transfer. Our implementations and pre-trained models for Torch, PyTorch and MXNet frameworks will be publicly available.", "Recent research endeavors have shown the potential of using feed-forward convolutional neural networks to accomplish fast style transfer for images. In this work, we take one step further to explore the possibility of exploiting a feed-forward network to perform style transfer for videos and simultaneously maintain temporal consistency among stylized video frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, a hybrid loss is proposed to capitalize on the content information of input frames, the style information of a given style image, and the temporal information of consecutive frames. To calculate the temporal loss during the training stage, a novel two-frame synergic training mechanism is proposed. Compared with directly applying an existing image style transfer method to videos, our proposed method employs the trained network to yield temporally consistent stylized videos which are much more visually pleasant. In contrast to the prior video style transfer method which relies on time-consuming optimization on the fly, our method runs in real time while generating competitive visual results." ] }
1908.05750
2968998393
3D Human Motion Indexing and Retrieval is an interesting problem due to the rise of several data-driven applications aimed at analyzing and/or re-utilizing 3D human skeletal data, such as data-driven animation, analysis of sports bio-mechanics, human surveillance etc. Spatio-temporal articulations of humans, noisy/missing data, different speeds of the same motion etc. make it challenging, and several of the existing state-of-the-art methods use hand-crafted features along with optimization-based or histogram-based comparison in order to perform retrieval. Further, they demonstrate it only for very small datasets and few classes. We make a case for using a learned representation that should recognize the motion as well as enforce a discriminative ranking. To that end, we propose a 3D human motion descriptor learned using a deep network. Our learned embedding is generalizable and applicable to real-world data - addressing the aforementioned challenges and further enabling sub-motion searching in its embedding space using another network. Our model exploits the inter-class similarity using trajectory cues, and performs far better in a self-supervised setting. State-of-the-art results on all these fronts are shown on two large-scale 3D human motion datasets - NTU RGB+D and HDM05.
Most approaches to 3D human motion retrieval have focused on developing hand-crafted features to represent skeleton sequences @cite_17 @cite_19 @cite_4 . In this section, we broadly categorize them by the way in which they engineer their descriptors.
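As an illustration of the kind of hand-crafted descriptor this literature builds on (a generic example, not one of the cited features), the sketch below computes per-frame pairwise joint distances, which are invariant to rotation and translation of the skeleton:

```python
import numpy as np

def pairwise_joint_distances(sequence):
    """sequence: (T, J, 3) array of T frames of J 3-D joints.
    Returns (T, J*(J-1)//2) pairwise joint distances per frame."""
    T, J, _ = sequence.shape
    rows, cols = np.triu_indices(J, k=1)                      # joint pairs
    diffs = sequence[:, :, None, :] - sequence[:, None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)                    # (T, J, J)
    return dists[:, rows, cols]

# Example: a 60-frame clip of a 25-joint skeleton (NTU RGB+D layout).
clip = np.random.randn(60, 25, 3)
descriptor = pairwise_joint_distances(clip)                   # (60, 300)
```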
{ "cite_N": [ "@cite_19", "@cite_4", "@cite_17" ], "mid": [ "2021150171", "1537787403", "2048821851", "2467634805" ], "abstract": [ "Recent advances on human motion analysis have made the extraction of human skeleton structure feasible, even from single depth images. This structure has been proven quite informative for discriminating actions in a recognition scenario. In this context, we propose a local skeleton descriptor that encodes the relative position of joint quadruples. Such a coding implies a similarity normalisation transform that leads to a compact (6D) view-invariant skelet al feature, referred to as skelet al quad. Further, the use of a Fisher kernel representation is suggested to describe the skelet al quads contained in a (sub)action. A Gaussian mixture model is learnt from training data, so that the generation of any set of quads is encoded by its Fisher vector. Finally, a multi-level representation of Fisher vectors leads to an action description that roughly carries the order of sub-action within each action sequence. Efficient classification is here achieved by linear SVMs. The proposed action representation is tested on widely used datasets, MSRAction3D and HDM05. The experimental evaluation shows that the proposed method outperforms state-of-the-art algorithms that rely only on joints, while it competes with methods that combine joints with extra cues.", "This paper introduces a new model-based approach for simultaneously reconstructing 3D human motion and full-body skelet al size from a small set of 2D image features tracked from uncalibrated monocular video sequences. The key idea of our approach is to construct a generative human motion model from a large set of preprocessed human motion examples to constrain the solution space of monocular human motion tracking. In addition, we learn a generative skeleton model from prerecorded human skeleton data to reduce ambiguity of the human skeleton reconstruction. We formulate the reconstruction process in a nonlinear optimization framework by continuously deforming the generative models to best match a small set of 2D image features tracked from a monocular video sequence. We evaluate the performance of our system by testing the algorithm on a variety of uncalibrated monocular video sequences.", "Recently introduced cost-effective depth sensors coupled with the real-time skeleton estimation algorithm of [16] have generated a renewed interest in skeleton-based human action recognition. Most of the existing skeleton-based approaches use either the joint locations or the joint angles to represent a human skeleton. In this paper, we propose a new skelet al representation that explicitly models the 3D geometric relationships between various body parts using rotations and translations in 3D space. Since 3D rigid body motions are members of the special Euclidean group SE(3), the proposed skelet al representation lies in the Lie group SE(3)×…×SE(3), which is a curved manifold. Using the proposed representation, human actions can be modeled as curves in this Lie group. Since classification of curves in this Lie group is not an easy task, we map the action curves from the Lie group to its Lie algebra, which is a vector space. We then perform classification using a combination of dynamic time warping, Fourier temporal pyramid representation and linear SVM. Experimental results on three action datasets show that the proposed representation performs better than many existing skelet al representations. 
The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.", "In this paper, a new skeleton-based approach is proposed for 3D hand gesture recognition. Specifically, we exploit the geometric shape of the hand to extract an effective descriptor from hand skeleton connected joints returned by the Intel RealSense depth camera. Each descriptor is then encoded by a Fisher Vector representation obtained using a Gaussian Mixture Model. A multi-level representation of Fisher Vectors and other skeleton-based geometric features is guaranteed by a temporal pyramid to obtain the final feature vector, used later to achieve the classification by a linear SVM classifier. The proposed approach is evaluated on a challenging hand gesture dataset containing 14 gestures, performed by 20 participants performing the same gesture with two different numbers of fingers. Experimental results show that our skeleton-based approach consistently achieves superior performance over a depth-based approach." ] }
1908.05750
2968998393
3D Human Motion Indexing and Retrieval is an interesting problem due to the rise of several data-driven applications aimed at analyzing and/or re-utilizing 3D human skeletal data, such as data-driven animation, analysis of sports bio-mechanics, human surveillance etc. Spatio-temporal articulations of humans, noisy/missing data, different speeds of the same motion etc. make it challenging, and several of the existing state-of-the-art methods use hand-crafted features along with optimization-based or histogram-based comparison in order to perform retrieval. Further, they demonstrate it only for very small datasets and few classes. We make a case for using a learned representation that should recognize the motion as well as enforce a discriminative ranking. To that end, we propose a 3D human motion descriptor learned using a deep network. Our learned embedding is generalizable and applicable to real-world data - addressing the aforementioned challenges and further enabling sub-motion searching in its embedding space using another network. Our model exploits the inter-class similarity using trajectory cues, and performs far better in a self-supervised setting. State-of-the-art results on all these fronts are shown on two large-scale 3D human motion datasets - NTU RGB+D and HDM05.
For the task of retrieval, @cite_2 proposed a simple auto-encoder that captures high-level features; however, their model does not explicitly use a temporal construct for motion data. Learnable representations of 3D motion data have primarily been used for other tasks: @cite_18 @cite_14 are a few among many who used deep learning models for 3D motion recognition. Similarly, @cite_9 adopts a unidirectional LSTM to encode the skeleton frames within the hidden network states and learns which subsequences of encoded frames belong to the specified action classes.
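A minimal sketch of this family of models, assuming PyTorch and illustrative layer sizes (not the cited architectures), encodes a skeleton sequence with an LSTM and projects the final hidden state to a normalized embedding, so that retrieval reduces to cosine similarity in the embedding space:

```python
import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    """LSTM encoder mapping a skeleton sequence to a fixed-size embedding."""
    def __init__(self, num_joints=25, hidden=256, embed=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_joints * 3,
                            hidden_size=hidden, batch_first=True)
        self.proj = nn.Linear(hidden, embed)

    def forward(self, x):                    # x: (batch, T, J, 3)
        b, t = x.shape[:2]
        out, _ = self.lstm(x.reshape(b, t, -1))
        z = self.proj(out[:, -1])            # last hidden state summarizes the clip
        return nn.functional.normalize(z, dim=-1)

encoder = MotionEncoder()
clips = torch.randn(4, 60, 25, 3)            # 4 clips, 60 frames, 25 joints
embeddings = encoder(clips)                  # (4, 128), unit-norm rows
```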
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_14", "@cite_2" ], "mid": [ "2950007497", "2057232399", "2606517404", "2175030374" ], "abstract": [ "Motion capture data digitally represent human movements by sequences of 3D skeleton configurations. Such spatio-temporal data, often recorded in the stream-based nature, need to be efficiently processed to detect high-interest actions, for example, in human-computer interaction to understand hand gestures in real time. Alternatively, automatically annotated parts of a continuous stream can be persistently stored to become searchable, and thus reusable for future retrieval or pattern mining. In this paper, we focus on multi-label detection of user-specified actions in unsegmented sequences as well as continuous streams. In particular, we utilize the current advances in recurrent neural networks and adopt a unidirectional LSTM model to effectively encode the skeleton frames within the hidden network states. The model learns what subsequences of encoded frames belong to the specified action classes within the training phase. The learned representations of classes are then employed within the annotation phase to infer the probability that an incoming skeleton frame belongs to a given action class. The computed probabilities are finally compared against a learned threshold to automatically determine the beginnings and endings of actions. To further enhance the annotation accuracy, we utilize a bidirectional LSTM model to estimate class probabilities by considering not only the past frames but also the future ones. We extensively evaluate both the models on the three use cases of real-time stream annotation, offline annotation of long sequences, and early action detection and prediction. The experiments demonstrate that our models outperform the state of the art in effectiveness and are at least one order of magnitude more efficient, being able to annotate 10 k frames per second.", "We describe a new approach to transfer knowledge across views for action recognition by using examples from a large collection of unlabelled mocap data. We achieve this by directly matching purely motion based features from videos to mocap. Our approach recovers 3D pose sequences without performing any body part tracking. We use these matches to generate multiple motion projections and thus add view invariance to our action recognition model. We also introduce a closed form solution for approximate non-linear Circulant Temporal Encoding (nCTE), which allows us to efficiently perform the matches in the frequency domain. We test our approach on the challenging unsupervised modality of the IXMAS dataset, and use publicly available motion capture data for matching. Without any additional annotation effort, we are able to significantly outperform the current state of the art.", "We propose a new architecture for the learning of predictive spatio-temporal motion models from data alone. Our approach, dubbed the Dropout Autoencoder LSTM (DAELSTM), is capable of synthesizing natural looking motion sequences over long-time horizons1 without catastrophic drift or motion degradation. The model consists of two components, a 3-layer recurrent neural network to model temporal aspects and a novel autoencoder that is trained to implicitly recover the spatial structure of the human skeleton via randomly removing information about joints during training. 
This Dropout Autoencoder (DAE) is then used to filter each predicted pose by a 3-layer LSTM network, reducing accumulation of correlated error and hence drift over time. Furthermore, to alleviate the insufficiency of the commonly used quality metric, we propose a new evaluation protocol using action classifiers to assess the quality of synthetic motion sequences. The proposed protocol can be used to assess quality of generated sequences of arbitrary length. Finally, we evaluate our proposed method on two of the largest motion-capture datasets available and show that our model outperforms the state-of-the-art techniques on a variety of actions, including cyclic and acyclic motion, and that it can produce natural looking sequences over longer time horizons than previous methods.", "We describe a new spatio-temporal video autoencoder, based on a classic spatial image autoencoder and a novel nested temporal autoencoder. The temporal encoder is represented by a differentiable visual memory composed of convolutional long short-term memory (LSTM) cells that integrate changes over time. Here we target motion changes and use as temporal decoder a robust optical flow prediction module together with an image sampler serving as built-in feedback loop. The architecture is end-to-end differentiable. At each time step, the system receives as input a video frame, predicts the optical flow based on the current observation and the LSTM memory state as a dense transformation map, and applies it to the current frame to generate the next frame. By minimising the reconstruction error between the predicted next frame and the corresponding ground truth next frame, we train the whole system to extract features useful for motion estimation without any supervision effort. We present one direct application of the proposed framework in weakly-supervised semantic segmentation of videos through label propagation using optical flow." ] }
1908.05433
2967651842
We study the fair allocation of indivisible goods under the assumption that the goods form an undirected graph and each agent must receive a connected subgraph. Our focus is on well-studied fairness notions including envy-freeness and maximin share fairness. We establish graph-specific maximin share guarantees, which are tight for large classes of graphs in the case of two agents and for paths and stars in the general case. Unlike in previous work, our guarantees are with respect to the complete-graph maximin share, which allows us to compare possible guarantees for different graphs. For instance, we show that for biconnected graphs it is possible to obtain at least @math of the maximin share, while for the remaining graphs the guarantee is at most @math . In addition, we determine the optimal relaxation of envy-freeness that can be obtained with each graph for two agents, and characterize the set of trees and complete bipartite graphs that always admit an allocation satisfying envy-freeness up to one good (EF1) for three agents. Our work demonstrates several applications of graph-theoretical tools and concepts to fair division problems.
Fair allocation of indivisible goods has received considerable attention from the research community, especially in the last few years. We refer to surveys by @cite_7 , @cite_10 , and @cite_14 for an overview of recent developments in the area.
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_7" ], "mid": [ "2403056628", "2619200939", "2949883232", "2949121341" ], "abstract": [ "The fair division of indivisible goods has long been an important topic in economics and, more recently, computer science. We investigate the existence of envyfree allocations of indivisible goods, that is, allocations where each player values her own allocated set of goods at least as highly as any other player's allocated set of goods. Under additive valuations, we show that even when the number of goods is larger than the number of agents by a linear fraction, envy-free allocations are unlikely to exist.We then show that when the number of goods is larger by a logarithmic factor, such allocations exist with high probability. We support these results experimentally and show that the asymptotic behavior of the theory holds even when the number of goods and agents is quite small. We demonstrate that there is a sharp phase transition from nonexistence to existence of envy-free allocations, and that on average the computational problem is hardest at that transition.", "We generalize the classic problem of fairly allocating indivisible goods to the problem of fair public decision making, in which a decision must be made on several social issues simultaneously, and, unlike the classic setting, a decision can provide positive utility to multiple players. We extend the popular fairness notion of proportionality (which is not guaranteeable) to our more general setting, and introduce three novel relaxations --- proportionality up to one issue, round robin share, and pessimistic proportional share --- that are also interesting in the classic goods allocation setting. We show that the Maximum Nash Welfare solution, which is known to satisfy appealing fairness properties in the classic setting, satisfies or approximates all three relaxations in our framework. We also provide polynomial time algorithms and hardness results for finding allocations satisfying these axioms, with or without insisting on Pareto optimality.", "The paper considers fair allocation of indivisible nondisposable items that generate disutility (chores). We assume that these items are placed in the vertices of a graph and each agent’s share has to form a connected subgraph of this graph. Although a similar model has been investigated before for goods, we show that the goods and chores settings are inherently different. In particular, it is impossible to derive the solution of the chores instance from the solution of its naturally associated fair division instance. We consider three common fair division solution concepts, namely proportionality, envy-freeness and equitability, and two individual disutility aggregation functions: additive and maximum based. We show that deciding the existence of a fair allocation is hard even if the underlying graph is a path or a star. We also present some efficiently solvable special cases for these graph topologies.", "We study the problem of fair allocation for indivisible goods. We use the the maxmin share paradigm introduced by Budish as a measure for fairness. Procacciafirst (EC'14) were first to investigate this fundamental problem in the additive setting. In contrast to what real-world experiments suggest, they show that a maxmin guarantee (1- @math allocation) is not always possible even when the number of agents is limited to 3. While the existence of an approximation solution (e.g. 
a @math - @math allocation) is quite straightforward, improving the guarantee becomes subtler for larger constants. Procaccia and Wang provide a proof for existence of a @math - @math allocation and leave the question open for better guarantees. Our main contribution is an answer to the above question. We improve the result of Procaccia and Wang to a @math factor in the additive setting. The main idea for our @math - @math allocation method is clustering the agents. To this end, we introduce three notions and techniques, namely reducibility, matching allocation, and cycle-envy-freeness, and prove the approximation guarantee of our algorithm via non-trivial applications of these techniques. Our analysis involves coloring and double counting arguments that might be of independent interest. One major shortcoming of the current studies on fair allocation is the additivity assumption on the valuations. We alleviate this by extending our results to the case of submodular, fractionally subadditive, and subadditive settings. More precisely, we give constant approximation guarantees for submodular and XOS agents, and a logarithmic approximation for the case of subadditive agents. Furthermore, we complement our results by providing close upper bounds for each class of valuation functions. Finally, we present algorithms to find such allocations for additive, submodular, and XOS settings in polynomial time." ] }
1908.05433
2967651842
We study the fair allocation of indivisible goods under the assumption that the goods form an undirected graph and each agent must receive a connected subgraph. Our focus is on well-studied fairness notions including envy-freeness and maximin share fairness. We establish graph-specific maximin share guarantees, which are tight for large classes of graphs in the case of two agents and for paths and stars in the general case. Unlike in previous work, our guarantees are with respect to the complete-graph maximin share, which allows us to compare possible guarantees for different graphs. For instance, we show that for biconnected graphs it is possible to obtain at least @math of the maximin share, while for the remaining graphs the guarantee is at most @math . In addition, we determine the optimal relaxation of envy-freeness that can be obtained with each graph for two agents, and characterize the set of trees and complete bipartite graphs that always admit an allocation satisfying envy-freeness up to one good (EF1) for three agents. Our work demonstrates several applications of graph-theoretical tools and concepts to fair division problems.
The papers most closely related to ours are the two that we mentioned, by @cite_12 and @cite_5 . The former showed that for any number of agents with additive valuations, there always exists an allocation that gives every agent her maximin share when the graph is a tree, but not necessarily when the graph is a cycle. It is important to note that their maximin share notion corresponds to our G-MMS notion and is defined based on the graph, with only connected allocations with respect to that graph taken into account in an agent's calculation. As one consequence, even though a cycle permits strictly more connected allocations than a path, it offers a weaker guarantee in terms of the G-MMS. Our approach of considering the (complete-graph) MMS allows us to directly compare the guarantees that can be obtained for different graphs.
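To make the quantity at stake concrete, the brute-force sketch below computes an agent's (complete-graph) maximin share under additive valuations; it is exponential in the number of goods and purely illustrative. Restricting the enumerated partitions to bundles that are connected in a given graph would yield the G-MMS variant instead:

```python
from itertools import product

def maximin_share(values, n):
    """Max over all n-bundle partitions of the minimum bundle value."""
    best = 0
    for assignment in product(range(n), repeat=len(values)):
        bundles = [0] * n
        for good, bundle in enumerate(assignment):
            bundles[bundle] += values[good]
        best = max(best, min(bundles))
    return best

# 5 goods valued 6, 5, 4, 3, 3 by this agent, split among 3 agents:
print(maximin_share([6, 5, 4, 3, 3], 3))   # 6, e.g. {6}, {5, 3}, {4, 3}
```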
{ "cite_N": [ "@cite_5", "@cite_12" ], "mid": [ "2592030152", "2949121341", "1137385233", "1542025417" ], "abstract": [ "We consider the problem of dividing indivisible goods fairly among n agents who have additive and submodular valuations for the goods. Our fairness guarantees are in terms of the maximin share, that is defined to be the maximum value that an agent can ensure for herself, if she were to partition the goods into n bundles, and then receive a minimum valued bundle. Since maximin fair allocations (i.e., allocations in which each agent gets at least her maximin share) do not always exist, prior work has focussed on approximation results that aim to find allocations in which the value of the bundle allocated to each agent is (multiplicatively) as close to her maximin share as possible. In particular, Procaccia and Wang (2014) along with (2015) have shown that under additive valuations a 2 3-approximate maximin fair allocation always exists and can be found in polynomial time. We complement these results by developing a simple and efficient algorithm that achieves the same approximation guarantee. Furthermore, we initiate the study of approximate maximin fair division under submodular valuations. Specifically, we show that when the valuations of the agents are nonnegative, monotone, and submodular, then a 1 10-approximate maximin fair allocation is guaranteed to exist. In fact, we show that such an allocation can be efficiently found by using a simple round-robin algorithm. A technical contribution of the paper is to analyze the performance of this combinatorial algorithm by employing the concept of multilinear extensions.", "We study the problem of fair allocation for indivisible goods. We use the the maxmin share paradigm introduced by Budish as a measure for fairness. Procacciafirst (EC'14) were first to investigate this fundamental problem in the additive setting. In contrast to what real-world experiments suggest, they show that a maxmin guarantee (1- @math allocation) is not always possible even when the number of agents is limited to 3. While the existence of an approximation solution (e.g. a @math - @math allocation) is quite straightforward, improving the guarantee becomes subtler for larger constants. Procaccia provide a proof for existence of a @math - @math allocation and leave the question open for better guarantees. Our main contribution is an answer to the above question. We improve the result of ! to a @math factor in the additive setting. The main idea for our @math - @math allocation method is clustering the agents. To this end, we introduce three notions and techniques, namely reducibility, matching allocation, and cycle-envy-freeness, and prove the approximation guarantee of our algorithm via non-trivial applications of these techniques. Our analysis involves coloring and double counting arguments that might be of independent interest. One major shortcoming of the current studies on fair allocation is the additivity assumption on the valuations. We alleviate this by extending our results to the case of submodular, fractionally subadditive, and subadditive settings. More precisely, we give constant approximation guarantees for submodular and XOS agents, and a logarithmic approximation for the case of subadditive agents. Furthermore, we complement our results by providing close upper bounds for each class of valuation functions. 
Finally, we present algorithms to find such allocations for additive, submodular, and XOS settings in polynomial time.", "The fairness notion of maximin share (MMS) guarantee underlies a deployed algorithm for allocating indivisible goods under additive valuations. Our goal is to understand when we can expect to be able to give each player his MMS guarantee. Previous work has shown that such an MMS allocation may not exist, but the counterexample requires a number of goods that is exponential in the number of players; we give a new construction that uses only a linear number of goods. On the positive side, we formalize the intuition that these counterexamples are very delicate by designing an algorithm that provably finds an MMS allocation with high probability when valuations are drawn at random.", "We study the problem of computing maximin share allocations, a recently introduced fairness notion. Given a set of n agents and a set of goods, the maximin share of an agent is the best she can guarantee to herself, if she is allowed to partition the goods in any way she prefers, into n bundles, and then receive her least desirable bundle. The objective then is to find a partition, where each agent is guaranteed her maximin share. Such allocations do not always exist, hence we resort to approximation algorithms. Our main result is a 2/3-approximation that runs in polynomial time for any number of agents and goods. This improves upon the algorithm of Procaccia and Wang (2014), which is also a 2/3-approximation but runs in polynomial time only for a constant number of agents. To achieve this, we redesign certain parts of the algorithm in Procaccia and Wang (2014), exploiting the construction of carefully selected matchings in a bipartite graph representation of the problem. Furthermore, motivated by the apparent difficulty in establishing lower bounds, we undertake a probabilistic analysis. We prove that in randomly generated instances, maximin share allocations exist with high probability. This can be seen as a justification of previously reported experimental evidence. Finally, we provide further positive results for two special cases arising from previous works. The first is the intriguing case of three agents, where we provide an improved 7/8-approximation. The second case is when all item values belong to {0, 1, 2}, where we obtain an exact algorithm." ] }
1908.05433
2967651842
We study the fair allocation of indivisible goods under the assumption that the goods form an undirected graph and each agent must receive a connected subgraph. Our focus is on well-studied fairness notions including envy-freeness and maximin share fairness. We establish graph-specific maximin share guarantees, which are tight for large classes of graphs in the case of two agents and for paths and stars in the general case. Unlike in previous work, our guarantees are with respect to the complete-graph maximin share, which allows us to compare possible guarantees for different graphs. For instance, we show that for biconnected graphs it is possible to obtain at least @math of the maximin share, while for the remaining graphs the guarantee is at most @math . In addition, we determine the optimal relaxation of envy-freeness that can be obtained with each graph for two agents, and characterize the set of trees and complete bipartite graphs that always admit an allocation satisfying envy-freeness up to one good (EF1) for three agents. Our work demonstrates several applications of graph-theoretical tools and concepts to fair division problems.
@cite_5 investigated the same model with respect to relaxations of envy-freeness. As we mentioned, they characterized the set of graphs for which EF1 can be guaranteed in the case of two agents with arbitrary monotonic valuations. Moreover, they showed that an EF1 allocation always exists on a path for @math . Intriguingly, the existence question for @math remains open, although they showed that an EF2 allocation can be guaranteed for any @math .
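Operationally, EF1 requires that whenever agent i envies agent j, the envy disappears after removing i's most-valued good from j's bundle. A small checker for additive valuations (the data layout is a hypothetical illustration) follows:

```python
def is_ef1(bundles, value):
    """bundles: one list of goods per agent; value[i][g]: agent i's value for g."""
    n = len(bundles)
    for i in range(n):
        v_own = sum(value[i][g] for g in bundles[i])
        for j in range(n):
            if i == j or not bundles[j]:
                continue
            v_other = sum(value[i][g] for g in bundles[j])
            # Envy must vanish after dropping i's most-valued good in j's bundle.
            if v_own < v_other - max(value[i][g] for g in bundles[j]):
                return False
    return True

value = [[4, 3, 2], [4, 4, 1]]        # two agents, three goods
print(is_ef1([[0], [1, 2]], value))   # True: agent 0 envies, but within one good
```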
{ "cite_N": [ "@cite_5" ], "mid": [ "2889292676", "2949121341", "2555059222", "2964250178" ], "abstract": [ "We study the existence of allocations of indivisible goods that are envy-free up to one good (EF1), under the additional constraint that each bundle needs to be connected in an underlying item graph G. When the items are arranged in a path, we show that EF1 allocations are guaranteed to exist for arbitrary monotonic utility functions over bundles, provided that either there are at most four agents, or there are any number of agents but they all have identical utility functions. Our existence proofs are based on classical arguments from the divisible cake-cutting setting, and involve discrete analogues of cut-and-choose, of Stromquist's moving-knife protocol, and of the Su-Simmons argument based on Sperner's lemma. Sperner's lemma can also be used to show that on a path, an EF2 allocation exists for any number of agents. Except for the results using Sperner's lemma, all of our procedures can be implemented by efficient algorithms. Our positive results for paths imply the existence of connected EF1 or EF2 allocations whenever G is traceable, i.e., contains a Hamiltonian path. For the case of two agents, we completely characterize the class of graphs @math that guarantee the existence of EF1 allocations as the class of graphs whose biconnected components are arranged in a path. This class is strictly larger than the class of traceable graphs; one can be check in linear time whether a graph belongs to this class, and if so return an EF1 allocation.", "We study the problem of fair allocation for indivisible goods. We use the the maxmin share paradigm introduced by Budish as a measure for fairness. Procacciafirst (EC'14) were first to investigate this fundamental problem in the additive setting. In contrast to what real-world experiments suggest, they show that a maxmin guarantee (1- @math allocation) is not always possible even when the number of agents is limited to 3. While the existence of an approximation solution (e.g. a @math - @math allocation) is quite straightforward, improving the guarantee becomes subtler for larger constants. Procaccia provide a proof for existence of a @math - @math allocation and leave the question open for better guarantees. Our main contribution is an answer to the above question. We improve the result of ! to a @math factor in the additive setting. The main idea for our @math - @math allocation method is clustering the agents. To this end, we introduce three notions and techniques, namely reducibility, matching allocation, and cycle-envy-freeness, and prove the approximation guarantee of our algorithm via non-trivial applications of these techniques. Our analysis involves coloring and double counting arguments that might be of independent interest. One major shortcoming of the current studies on fair allocation is the additivity assumption on the valuations. We alleviate this by extending our results to the case of submodular, fractionally subadditive, and subadditive settings. More precisely, we give constant approximation guarantees for submodular and XOS agents, and a logarithmic approximation for the case of subadditive agents. Furthermore, we complement our results by providing close upper bounds for each class of valuation functions. 
Finally, we present algorithms to find such allocations for additive, submodular, and XOS settings in polynomial time.", "We study cake cutting on a graph, where agents can only evaluate their shares relative to their neighbors. This is an extension of the classical problem of fair division to incorporate the notion of social comparison from the social sciences. We say an allocation is locally envy-free if no agent envies a neighbor's allocation, and locally proportional if each agent values its own allocation as much as the average value of its neighbors' allocations. We generalize the classical \"Cut and Choose\" protocol for two agents to this setting, by fully characterizing the set of graphs for which an oblivious single-cutter protocol can give locally envy-free (thus also locally-proportional) allocations. We study the price of envy-freeness, which compares the total value of an optimal allocation with that of an optimal, locally envy-free allocation. Surprisingly, a lower bound of @math on the price of envy-freeness for global allocations also holds for local envy-freeness in any connected graph, so sparse graphs do not provide more flexibility asymptotically with respect to the quality of envy-free allocations.", "The goal of fair division is to distribute resources among competing players in a \"fair\" way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations." ] }
1908.05433
2967651842
We study the fair allocation of indivisible goods under the assumption that the goods form an undirected graph and each agent must receive a connected subgraph. Our focus is on well-studied fairness notions including envy-freeness and maximin share fairness. We establish graph-specific maximin share guarantees, which are tight for large classes of graphs in the case of two agents and for paths and stars in the general case. Unlike in previous work, our guarantees are with respect to the complete-graph maximin share, which allows us to compare possible guarantees for different graphs. For instance, we show that for biconnected graphs it is possible to obtain at least @math of the maximin share, while for the remaining graphs the guarantee is at most @math . In addition, we determine the optimal relaxation of envy-freeness that can be obtained with each graph for two agents, and characterize the set of trees and complete bipartite graphs that always admit an allocation satisfying envy-freeness up to one good (EF1) for three agents. Our work demonstrates several applications of graph-theoretical tools and concepts to fair division problems.
Besides @cite_12 and @cite_5 , a number of other authors have recently studied fairness under connectivity constraints. @cite_4 investigated maximin share fairness in the case of cycles, also concentrating on the G-MMS notion, while @cite_9 focused on paths and provided approximations of envy-freeness, proportionality, as well as another fairness notion called equitability. @cite_20 considered fairness in conjunction with the economic efficiency notion of Pareto optimality. @cite_15 studied the problem of allocating chores, where all items yield disutility to the agents, and gave complexity results on deciding the existence of envy-free, proportional, and equitable allocations for paths and stars.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_5", "@cite_15", "@cite_12", "@cite_20" ], "mid": [ "2964250178", "2971548201", "2788938373", "2949883232" ], "abstract": [ "The goal of fair division is to distribute resources among competing players in a \"fair\" way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations.", "Abstract We study the problem of fairly allocating indivisible goods to groups of agents. Agents in the same group share the same set of goods even though they may have different preferences. Previous work has focused on unanimous fairness, in which all agents in each group must agree that their group's share is fair. Under this strict requirement, fair allocations exist only for small groups. We introduce the concept of democratic fairness, which aims to satisfy a certain fraction of the agents in each group. This concept is better suited to large groups such as cities or countries. We present protocols for democratic fair allocation among two or more arbitrarily large groups of agents with monotonic, additive, or binary valuations. For two groups with arbitrary monotonic valuations, we give an efficient protocol that guarantees envy-freeness up to one good for at least 1 2 of the agents in each group, and prove that the 1 2 fraction is optimal. We also present other protocols that make weaker fairness guarantees to more agents in each group, or to more groups. Our protocols combine techniques from different fields, including combinatorial game theory, cake cutting, and voting.", "In the context of fair allocation of indivisible items, fairness concepts often compare the satisfaction of an agent to the satisfaction she would have from items that are not allocated to her: in particular, envy-freeness requires that no agent prefers the share of someone else to her own share. We argue that these notions could also be defined relative to the knowledge that an agent has on how the items that she does not receive are distributed among other agents. We define a family of epistemic notions of envy-freeness, parameterized by a social graph, where an agent observes the share of her neighbours but not of her non-neighbours. We also define an intermediate notion between envy-freeness and proportionality, also parameterized by a social graph. 
These weaker notions of envy-freeness are useful when seeking a fair allocation, since envy-freeness is often too strong. We position these notions with respect to known ones, thus revealing new rich hierarchies of fairness concepts. Finally, we present a very general framework that covers all the existing and many new fairness concepts.", "The paper considers fair allocation of indivisible nondisposable items that generate disutility (chores). We assume that these items are placed in the vertices of a graph and each agent’s share has to form a connected subgraph of this graph. Although a similar model has been investigated before for goods, we show that the goods and chores settings are inherently different. In particular, it is impossible to derive the solution of the chores instance from the solution of its naturally associated fair division instance. We consider three common fair division solution concepts, namely proportionality, envy-freeness and equitability, and two individual disutility aggregation functions: additive and maximum based. We show that deciding the existence of a fair allocation is hard even if the underlying graph is a path or a star. We also present some efficiently solvable special cases for these graph topologies." ] }
1908.05433
2967651842
We study the fair allocation of indivisible goods under the assumption that the goods form an undirected graph and each agent must receive a connected subgraph. Our focus is on well-studied fairness notions including envy-freeness and maximin share fairness. We establish graph-specific maximin share guarantees, which are tight for large classes of graphs in the case of two agents and for paths and stars in the general case. Unlike in previous work, our guarantees are with respect to the complete-graph maximin share, which allows us to compare possible guarantees for different graphs. For instance, we show that for biconnected graphs it is possible to obtain at least @math of the maximin share, while for the remaining graphs the guarantee is at most @math . In addition, we determine the optimal relaxation of envy-freeness that can be obtained with each graph for two agents, and characterize the set of trees and complete bipartite graphs that always admit an allocation satisfying envy-freeness up to one good (EF1) for three agents. Our work demonstrates several applications of graph-theoretical tools and concepts to fair division problems.
A related line of work also combines graphs with resource allocation, but uses graphs to capture the connection between agents instead of goods. In particular, a graph specifies the acquaintance relationship among agents. @cite_1 and @cite_2 defined graph-based versions of envy-freeness and proportionality with divisible resources where agents only evaluate their shares relative to other agents with whom they are acquainted. @cite_21 and @cite_23 studied the graph-based version of envy-freeness with indivisible goods. @cite_13 introduced a number of fairness notions parameterized by the acquaintance graph.
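Under this acquaintance-graph view, the fairness check changes only in which comparisons are made: each agent evaluates her bundle against her neighbours' bundles alone. A sketch with a hypothetical data layout:

```python
def is_locally_envy_free(bundles, value, neighbors):
    """Local envy-freeness on a social graph.
    bundles: one list of goods per agent; value[i][g]: agent i's value for g;
    neighbors: agent -> iterable of acquainted agents."""
    for i, own in enumerate(bundles):
        v_own = sum(value[i][g] for g in own)
        for j in neighbors[i]:
            if v_own < sum(value[i][g] for g in bundles[j]):
                return False                  # i envies an acquaintance j
    return True

# 3 agents on a path 0 - 1 - 2; agent 0 never observes agent 2's bundle.
value = [[3, 1, 5], [1, 4, 1], [2, 1, 5]]
bundles = [[0], [1], [2]]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
print(is_locally_envy_free(bundles, value, neighbors))
# True, although agent 0 would envy agent 2 if they were acquainted.
```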
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_23", "@cite_2", "@cite_13" ], "mid": [ "2555059222", "2949883232", "2951250707", "2788938373" ], "abstract": [ "We study cake cutting on a graph, where agents can only evaluate their shares relative to their neighbors. This is an extension of the classical problem of fair division to incorporate the notion of social comparison from the social sciences. We say an allocation is locally envy-free if no agent envies a neighbor's allocation, and locally proportional if each agent values its own allocation as much as the average value of its neighbors' allocations. We generalize the classical Cut and Choose\" protocol for two agents to this setting, by fully characterizing the set of graphs for which an oblivious single-cutter protocol can give locally envy-free (thus also locally-proportional) allocations. We study the price of envy-freeness , which compares the total value of an optimal allocation with that of an optimal, locally envy-free allocation. Surprisingly, a lower bound of @math on the price of envy-freeness for global allocations also holds for local envy-freeness in any connected graph, so sparse graphs do not provide more flexibility asymptotically with respect to the quality of envy-free allocations.", "The paper considers fair allocation of indivisible nondisposable items that generate disutility (chores). We assume that these items are placed in the vertices of a graph and each agent’s share has to form a connected subgraph of this graph. Although a similar model has been investigated before for goods, we show that the goods and chores settings are inherently different. In particular, it is impossible to derive the solution of the chores instance from the solution of its naturally associated fair division instance. We consider three common fair division solution concepts, namely proportionality, envy-freeness and equitability, and two individual disutility aggregation functions: additive and maximum based. We show that deciding the existence of a fair allocation is hard even if the underlying graph is a path or a star. We also present some efficiently solvable special cases for these graph topologies.", "We consider fair allocation of indivisible items under an additional constraint: there is an undirected graph describing the relationship between the items, and each agent's share must form a connected subgraph of this graph. This framework captures, e.g., fair allocation of land plots, where the graph describes the accessibility relation among the plots. We focus on agents that have additive utilities for the items, and consider several common fair division solution concepts, such as proportionality, envy-freeness and maximin share guarantee. While finding good allocations according to these solution concepts is computationally hard in general, we design efficient algorithms for special cases where the underlying graph has simple structure, and or the number of agents -or, less restrictively, the number of agent types- is small. In particular, despite non-existence results in the general case, we prove that for acyclic graphs a maximin share allocation always exists and can be found efficiently.", "In the context of fair allocation of indivisible items, fairness concepts often compare the satisfaction of an agent to the satisfaction she would have from items that are not allocated to her: in particular, envy-freeness requires that no agent prefers the share of someone else to her own share. 
We argue that these notions could also be defined relative to the knowledge that an agent has on how the items that she does not receive are distributed among other agents. We define a family of epistemic notions of envy-freeness, parameterized by a social graph, where an agent observes the share of her neighbours but not of her non-neighbours. We also define an intermediate notion between envy-freeness and proportionality, also parameterized by a social graph. These weaker notions of envy-freeness are useful when seeking a fair allocation, since envy-freeness is often too strong. We position these notions with respect to known ones, thus revealing new rich hierarchies of fairness concepts. Finally, we present a very general framework that covers all the existing and many new fairness concepts." ] }
1908.05552
2968730564
Musculoskeletal robots that are based on pneumatic actuation have a variety of properties, such as compliance and back-drivability, that render them particularly appealing for human-robot collaboration. However, programming interactive and responsive behaviors for such systems is extremely challenging due to the nonlinearity and uncertainty inherent to their control. In this paper, we propose an approach for learning Bayesian Interaction Primitives for musculoskeletal robots given a limited set of example demonstrations. We show that this approach is capable of real-time state estimation and response generation for interaction with a robot for which no analytical model exists. Human-robot interaction experiments on a 'handshake' task show that the approach generalizes to new positions, interaction partners, and movement velocities.
Robots with pneumatic artificial muscles (PAMs) and compliant limbs have been shown to be desirable for human-robot interaction scenarios @cite_22 @cite_13 . When configured in an anthropomorphic musculoskeletal structure, such robots provide an intriguing platform for human-robot interaction (HRI) @cite_4 due to their potential to generate human-like motions while offering a degree of safety as a result of their compliance when confronted with an external force -- such as contact with a human. Recent work @cite_16 has shown the value of utilizing McKibben actuators in the design of these robots, due to their inherent compliance and inexpensive material cost. However, while analytical kinematics models are in theory possible @cite_20 @cite_3 , they are not always practical due to the effects of friction and the deterioration of mechanical elements, which are difficult to account for (although some gains have been made in this area @cite_15 ). Subsequently, this work proposes using a method based on learning from demonstration @cite_7 @cite_21 , which is a well-established methodology for teaching robots complex motor skills based on observed data.
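To make the learning-from-demonstration setting concrete, here is a minimal numpy sketch, not the cited works' implementation: each demonstrated trajectory is projected onto a fixed set of radial basis functions by least squares, and the demonstrations are summarized by a Gaussian over the resulting weight vectors, in the spirit of movement/interaction-primitive methods. Function names, basis count, and widths are illustrative assumptions.

```python
import numpy as np

def rbf_features(phase, n_basis=10, width=0.02):
    """Radial basis functions evaluated at normalized phase values in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    feats = np.exp(-(phase[:, None] - centers[None, :]) ** 2 / (2.0 * width))
    return feats / feats.sum(axis=1, keepdims=True)  # normalize per time step

def fit_demonstrations(demos, n_basis=10):
    """Project each demo onto basis weights (least squares), then summarize
    the demonstrations by a Gaussian over the weight vectors."""
    weights = []
    for traj in demos:                        # traj: (T,) e.g. one joint's trajectory
        phase = np.linspace(0.0, 1.0, len(traj))
        Phi = rbf_features(phase, n_basis)    # (T, n_basis) design matrix
        w, *_ = np.linalg.lstsq(Phi, traj, rcond=None)
        weights.append(w)
    W = np.stack(weights)
    return W.mean(axis=0), np.cov(W, rowvar=False)

# toy usage: five noisy 'handshake-like' demonstrations
demos = [np.sin(np.linspace(0, np.pi, 100)) + 0.05 * np.random.randn(100)
         for _ in range(5)]
mean_w, cov_w = fit_demonstrations(demos)
```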
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_7", "@cite_21", "@cite_3", "@cite_15", "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "2869878434", "2197436471", "2537120235", "2101027380" ], "abstract": [ "ABSTRACTThis paper describes the construction and design decisions of an anthropomorphic musculoskelet al robot arm actuated by pneumatic artificial muscles. This robot was designed to allow human-inspired compliant movements without the need to replicate the human body-structure in detail. This resulted in an mechanically simple design while preserving the motoric characteristics of a human. Besides the constructional details of the robot we will present two experiments to show the robots abilities regarding to its dexterity and compliance.", "This paper demonstrates a method for simultaneous transfer of positional and force requirements for in-contact tasks from a human instructor to a robotic arm through kinesthetic teaching. This is achieved by a specific use of the sensory configuration, where a force torque sensor is mounted between the tool and the flange of a robotic arm endowed with integrated torque sensors at each joint. The human demonstration is modeled using Dynamic Movement Primitives. Following human demonstration, the robot arm is provided with the capacity to perform sequential in-contact tasks, for example writing on a notepad a previously demonstrated sequence of characters. During the reenactment of the task, the system is not only able to imitate and generalize from demonstrated trajectories, but also from their associated force profiles. In fact, the implemented framework is extended to successfully recover from perturbations of the trajectory during reenactment and to cope with dynamic environments.", "Human-scale mobile robots with arms have the potential to assist people with a variety of tasks. We present a proof-of-concept system that has enabled a person with severe quadriplegia named Henry Evans to shave himself in his own home using a general purpose mobile manipulator (PR2 from Willow Garage). The robot primarily provides assistance by holding a tool (e.g., an electric shaver) at user-specified locations around the user's head, while he she moves his her head against it. If the robot detects forces inappropriate for the task (e.g., shaving), it withdraws the tool. The robot also holds a mirror with its other arm, so that the user can see what he she is doing. For all aspects of the task, the robot and the human work together. The robot uses a series of distinct semi-autonomous subsystems during the task to navigate to poses next to the wheelchair, attain initial arm configurations, register a 3D model of the person's head, move the tool to coarse semantically-labeled tool poses (e.g, “Cheek”), and finely position the tool via incremental movements. Notably, while moving the tool near the user's head, the robot uses an ellipsoidal coordinate system attached to the 3D head model. In addition to describing the complete robotic system, we report results from Henry Evans using it to shave both sides of his face while sitting in his wheelchair at home. He found the process to be long (54 minutes) and the interface unintuitive. Yet, he also found the system to be comfortable to use, felt safe while using it, was satisfied with it, and preferred it to a human caregiver.", "In this paper we describe and practically demonstrate a robotic arm hand system that is controlled in realtime in 6D Cartesian space through measured human muscular activity. 
The soft-robotics control architecture of the robotic system ensures safe physical human-robot interaction as well as stable behaviour while operating in an unstructured environment. Muscular control is realised via surface electromyography, a non-invasive and simple way to gather human muscular activity from the skin. A standard supervised machine learning system is used to create a map from muscle activity to hand position, orientation and grasping force which then can be evaluated in real time—the existence of such a map is guaranteed by gravity compensation and low-speed movement. No kinematic or dynamic model of the human arm is necessary, which makes the system quickly adaptable to anyone. Numerical validation shows that the system achieves good movement precision. Live evaluation and demonstration of the system during a robotic trade fair is reported and confirms the validity of the approach, which has potential applications in muscle-disorder rehabilitation or in teleoperation where a close-range, safe master/slave interaction is required, and/or when optical/magnetic position tracking cannot be enforced." ] }
1908.05441
2968447920
Prior work has demonstrated that question classification (QC), recognizing the problem domain of a question, can help answer it more accurately. However, developing strong QC algorithms has been hindered by the limited size and complexity of annotated data available. To address this, we present the largest challenge dataset for QC, containing 7,787 science exam questions paired with detailed classification labels from a fine-grained hierarchical taxonomy of 406 problem domains. We then show that a BERT-based model trained on this dataset achieves a large (+0.12 MAP) gain compared with previous methods, while also achieving state-of-the-art performance on benchmark open-domain and biomedical QC datasets. Finally, we show that using this model's predictions of question topic significantly improves the accuracy of a question answering system by +1.7 P@1, with substantial future gains possible as QC performance improves.
Question classification typically makes use of a combination of syntactic, semantic, surface, and embedding methods. Syntactic patterns @cite_25 @cite_37 @cite_28 @cite_7 and syntactic dependencies @cite_2 have been shown to improve performance, while syntactically or semantically important words are often expanded using WordNet hypernyms or Unified Medical Language System categories (for the medical domain) to help mitigate sparsity @cite_10 @cite_34 @cite_19 . Keyword identification helps identify specific terms useful for classification @cite_39 @cite_2 @cite_18 . Similarly, named entity recognizers @cite_5 @cite_32 or lists of semantically related words @cite_5 @cite_19 can also be used to establish broad topics or entity categories and mitigate sparsity, as can word embeddings @cite_17 @cite_35 . Here, we empirically demonstrate that many of these existing methods do not transfer to the science domain.
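As a concrete illustration of the hypernym-expansion idea above (a sketch only, not any cited system's feature pipeline), the following uses NLTK's WordNet interface to augment a question's tokens with hypernym lemmas; the function name and the depth limit are assumptions.

```python
from nltk.corpus import wordnet as wn  # requires a one-time nltk.download('wordnet')

def expand_with_hypernyms(tokens, max_depth=2):
    """Augment question tokens with WordNet hypernym lemmas to mitigate sparsity."""
    expanded = list(tokens)
    for tok in tokens:
        for syn in wn.synsets(tok):
            frontier, depth = [syn], 0
            while frontier and depth < max_depth:
                frontier = [h for s in frontier for h in s.hypernyms()]
                expanded.extend(l.name() for s in frontier for l in s.lemmas())
                depth += 1
    return expanded

# e.g. "comet" also yields hypernym lemmas such as 'celestial_body'
print(expand_with_hypernyms(["comet", "orbit"]))
```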
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_18", "@cite_7", "@cite_28", "@cite_32", "@cite_39", "@cite_19", "@cite_2", "@cite_5", "@cite_34", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "2144108169", "2149048832", "1504212872", "2251074656" ], "abstract": [ "We propose a novel algorithm for inducing semantic taxonomies. Previous algorithms for taxonomy induction have typically focused on independent classifiers for discovering new single relationships based on hand-constructed or automatically discovered textual patterns. By contrast, our algorithm flexibly incorporates evidence from multiple classifiers over heterogenous relationships to optimize the entire structure of the taxonomy, using knowledge of a word's coordinate terms to help in determining its hypernyms, and vice versa. We apply our algorithm on the problem of sense-disambiguated noun hyponym acquisition, where we combine the predictions of hypernym and coordinate term classifiers with the knowledge in a preexisting semantic taxonomy (WordNet 2.1). We add 10,000 novel synsets to WordNet 2.1 at 84 precision, a relative error reduction of 70 over a non-joint algorithm using the same component classifiers. Finally, we show that a taxonomy built using our algorithm shows a 23 relative F-score improvement over WordNet 2.1 on an independent testset of hypernym pairs.", "To respond correctly to a free form factual question given a large collection of text data, one needs to understand the question to a level that allows determining some of the constraints the question imposes on a possible answer. These constraints may include a semantic classification of the sought after answer and may even suggest using different strategies when looking for and verifying a candidate answer. This work presents a machine learning approach to question classification. Guided by a layered semantic hierarchy of answer types, we develop a hierarchical classifier that classifies questions into fine-grained classes. This work also performs a systematic study of the use of semantic information sources in natural language classification tasks. It is shown that, in the context of question classification, augmenting the input of the classifier with appropriate semantic category information results in significant improvements to classification accuracy. We show accurate results on a large collection of free-form questions used in TREC 10 and 11.", "Objective : Development of a general natural-language processor that identifies clinical information in narrative reports and maps that information into a structured representation containing clinical terms. @PARASPLIT Design : The natural-language processor provides three phases of processing, all of which are driven by different knowledge sources. The first phase performs the parsing. It identifies the structure of the text through use of a grammar that defines semantic patterns and a target form. The second phase, regularization, standardizes the terms in the initial target structure via a compositional mapping of multi-word phrases. The third phase, encoding, maps the terms to a controlled vocabulary. Radiology is the test domain for the processor and the target structure is a formal model for representing clinical information in that domain. @PARASPLIT Measurements : The impression sections of 230 radiology reports were encoded by the processor. 
Results of an automated query of the resultant database for the occurrences of four diseases were compared with the analysis of a panel of three physicians to determine recall and precision. Results: Without training specific to the four diseases, recall and precision of the system (combined effect of the processor and query generator) were 70% and 87%. Training of the query component increased recall to 85% without changing precision.", "Relational phrases (e.g., “got married to”) and their hypernyms (e.g., “is a relative of”) are central for many tasks including question answering, open information extraction, paraphrasing, and entailment detection. This has motivated the development of several linguistic resources (e.g. DIRT, PATTY, and WiseNet) which systematically collect and organize relational phrases. These resources have demonstrable practical benefits, but are each limited due to noise, sparsity, or size. We present a new general-purpose method, RELLY, for constructing a large hypernymy graph of relational phrases with high-quality subsumptions using collective probabilistic programming techniques. Our graph induction approach integrates small high-precision knowledge bases together with large automatically curated resources, and reasons collectively to combine these resources into a consistent graph. Using RELLY, we construct a high-coverage, high-precision hypernymy graph consisting of 20K relational phrases and 35K hypernymy links. Our evaluation indicates a hypernymy link precision of 78%, and demonstrates the value of this resource for a document-relevance ranking task." ] }
1908.05146
2968782052
Real-time 3D reconstruction from RGB-D sensor data plays an important role in many robotic applications, such as object modeling and mapping. The popular method of fusing depth information into a truncated signed distance function (TSDF) and applying the marching cubes algorithm for mesh extraction has severe issues with thin structures: not only does it lead to loss of accuracy, but it can generate completely wrong surfaces. To address this, we propose the directional TSDF - a novel representation that stores opposite surfaces separate from each other. The marching cubes algorithm is modified accordingly to retrieve a coherent mesh representation. We further increase the accuracy by using surface gradient-based ray casting for fusing new measurements. We show that our method outperforms state-of-the-art TSDF reconstruction algorithms in mesh accuracy.
Surface reconstruction from range data has been an active research topic for a long time. It gained in popularity through the availability of affordable depth cameras and parallel computing hardware. Zollhöfer et al. @cite_10 give a comprehensive overview of modern 3D reconstruction from RGB-D data. The two main streams of research are TSDF fusion @cite_25 @cite_1 and surfel extraction @cite_4 @cite_6 . TSDF-based methods make up the majority, due to their simplicity and mesh output. Surfels, however, maintain the surface and observation direction in the form of a normal per surfel; thus they can distinguish observations from different sides. Another interesting approach is presented by Schöps et al. @cite_17 , who triangulate surfels to create a mesh representation.
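For context on the TSDF-fusion stream of work above, here is a minimal numpy sketch of the standard per-voxel update those systems build on: each new observation contributes a truncated signed distance that is merged into the grid by a weighted running average. The truncation constant and the unit observation weight are illustrative assumptions, not any cited implementation.

```python
import numpy as np

TRUNC = 0.05  # truncation distance in meters (assumed value)

def integrate(tsdf, weight, sdf_obs, obs_weight=1.0):
    """KinectFusion-style TSDF update: weighted running average per voxel."""
    d = np.clip(sdf_obs / TRUNC, -1.0, 1.0)      # truncate and normalize the SDF
    mask = sdf_obs > -TRUNC                      # skip voxels far behind the surface
    w_new = weight + obs_weight * mask
    tsdf_new = np.where(
        mask, (tsdf * weight + d * obs_weight) / np.maximum(w_new, 1e-9), tsdf)
    return tsdf_new, w_new

# toy usage on a 64^3 grid with a synthetic observation
grid, w = np.zeros((64, 64, 64)), np.zeros((64, 64, 64))
grid, w = integrate(grid, w, np.random.uniform(-0.2, 0.2, grid.shape))
```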
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_6", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "2892904655", "2020429267", "2071906076", "832925222" ], "abstract": [ "We address the problem of mesh reconstruction from live RGB-D video, assuming a calibrated camera and poses provided externally (e.g., by a SLAM system). In contrast to most existing approaches, we do not fuse depth measurements in a volume but in a dense surfel cloud. We asynchronously (re)triangulate the smoothed surfels to reconstruct a surface mesh. This novel approach enables to maintain a dense surface representation of the scene during SLAM which can quickly adapt to loop closures. This is possible by deforming the surfel cloud and asynchronously remeshing the surface where necessary. The surfel-based representation also naturally supports strongly varying scan resolution. In particular, it reconstructs colors at the input camera's resolution. Moreover, in contrast to many volumetric approaches, ours can reconstruct thin objects since objects do not need to enclose a volume. We demonstrate our approach in a number of experiments, showing that it produces reconstructions that are competitive with the state-of-the-art, and we discuss its advantages and limitations. The algorithm (excluding loop closure functionality) is available as open source at this https URL.", "We present a novel method to obtain fine-scale detail in 3D reconstructions generated with low-budget RGB-D cameras or other commodity scanning devices. As the depth data of these sensors is noisy, truncated signed distance fields are typically used to regularize out the noise, which unfortunately leads to over-smoothed results. In our approach, we leverage RGB data to refine these reconstructions through shading cues, as color input is typically of much higher resolution than the depth data. As a result, we obtain reconstructions with high geometric detail, far beyond the depth resolution of the camera itself. Our core contribution is shading-based refinement directly on the implicit surface representation, which is generated from globally-aligned RGB-D images. We formulate the inverse shading problem on the volumetric distance field, and present a novel objective function which jointly optimizes for fine-scale surface geometry and spatially-varying surface reflectance. In order to enable the efficient reconstruction of sub-millimeter detail, we store and process our surface using a sparse voxel hashing scheme which we augment by introducing a grid hierarchy. A tailored GPU-based Gauss-Newton solver enables us to refine large shape models to previously unseen resolution within only a few seconds.", "Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. 
Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.", "Accurate recovery of 3D geometrical surfaces from calibrated 2D multi-view images is a fundamental yet active research area in computer vision. Despite the steady progress in multi-view stereo (MVS) reconstruction, many existing methods are still limited in recovering fine-scale details and sharp features while suppressing noises, and may fail in reconstructing regions with less textures. To address these limitations, this paper presents a detail-preserving and content-aware variational (DCV) MVS method, which reconstructs the 3D surface by alternating between reprojection error minimization and mesh denoising. In reprojection error minimization, we propose a novel inter-image similarity measure, which is effective to preserve fine-scale details of the reconstructed surface and builds a connection between guided image filtering and image registration. In mesh denoising, we propose a content-aware @math -minimization algorithm by adaptively estimating the @math value and regularization parameters. Compared with conventional isotropic mesh smoothing approaches, the proposed method is much more promising in suppressing noise while preserving sharp features. Experimental results on benchmark data sets demonstrate that our DCV method is capable of recovering more surface details, and obtains cleaner and more accurate reconstructions than the state-of-the-art methods. In particular, our method achieves the best results among all published methods on the Middlebury dino ring and dino sparse data sets in terms of both completeness and accuracy." ] }
1908.05146
2968782052
Real-time 3D reconstruction from RGB-D sensor data plays an important role in many robotic applications, such as object modeling and mapping. The popular method of fusing depth information into a truncated signed distance function (TSDF) and applying the marching cubes algorithm for mesh extraction has severe issues with thin structures: not only does it lead to loss of accuracy, but it can generate completely wrong surfaces. To address this, we propose the directional TSDF - a novel representation that stores opposite surfaces separate from each other. The marching cubes algorithm is modified accordingly to retrieve a coherent mesh representation. We further increase the accuracy by using surface gradient-based ray casting for fusing new measurements. We show that our method outperforms state-of-the-art TSDF reconstruction algorithms in mesh accuracy.
In contrast to the related works, we propose an improved representation based on the TSDF that utilizes the idea from Henry et al. @cite_12 to represent surfaces with different orientations separately from each other. The implementation is based on the work of Dong et al. @cite_1 , which also serves as a baseline for state-of-the-art methods using voxel projection and the marching cubes algorithm. We also apply the gradient-based ray casting concept from @cite_5 . The key features of our method are: the directional TSDF representation that divides the modeled volume into six directions, thereby separately representing surfaces with different orientations; a gradient-based ray casting fusion for improved results; a thread-safe parallelization of ray casting fusion; and a modified marching cubes algorithm for mesh extraction from this representation.
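A minimal sketch of the direction-routing step that the directional TSDF implies, under our reading of the representation (not the authors' code): each measurement is assigned to one of six axis-aligned volumes according to its surface normal.

```python
import numpy as np

# the six axis-aligned directions: +X, -X, +Y, -Y, +Z, -Z
DIRECTIONS = np.array([[1, 0, 0], [-1, 0, 0],
                       [0, 1, 0], [0, -1, 0],
                       [0, 0, 1], [0, 0, -1]], dtype=float)

def direction_index(normal):
    """Index of the volume into which an observation with this normal is fused."""
    n = normal / np.linalg.norm(normal)
    return int(np.argmax(DIRECTIONS @ n))

assert direction_index(np.array([0.1, -0.9, 0.2])) == 3  # closest to -Y
```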
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_12" ], "mid": [ "2074208271", "2141763411", "2561394090", "832925222" ], "abstract": [ "We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993 in vertebral labeling (with 'success' defined as PDE <5 mm) using 1,718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535 success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the same registration could be solved with 99.993 success in 6.3 s. The ability to register CT to fluoroscopy in a manner robust to patient deformation could be valuable in applications such as radiation therapy, interventional radiology, and an assistant to target localization (e.g., vertebral labeling) in image-guided spine surgery.", "We present the first implementation of a volume ray casting algorithm for tetrahedral meshes running on off-the-shelf programmable graphics hardware. Our implementation avoids the memory transfer bottleneck of the graphics bus since the complete mesh data is stored in the local memory of the graphics adapter and all computations, in particular ray traversal and ray integration, are performed by the graphics processing unit. Analogously to other ray casting algorithms, our algorithm does not require an expensive cell sorting. Provided that the graphics adapter offers enough texture memory, our implementation performs comparable to the fastest published volume rendering algorithms for unstructured meshes. Our approach works with cyclic and or non-convex meshes and supports early ray termination. Accurate ray integration is guaranteed by applying pre-integrated volume rendering. 
In order to achieve almost interactive modifications of transfer functions, we propose a new method for computing three-dimensional pre-integration tables.", "Truncated Signed Distance Fields (TSDFs) have become a popular tool in 3D reconstruction, as they allow building very high-resolution models of the environment in real-time on GPU. However, they have rarely been used for planning on robotic platforms, mostly due to high computational and memory requirements. We propose to reduce these requirements by using large voxel sizes, and extend the standard TSDF representation to be faster and better model the environment at these scales. We also propose a method to build Euclidean Signed Distance Fields (ESDFs), which are a common representation for planning, incrementally out of our TSDF representation. ESDFs provide Euclidean distance to the nearest obstacle at any point in the map, and also provide collision gradient information for use with optimization-based planners. We validate the reconstruction accuracy and real-time performance of our combined system on both new and standard datasets from stereo and RGB-D imagery. The complete system will be made available as an open-source library called voxblox.", "Accurate recovery of 3D geometrical surfaces from calibrated 2D multi-view images is a fundamental yet active research area in computer vision. Despite the steady progress in multi-view stereo (MVS) reconstruction, many existing methods are still limited in recovering fine-scale details and sharp features while suppressing noises, and may fail in reconstructing regions with less textures. To address these limitations, this paper presents a detail-preserving and content-aware variational (DCV) MVS method, which reconstructs the 3D surface by alternating between reprojection error minimization and mesh denoising. In reprojection error minimization, we propose a novel inter-image similarity measure, which is effective to preserve fine-scale details of the reconstructed surface and builds a connection between guided image filtering and image registration. In mesh denoising, we propose a content-aware @math -minimization algorithm by adaptively estimating the @math value and regularization parameters. Compared with conventional isotropic mesh smoothing approaches, the proposed method is much more promising in suppressing noise while preserving sharp features. Experimental results on benchmark data sets demonstrate that our DCV method is capable of recovering more surface details, and obtains cleaner and more accurate reconstructions than the state-of-the-art methods. In particular, our method achieves the best results among all published methods on the Middlebury dino ring and dino sparse data sets in terms of both completeness and accuracy." ] }
1908.05318
2968312879
We investigate the performance of a jet identification algorithm based on interaction networks (JEDI-net) to identify all-hadronic decays of high-momentum heavy particles produced at the LHC and distinguish them from ordinary jets originating from the hadronization of quarks and gluons. The jet dynamics is described as a set of one-to-one interactions between the jet constituents. Based on a representation learned from these interactions, the jet is associated with one of the considered categories. Unlike other architectures, the JEDI-net models achieve their performance without special handling of the sparse input jet representation, extensive pre-processing, particle ordering, or specific assumptions regarding the underlying detector geometry. The presented models give better results with fewer model parameters, offering interesting prospects for LHC applications.
Jet tagging is one of the most popular LHC-related tasks to which DL solutions have been applied. Several classification algorithms have been studied in the context of jet tagging at the LHC @cite_47 @cite_1 @cite_5 @cite_10 @cite_2 @cite_12 @cite_43 @cite_13 using DNNs, CNNs, or physics-inspired architectures. Recurrent and recursive layers have been used to construct jet classifiers starting from a list of reconstructed particle momenta @cite_3 @cite_23 @cite_46 . Recently, these different approaches, applied to the specific case of top quark jet identification, have been compared in Ref. @cite_19 . While many of these studies focus on data analysis, work is underway to apply these algorithms in the early stages of LHC real-time event processing, i.e. the trigger system. For example, Ref. @cite_11 focuses on converting these models into firmware for field programmable gate arrays (FPGAs) optimized for low latency (less than 1 @math s). If successful, such a program could allow for a more resource-efficient and effective event selection for future LHC runs.
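To illustrate the message-passing structure behind interaction-network taggers such as the one studied here, below is a toy numpy forward pass over a jet's constituents: an edge network maps sender-receiver feature pairs to "effects", which are summed at each receiver and fed, together with the node features, to a node network whose outputs are pooled into class scores. All dimensions and the random initialization are illustrative assumptions; this is not the JEDI-net architecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    return np.maximum(x @ w1, 0.0) @ w2      # one hidden layer with ReLU

# toy jet: N constituents, F features each (e.g. pT, eta, phi, energy)
N, F, E, H, C = 8, 4, 6, 16, 5
X = rng.normal(size=(N, F))

# fully connected sender/receiver pairs (no self-loops)
send, recv = zip(*[(i, j) for i in range(N) for j in range(N) if i != j])
pairs = np.concatenate([X[list(send)], X[list(recv)]], axis=1)  # (N*(N-1), 2F)

# edge network -> per-edge 'effects', aggregated at each receiver
w1e, w2e = rng.normal(size=(2 * F, H)), rng.normal(size=(H, E))
effects = mlp(pairs, w1e, w2e)
agg = np.zeros((N, E))
np.add.at(agg, list(recv), effects)          # sum incoming effects per node

# node network + global sum pooling -> C class scores
w1n, w2n = rng.normal(size=(F + E, H)), rng.normal(size=(H, C))
scores = mlp(np.concatenate([X, agg], axis=1), w1n, w2n).sum(axis=0)
```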
{ "cite_N": [ "@cite_13", "@cite_46", "@cite_1", "@cite_3", "@cite_43", "@cite_19", "@cite_23", "@cite_2", "@cite_5", "@cite_47", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2792435888", "2586557507", "2952645975", "2047792789" ], "abstract": [ "We apply computer vision with deep learning -- in the form of a convolutional neural network (CNN) -- to build a highly effective boosted top tagger. Previous work (the \"DeepTop\" tagger of ) has shown that a CNN-based top tagger can achieve comparable performance to state-of-the-art conventional top taggers based on high-level inputs. Here, we introduce a number of improvements to the DeepTop tagger, including architecture, training, image preprocessing, sample size and color pixels. Our final CNN top tagger outperforms BDTs based on high-level inputs by a factor of @math --3 or more in background rejection, over a wide range of tagging efficiencies and fiducial jet selections. As reference points, we achieve a QCD background rejection factor of 500 (60) at 50 top tagging efficiency for fully-merged (non-merged) top jets with @math in the 800--900 GeV (350--450 GeV) range. Our CNN can also be straightforwardly extended to the classification of other types of jets, and the lessons learned here may be useful to others designing their own deep NNs for LHC applications.", "Machine learning based on convolutional neural networks can be used to study jet images from the LHC. Top tagging in fat jets offers a well-defined framework to establish our DeepTop approach and compare its performance to QCD-based top taggers. We first optimize a network architecture to identify top quarks in Monte Carlo simulations of the Standard Model production channel. Using standard fat jets we then compare its performance to a multivariate QCD-based top tagger. We find that both approaches lead to comparable performance, establishing convolutional networks as a promising new approach for multivariate hypothesis-based top tagging.", "Since the machine learning techniques are improving rapidly, it has been shown that the image recognition techniques in deep neural networks can be used to detect jet substructure. And it turns out that deep neural networks can match or outperform traditional approach of expert features. However, there are disadvantages such as sparseness of jet images. Based on the natural tree-like structure of jet sequential clustering, the recursive neural networks (RecNNs), which embed jet clustering history recursively as in natural language processing, have a better behavior when confronted with these problems. We thus try to explore the performance of RecNN in quark gluon discrimination. In order to indicate the realistic potential at the LHC, We include the detector simulation in our data preparation. We attempt to implement particle flow identification in one-hot vectors or using instead a recursively defined pt-weighted charge. The results show that RecNNs work better than the baseline BDT by a few percent in gluon rejection at the working point of 50 quark acceptance. However, extra implementation of particle flow identification only increases the performance slightly. We also experimented on some relevant aspects which might influence the performance of networks. It shows even only particle flow identification as input feature without any extra information on momentum or angular position is already giving a fairly good result, which indicates that most of the information for q g discrimination is already included in the tree-structure itself. 
As a bonus, a rough u/d discrimination is also explored.", "We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon- initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets." ] }
1908.05217
2967452169
Recent advances in deep learning greatly boost the performance of object detection. State-of-the-art methods such as Faster-RCNN, FPN and R-FCN have achieved high accuracy in challenging benchmark datasets. However, these methods require fully annotated object bounding boxes for training, which are incredibly hard to scale up due to the high annotation cost. Weakly-supervised methods, on the other hand, only require image-level labels for training, but the performance is far below their fully-supervised counterparts. In this paper, we propose a semi-supervised large scale fine-grained detection method, which only needs bounding box annotations of a smaller number of coarse-grained classes and image-level labels of large scale fine-grained classes, and can detect all classes at nearly fully-supervised accuracy. We achieve this by utilizing the correlations between coarse-grained and fine-grained classes with shared backbone, soft-attention based proposal re-ranking, and a dual-level memory module. Experiment results show that our methods can achieve close accuracy on object detection to state-of-the-art fully-supervised methods on two large scale datasets, ImageNet and OpenImages, with only a small fraction of fully annotated classes.
There are only a few research works in the semi-supervised detection field. @cite_35 proposes an LSDA-based method that can handle disjoint-set semi-supervised detection, but this method is not end-to-end trainable and cannot be easily extended to state-of-the-art detection frameworks. @cite_20 proposes a semi-MIL method for disjoint-set semi-supervised detection, which achieves better performance than @cite_35 . Note-RCNN @cite_21 proposes a mining and training scheme for semi-supervised detection, but it needs seed boxes for all categories. YOLO9000 @cite_41 can also be viewed as a semi-supervised detection framework, but it is no more than a naive combination of a detection and a classification stream and relies only on the implicit shared feature learning of the network.
{ "cite_N": [ "@cite_41", "@cite_35", "@cite_21", "@cite_20" ], "mid": [ "2405856298", "2606831796", "2951270658", "2798269247" ], "abstract": [ "Supervised contour detection methods usually require many labeled training images to obtain satisfactory performance. However, a large set of annotated data might be unavailable or extremely labor intensive. In this paper, we investigate the usage of semi-supervised learning (SSL) to obtain competitive detection accuracy with very limited training data (three labeled images). Specifically, we propose a semi-supervised structured ensemble learning approach for contour detection built on structured random forests (SRF). To allow SRF to be applicable to unlabeled data, we present an effective sparse representation approach to capture inherent structure in image patches by finding a compact and discriminative low-dimensional subspace representation in an unsupervised manner, enabling the incorporation of abundant unlabeled patches with their estimated structured labels to help SRF perform better node splitting. We re-examine the role of sparsity and propose a novel and fast sparse coding algorithm to boost the overall learning efficiency. To the best of our knowledge, this is the first attempt to apply SSL for contour detection. Extensive experiments on the BSDS500 segmentation dataset and the NYU Depth dataset demonstrate the superiority of the proposed method.", "Most existing weakly supervised localization (WSL) approaches learn detectors by finding positive bounding boxes based on features learned with image-level supervision. However, those features do not contain spatial location related information and usually provide poor-quality positive samples for training a detector. To overcome this issue, we propose a deep self-taught learning approach, which makes the detector learn the object-level features reliable for acquiring tight positive samples and afterwards re-train itself based on them. Consequently, the detector progressively improves its detection ability and localizes more informative positive samples. To implement such self-taught learning, we propose a seed sample acquisition method via image-to-object transferring and dense subgraph discovery to find reliable positive samples for initializing the detector. An online supportive sample harvesting scheme is further proposed to dynamically select the most confident tight positive samples and train the detector in a mutual boosting way. To prevent the detector from being trapped in poor optima due to overfitting, we propose a new relative improvement of predicted CNN scores for guiding the self-taught learning process. Extensive experiments on PASCAL 2007 and 2012 show that our approach outperforms the state-of-the-arts, strongly validating its effectiveness.", "Most existing weakly supervised localization (WSL) approaches learn detectors by finding positive bounding boxes based on features learned with image-level supervision. However, those features do not contain spatial location related information and usually provide poor-quality positive samples for training a detector. To overcome this issue, we propose a deep self-taught learning approach, which makes the detector learn the object-level features reliable for acquiring tight positive samples and afterwards re-train itself based on them. Consequently, the detector progressively improves its detection ability and localizes more informative positive samples. 
To implement such self-taught learning, we propose a seed sample acquisition method via image-to-object transferring and dense subgraph discovery to find reliable positive samples for initializing the detector. An online supportive sample harvesting scheme is further proposed to dynamically select the most confident tight positive samples and train the detector in a mutual boosting way. To prevent the detector from being trapped in poor optima due to overfitting, we propose a new relative improvement of predicted CNN scores for guiding the self-taught learning process. Extensive experiments on PASCAL 2007 and 2012 show that our approach outperforms the state-of-the-arts, strongly validating its effectiveness.", "Weakly-supervised object detection has attracted much attention lately, since it does not require bounding box annotations for training. Although significant progress has also been made, there is still a large gap in performance between weakly-supervised and fully-supervised object detection. Recently, some works use pseudo ground-truths which are generated by a weakly-supervised detector to train a supervised detector. Such approaches incline to find the most representative parts of objects, and only seek one ground-truth box per class even though many same-class instances exist. To overcome these issues, we propose a weakly-supervised to fully-supervised framework, where a weakly-supervised detector is implemented using multiple instance learning. Then, we propose a pseudo ground-truth excavation (PGE) algorithm to find the pseudo ground-truth of each instance in the image. Moreover, the pseudo ground-truth adaptation (PGA) algorithm is designed to further refine the pseudo ground-truths from PGE. Finally, we use these pseudo ground-truths to train a fully-supervised detector. Extensive experiments on the challenging PASCAL VOC 2007 and 2012 benchmarks strongly demonstrate the effectiveness of our framework. We obtain 52.4 and 47.8 mAP on VOC2007 and VOC2012 respectively, a significant improvement over previous state-of-the-art methods." ] }
1908.05156
2966926636
The spectacular success of Bitcoin and Blockchain Technology in recent years has provided enough evidence that a widespread adoption of a common cryptocurrency system is not merely a distant vision, but a scenario that might come true in the near future. However, the presence of Bitcoin's obvious shortcomings such as excessive electricity consumption, unsatisfying transaction throughput, and large validation time (latency) makes it clear that a new, more efficient system is needed. We propose a protocol in which a set of nodes maintains and updates a linear ordering of transactions that are being submitted by users. Virtually every cryptocurrency system has such a protocol at its core, and it is the efficiency of this protocol that determines the overall throughput and latency of the system. We develop our protocol on the grounds of the well-established field of Asynchronous Byzantine Fault Tolerant (ABFT) systems. This allows us to formally reason about correctness, efficiency, and security in the strictest possible model, and thus convincingly prove the overall robustness of our solution. Our protocol improves upon the state-of-the-art HoneyBadgerBFT by reducing the asymptotic latency while matching the optimal communication complexity. Furthermore, in contrast to the above, our protocol does not require a trusted dealer thanks to a novel implementation of a trustless ABFT Randomness Beacon.
Atomic Broadcast. For an excellent introduction to the field of Distributed Computing and an overview of Atomic Broadcast and Consensus protocols, we refer the reader to the book @cite_53 . The more recent work @cite_6 surveys existing consensus protocols in the context of cryptocurrency systems.
{ "cite_N": [ "@cite_53", "@cite_6" ], "mid": [ "2167100431", "2130264930", "2303620077", "2680467112" ], "abstract": [ "Atomic broadcast is an important communication primitive often used to implement state-machine replication. Despite the large number of atomic broadcast algorithms proposed in the literature, few papers have discussed how to turn these algorithms into efficient executable protocols. Our main contribution, Ring Paxos, is a protocol derived from Paxos. Ring Paxos inherits the reliability of Paxos and can be implemented very efficiently. We report a detailed performance analysis of Ring Paxos and compare it to other atomic broadcast protocols.", "Total order broadcast and multicast (also called atomic broadcast multicast) present an important problem in distributed systems, especially with respect to fault-tolerance. In short, the primitive ensures that messages sent to a set of processes are, in turn, delivered by all those processes in the same total order.", "Consensus mechanisms for ensuring consistency are some of the most expensive operations in managing large amounts of data. Often, there is a trade off that involves reducing the coordination overhead at the price of accepting possible data loss or inconsistencies. As the demand for more efficient data centers increases, it is important to provide better ways of ensuring consistency without affecting performance. In this paper we show that consensus (atomic broadcast) can be removed from the critical path of performance by moving it to hardware. As a proof of concept, we implement Zookeeper's atomic broadcast at the network level using an FPGA. Our design uses both TCP and an application specific network protocol. The design can be used to push more value into the network, e.g., by extending the functionality of middleboxes or adding inexpensive consensus to in-network processing nodes. To illustrate how this hardware consensus can be used in practical systems, we have combined it with a mainmemory key value store running on specialized microservers (built as well on FPGAs). This results in a distributed service similar to Zookeeper that exhibits high and stable performance. This work can be used as a blueprint for further specialized designs.", "Many distributed systems require coordination between the components involved. With the steady growth of such systems, the probability of failures increases, which necessitates scalable fault-tolerant agreement protocols. The most common practical agreement protocol, for such scenarios, is leader-based atomic broadcast. In this work, we propose AllConcur, a distributed system that provides agreement through a leaderless concurrent atomic broadcast algorithm, thus, not suffering from the bottleneck of a central coordinator. In AllConcur, all components exchange messages concurrently through a logical overlay network that employs early termination to minimize the agreement latency. Our implementation of AllConcur supports standard sockets-based TCP as well as high-performance InfiniBand Verbs communications. AllConcur can handle up to 135 million requests per second and achieves 17x higher throughput than today's standard leader-based protocols, such as Libpaxos. Thus, AllConcur is highly competitive with regard to existing solutions and, due to its decentralized approach, enables hitherto unattainable system designs in a variety of fields." ] }
1908.05156
2966926636
The spectacular success of Bitcoin and Blockchain Technology in recent years has provided enough evidence that a widespread adoption of a common cryptocurrency system is not merely a distant vision, but a scenario that might come true in the near future. However, the presence of Bitcoin's obvious shortcomings such as excessive electricity consumption, unsatisfying transaction throughput, and large validation time (latency) makes it clear that a new, more efficient system is needed. We propose a protocol in which a set of nodes maintains and updates a linear ordering of transactions that are being submitted by users. Virtually every cryptocurrency system has such a protocol at its core, and it is the efficiency of this protocol that determines the overall throughput and latency of the system. We develop our protocol on the grounds of the well-established field of Asynchronous Byzantine Fault Tolerant (ABFT) systems. This allows us to formally reason about correctness, efficiency, and security in the strictest possible model, and thus convincingly prove the overall robustness of our solution. Our protocol improves upon the state-of-the-art HoneyBadgerBFT by reducing the asymptotic latency while matching the optimal communication complexity. Furthermore, in contrast to the above, our protocol does not require a trusted dealer thanks to a novel implementation of a trustless ABFT Randomness Beacon.
In this paper we propose a different assumption on the transaction buffers that allows us to better demonstrate the capabilities of our protocol when it comes to latency. We assume that at every round the ratio between the lengths of the transaction buffers of any two honest nodes is at most a fixed constant. In this model, our protocol achieves @math latency, while a natural adaptation of HBBFT would achieve latency @math , thus again a factor- @math improvement. A qualitative improvement over HBBFT that we achieve in this paper is that we completely get rid of the trusted dealer assumption. We also note that the definitions of Atomic Broadcast differ slightly between this paper and @cite_5 : we achieve Censorship Resilience for a transaction assuming that it was input to even a single honest node, while in @cite_5 it has to be input to @math nodes.
{ "cite_N": [ "@cite_5" ], "mid": [ "2517585112", "2043868537", "2591036807", "2047805348" ], "abstract": [ "We study a noiseless broadcast link serving @math users whose requests arise from a library of @math files. Every user is equipped with a cache of size @math files each. It has been shown that by splitting all the files into packets and placing individual packets in a random independent manner across all the caches prior to any transmission, at most @math file transmissions are required for any set of demands from the library. The achievable delivery scheme involves linearly combining packets of different files following a greedy clique cover solution to the underlying index coding problem. This remarkable multiplicative gain of random placement and coded delivery has been established in the asymptotic regime when the number of packets per file @math scales to infinity. The asymptotic coding gain obtained is roughly @math . In this paper, we initiate the finite-length analysis of random caching schemes when the number of packets @math is a function of the system parameters @math , and @math . Specifically, we show that the existing random placement and clique cover delivery schemes that achieve optimality in the asymptotic regime can have at most a multiplicative gain of 2 even if the number of packets is exponential in the asymptotic gain @math . Furthermore, for any clique cover-based coded delivery and a large class of random placement schemes that include the existing ones, we show that the number of packets required to get a multiplicative gain of @math is at least @math . We design a new random placement and an efficient clique cover-based delivery scheme that achieves this lower bound approximately. We also provide tight concentration results that show that the average (over the random placement involved) number of transmissions concentrates very well requiring only a polynomial number of packets in the rest of the system parameters.", "Consider two parties Alice and Bob, who hold private inputs x and y, and wish to compute a function f(x, y) privately in the information theoretic sense; that is, each party should learn nothing beyond f(x, y). However, the communication channel available to them is noisy. This means that the channel can introduce errors in the transmission between the two parties. Moreover, the channel is adversarial in the sense that it knows the protocol that Alice and Bob are running, and maliciously introduces errors to disrupt the communication, subject to some bound on the total number of errors. A fundamental question in this setting is to design a protocol that remains private in the presence of large number of errors. If Alice and Bob are only interested in computing f(x, y) correctly, and not privately, then quite robust protocols are known that can tolerate a constant fraction of errors. However, none of these solutions is applicable in the setting of privacy, as they inherently leak information about the parties' inputs. This leads to the question whether we can simultaneously achieve privacy and error-resilience against a constant fraction of errors. We show that privacy and error-resilience are contradictory goals. In particular, we show that for every constant c > 0, there exists a function f which is privately computable in the error-less setting, but for which no private and correct protocol is resilient against a c-fraction of errors. 
The same impossibility holds also for sub-constant noise rate, e.g., when c is exponentially small (as a function of the input size).", "We consider distributed plurality consensus in a complete graph of size @math with @math initial opinions. We design an efficient and simple protocol in the asynchronous communication model that ensures that all nodes eventually agree on the initially most frequent opinion. In this model, each node is equipped with a random Poisson clock with parameter @math . Whenever a node's clock ticks, it samples some neighbors, uniformly at random and with replacement, and adjusts its opinion according to the sample. A prominent example is the so-called two-choices algorithm in the synchronous model, where in each round, every node chooses two neighbors uniformly at random, and if the two sampled opinions coincide, then that opinion is adopted. This protocol is very efficient and well-studied when @math . If @math for some small @math , we show that it converges to the initial plurality opinion within @math rounds, w.h.p., as long as the initial difference between the largest and second largest opinion is @math . On the other side, we show that there are cases in which @math rounds are needed, w.h.p. One can beat this lower bound in the synchronous model by combining the two-choices protocol with randomized broadcasting. Our main contribution is a non-trivial adaptation of this approach to the asynchronous model. If the support of the most frequent opinion is at least @math times that of the second-most frequent one and @math , then our protocol achieves the best possible run time of @math , w.h.p. We relax full synchronicity by allowing @math nodes to be poorly synchronized, and the well synchronized nodes are only required to be within a certain time difference from one another. We enforce this synchronicity by introducing a novel gadget into the protocol.", "We prove the first nontrivial (superlinear) lower bound in the noisy broadcast model, defined by El Gamal in [Open problems presented at the @math workshop on Specific Problems in Communication and Computation sponsored by Bell Communication Research, in Open Problems in Communication and Computation, T. M. Cover and B. Gopinath, eds., Springer-Verlag, New York, 1987, pp. 60-62]. In this model there are @math processors @math , each of which is initially given a private input bit @math . The goal is for @math to learn the value of @math , for some specified function @math , using a series of noisy broadcasts. At each step a designated processor broadcasts one bit to all of the other processors, and the bit received by each processor is flipped with fixed probability (independently for each recipient). In 1988, Gallager [IEEE Trans. Inform. Theory, 34 (1988), pp. 176-180] gave a noise-resistant protocol that allows @math to learn the entire input with constant probability in @math broadcasts. We prove that Gallager's protocol is optimal, up to a constant factor. Our lower bound follows by reduction from a lower bound for generalized noisy decision trees, a new model which may be of independent interest. For this new model we show a lower bound of @math on the depth of a tree that learns the entire input. 
While the above lower bound is for an @math -bit function, we also show an @math lower bound for the number of broadcasts required to compute certain explicit boolean-valued functions, when the correct output must be attained with probability at least @math for a constant parameter @math (this bound applies to all threshold functions as well as any other boolean-valued function with linear sensitivity). This bound also follows by reduction from a lower bound of @math on the depth of generalized noisy decision trees that compute the same functions with the same error. We also show a (nontrivial) @math lower bound on the depth of generalized noisy decision trees that compute such functions with small constant error. Finally, we show the first protocol in the noisy broadcast model that computes the Hamming weight of the input using a linear number of broadcasts." ] }
1908.04933
2968171141
Re-Pair is a grammar compression scheme with favorable compression rates. However, computing Re-Pair requires maintaining large frequency tables, which makes it hard to apply to large-scale data sets. As a solution to this problem, we present, given a text of length @math whose characters are drawn from an integer alphabet, an @math -time algorithm computing Re-Pair in @math bits of space including the text space, where @math is the number of terminals and non-terminals. The algorithm works in the restore model, supporting recovery of the original input within the time bounds of the Re-Pair computation using @math additional bits of working space. We also give variants of our solution that work in parallel or in the external memory model.
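To make the scheme concrete, the following is a minimal, naive sketch of the Re-Pair replacement loop: it repeatedly replaces a most frequent adjacent pair by a fresh non-terminal until every pair occurs at most once. It illustrates the grammar scheme only; it makes no attempt at the paper's small-space computation, and all names are illustrative.

```python
# Naive Re-Pair sketch: replace the most frequent adjacent pair with
# a fresh non-terminal until no pair repeats.
from collections import Counter

def repair(text):
    seq = list(text)          # current sequence of terminals/non-terminals
    rules = {}                # non-terminal -> the pair it replaces
    next_id = 0
    while True:
        # Count adjacent pairs (naively; overlapping pairs like "aaa"
        # are counted twice here, unlike a careful Re-Pair counter).
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:          # Re-Pair stops once every pair is unique
            break
        nt = ("N", next_id)   # fresh non-terminal symbol
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):   # left-to-right, non-overlapping replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

seq, rules = repair("abracadabra")
print(len(seq), len(rules))   # compressed sequence length and grammar size
```

The frequency table maintained by `pairs` is exactly the structure whose size the paper's algorithm works to bound.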
In-Place String Algorithms. For the LZ77 factorization, @cite_7 present an algorithm computing this factorization with O(n/d) words on top of the input space in O(dn) time for a variable d, achieving O(1) words with O(n^2) time. For the suffix sorting problem, an algorithm was given to compute the suffix array with o(n) bits on top of the output in O(n) time if each character of the alphabet is present in the text. This condition was subsequently improved to alphabet sizes of at most @math . Finally, it was shown how to transform a text into its Burrows-Wheeler transform using o(n) additional bits. This algorithm was later extended to simultaneously compute the LCP array with o(n) bits of additional working space.
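As a point of reference for the suffix sorting and BWT results above, the following sketch computes the BWT by sorting all rotations. It uses quadratic space for clarity and is purely illustrative of the output that the cited in-place algorithms produce with only o(n) extra bits.

```python
# Rotation-sorting BWT sketch (Theta(n^2) space; for illustration only).
def bwt(text, sentinel="$"):
    s = text + sentinel                            # unique smallest end marker
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)   # last column of the matrix

print(bwt("banana"))  # -> annb$aa
```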
{ "cite_N": [ "@cite_7" ], "mid": [ "2963440221", "2522621913", "2044014345", "2159647614" ], "abstract": [ "We show that the compressed suffix array and the compressed suffix tree of a string T can be built in O(n) deterministic time using O(n log σ) bits of space, where n is the string length and σ is the alphabet size. Previously described deterministic algorithms either run in time that depends on the alphabet size or need ω(n log σ) bits of working space. Our result has immediate applications to other problems, such as yielding the first deterministic linear-time LZ77 and LZ78 parsing algorithms that use O(n log σ) bits.", "The field of succinct data structures has flourished over the last 16 years. Starting from the compressed suffix array (CSA) by Grossi and Vitter (STOC 2000) and the FM-index by Ferragina and Manzini (FOCS 2000), a number of generalizations and applications of string indexes based on the Burrows-Wheeler transform (BWT) have been developed, all taking an amount of space that is close to the input size in bits. In many large-scale applications, the construction of the index and its usage need to be considered as one unit of computation. Efficient string indexing and analysis in small space lies also at the core of a number of primitives in the data-intensive field of high-throughput DNA sequencing. We report the following advances in string indexing and analysis. We show that the BWT of a string @math can be built in deterministic @math time using just @math bits of space, where @math . Within the same time and space budget, we can build an index based on the BWT that allows one to enumerate all the internal nodes of the suffix tree of @math . Many fundamental string analysis problems can be mapped to such enumeration, and can thus be solved in deterministic @math time and in @math bits of space from the input string. We also show how to build many of the existing indexes based on the BWT, such as the CSA, the compressed suffix tree (CST), and the bidirectional BWT index, in randomized @math time and in @math bits of space. The previously fastest construction algorithms for BWT, CSA and CST, which used @math bits of space, took @math time for the first two structures, and @math time for the third, where @math is any positive constant. Contrary to the state of the art, our bidirectional BWT index supports every operation in constant time per element in its output.", "We consider a generalization of the problem of supporting rank and select queries on binary strings. Given a string of length n from an alphabet of size σ, we give the first representation that supports rank and access operations in O(lg lg σ) time, and select in O(1) time while using the optimal n lg σ + o(n lg σ) bits. The best known previous structure for this problem required O(lg σ) time, for general values of σ. Our results immediately improve the search times of a variety of text indexing methods.", "We design two compressed data structures for the full-text indexing problem that support efficient substring searches using roughly the space required for storing the text in compressed form.Our first compressed data structure retrieves the occ occurrences of a pattern P[1,p] within a text T[1,n] in O(p p occ log1pe n) time for any chosen e, 0 k (T) p o(n) bits of storage, where H k (T) is the kth order empirical entropy of T. The space usage is Θ(n) bits in the worst case and o(n) bits for compressible texts. 
This data structure exploits the relationship between suffix arrays and the Burrows--Wheeler Transform, and can be regarded as a compressed suffix array. Our second compressed data structure achieves O(p + occ) query time using O(nH_k(T) log^e n) + o(n) bits of storage for any chosen e, 0 < e < 1. Therefore, it provides optimal output-sensitive query time using o(n log n) bits in the worst case. This second data structure builds upon the first one and exploits the interplay between two compressors: the Burrows--Wheeler Transform and the LZ78 algorithm." ] }
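The following sketch illustrates backward search over the BWT, the counting mechanism behind the compressed indexes in the abstracts above that report the occ occurrences of a pattern. Plain list scans stand in for the succinct rank structures of the cited papers, so the query time here is far from the stated bounds; the sketch only shows the mechanism.

```python
# Backward search over the BWT (the counting core of an FM-index).
def bwt(text, sentinel="$"):
    s = text + sentinel
    return "".join(r[-1] for r in sorted(s[i:] + s[:i] for i in range(len(s))))

def count_occurrences(bw, pattern):
    # C[c] = number of symbols in the BWT strictly smaller than c
    C, total = {}, 0
    for c in sorted(set(bw)):
        C[c] = total
        total += bw.count(c)

    def rank(c, i):              # occurrences of c in bw[:i];
        return bw[:i].count(c)   # a succinct rank structure would make this fast

    lo, hi = 0, len(bw)          # current suffix-array interval [lo, hi)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + rank(c, lo)
        hi = C[c] + rank(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

print(count_occurrences(bwt("banana"), "ana"))  # -> 2
```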