Dataset schema (one record per related-work paragraph):
- aid: string, 9-15 chars (arXiv id of the source paper)
- mid: string, 7-10 chars (MAG id of the source paper)
- abstract: string, 78-2.56k chars (abstract of the source paper)
- related_work: string, 92-1.77k chars (one paragraph of its related-work section)
- ref_abstract: dict with index-aligned lists cite_N, mid, and abstract describing the cited papers
aid: 1908.08705, mid: 2969664989
In this paper we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, in different shooting conditions. To create an attack, we print a rectangular paper sticker on a common color printer and put it on a hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image, which imitates the sticker's location on the hat. Such an approach confuses the state-of-the-art public Face ID model (LResNet100E-IR, ArcFace@ms1m-refine-v2) and is transferable to other Face ID models.
Another interesting approach @cite_14 uses the concept of nested adversarial examples, in which separate, non-overlapping adversarial perturbations are generated for close and far viewing distances. This attack is designed for Faster R-CNN and YOLOv3 @cite_38 .
{ "cite_N": [ "@cite_38", "@cite_14" ], "mid": [ "2796347433", "2947773286" ], "abstract": [ "We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL", "In this paper, we design the first practical adversarial attacks against object detectors in realistic situations: the printed adversarial examples (AEs) placed with different angles and distances, could hide or create an object to deceive object detectors. To improve the robustness of AEs, we propose the novel nested AEs and introduce the image transformation techniques to simulate the variance factors such as distances, angles, illuminations, in the physical world. To let the AEs converge with so many constraints, we also design batch-variation momentum training. Evaluation results show that our adversarial examples work in real environments, capable of attacking state-of-the-art real-time object detectors (e.g., YOLO V3 and faster-RCNN) with the success rate up to 92.4 (with the distance ranging from 1m to 25m and wide angles from -60°to 60°). Our AEs are also shown to be highly transferable. Four state-of-the-art black-box models could be attacked by our AEs with high success rates." ] }
A few works are devoted to more complex approaches. One such work @cite_29 proposes using EOT, NPS, and TV losses to fool a YOLOv2-based person detector. Another @cite_2 fools a Face ID system using adversarial generative nets (a variant of GANs @cite_18 ), in which the generator produces an eyeglass-frame perturbation.
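For reference, TV (total variation) and NPS (non-printability score) are standard, well-defined regularizers in physical patch attacks, while EOT averages the attack loss over random physical transformations. Below is a minimal PyTorch sketch of the two regularizers only, assuming a patch tensor of shape (3, H, W) and a hypothetical palette of printer-reproducible colors; this is an illustration, not the actual losses used by @cite_29 :

    import torch

    def total_variation(patch):
        # Penalizes abrupt pixel-to-pixel changes so the printed
        # patch stays smooth; patch has shape (3, H, W).
        dh = (patch[:, 1:, :] - patch[:, :-1, :]).abs().mean()
        dw = (patch[:, :, 1:] - patch[:, :, :-1]).abs().mean()
        return dh + dw

    def non_printability_score(patch, printable_colors):
        # For every pixel, distance to the closest color a printer
        # can reproduce; printable_colors has shape (K, 3).
        pixels = patch.permute(1, 2, 0).reshape(-1, 1, 3)    # (HW, 1, 3)
        palette = printable_colors.reshape(1, -1, 3)         # (1, K, 3)
        dists = ((pixels - palette) ** 2).sum(-1).sqrt()     # (HW, K)
        return dists.min(dim=1).values.mean()

    patch = torch.rand(3, 64, 64, requires_grad=True)
    palette = torch.rand(30, 3)  # hypothetical printer palette
    loss = total_variation(patch) + non_printability_score(patch, palette)
    loss.backward()  # gradients flow back into the patch pixels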
{ "cite_N": [ "@cite_29", "@cite_18", "@cite_2" ], "mid": [ "2938470389", "1710476689", "2932026309" ], "abstract": [ "Adversarial attacks on machine learning models have seen increasing interest in the past years. By making only subtle changes to the input of a convolutional neural network, the output of the network can be swayed to output a completely different result. The first attacks did this by changing pixel values of an input image slightly to fool a classifier to output the wrong class. Other approaches have tried to learn \"patches\" that can be applied to an object to fool detectors and classifiers. Some of these approaches have also shown that these attacks are feasible in the real-world, i.e. by modifying an object and filming it with a video camera. However, all of these approaches target classes that contain almost no intra-class variety (e.g. stop signs). The known structure of the object is then used to generate an adversarial patch on top of it. In this paper, we present an approach to generate adversarial patches to targets with lots of intra-class variety, namely persons. The goal is to generate a patch that is able successfully hide a person from a person detector. An attack that could for instance be used maliciously to circumvent surveillance systems, intruders can sneak around undetected by holding a small cardboard plate in front of their body aimed towards the surveillance camera. From our results we can see that our system is able significantly lower the accuracy of a person detector. Our approach also functions well in real-life scenarios where the patch is filmed by a camera. To the best of our knowledge we are the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.", "For many AI projects, deep learning techniques are increasingly being used as the building blocks for innovative solutions ranging from image classification to object detection, image segmentation, image similarity, and text analytics (e.g., sentiment analysis, key phrase extraction). GANs, first introduced by (2014), are emerging as a powerful new approach toward teaching computers how to do complex tasks through a generative process. As noted by Yann LeCun (at http: bit.ly LeCunGANs ), GANs are truly the “coolest idea in machine learning in the last 20 years.”", "Images perturbed subtly to be misclassified by neural networks, called adversarial examples, have emerged as a technically deep challenge and an important concern for several application domains. Most research on adversarial examples takes as its only constraint that the perturbed images are similar to the originals. However, real-world application of these ideas often requires the examples to satisfy additional objectives, which are typically enforced through custom modifications of the perturbation process. In this article, we propose adversarial generative nets (AGNs), a general methodology to train a generator neural network to emit adversarial examples satisfying desired objectives. We demonstrate the ability of AGNs to accommodate a wide range of objectives, including imprecise ones difficult to model, in two application domains. In particular, we demonstrate physical adversarial examples—eyeglass frames designed to fool face recognition—with better robustness, inconspicuousness, and scalability than previous approaches, as well as a new attack to fool a handwritten-digit classifier." ] }
aid: 1908.08654, mid: 2969398112
For decades, the join operator over fast data streams has drawn much attention from the database community, due to its wide spectrum of real-world applications, such as online clustering, intrusion detection, sensor data monitoring, and so on. Existing works usually assume that the underlying streams to be joined are complete (without any missing values). However, this assumption may not always hold, since objects from streams may contain missing attributes, due to various reasons such as packet losses, network congestion/failure, and so on. In this paper, we formalize an important problem, namely join over incomplete data streams (Join-iDS), which retrieves joining object pairs from incomplete data streams with high confidence. We tackle the Join-iDS problem in the style of "data imputation and query processing at the same time". To enable this style, we design an effective and efficient cost-model-based imputation method via differential dependency (DD), devise effective pruning strategies to reduce the Join-iDS search space, and propose efficient algorithms via our proposed cost-model-based data synopsis indexes. Extensive experiments have been conducted to verify the efficiency and effectiveness of our proposed Join-iDS approach on both real and synthetic data sets.
Stream Processing. There are many important problems in stream data processing, including event detection @cite_35 , outlier detection @cite_16 , top- @math query @cite_37 , join @cite_38 @cite_22 , skyline query @cite_12 , nearest neighbor query @cite_33 , aggregate query @cite_17 , and so on. These works usually assume that stream data are either certain or uncertain. To the best of our knowledge, their techniques cannot be directly applied to our Join-iDS problem under the semantics of incomplete data streams.
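To make the stream-join setting concrete, here is a minimal, generic sketch of an equality join over two streams with a time-based sliding window; the tuple format and window length are assumptions for illustration, not the algorithms of the cited works:

    from collections import deque

    def sliding_window_join(stream, window=10):
        # stream yields (timestamp, side, key, value) tuples, with
        # side in {"R", "S"}; emits joined triples whose keys match
        # and whose timestamps lie within `window` of each other.
        buffers = {"R": deque(), "S": deque()}
        for ts, side, key, value in stream:
            other = "S" if side == "R" else "R"
            # Expire tuples that have slid out of the window.
            for buf in buffers.values():
                while buf and buf[0][0] < ts - window:
                    buf.popleft()
            # Probe the opposite buffer for matching keys.
            for ots, okey, ovalue in buffers[other]:
                if okey == key:
                    yield (key, value, ovalue)
            buffers[side].append((ts, key, value))

    pairs = sliding_window_join([
        (1, "R", "a", 1), (3, "S", "a", 2), (20, "S", "a", 3),
    ])
    print(list(pairs))  # [('a', 2, 1)] -- the ts=20 tuple falls outside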
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_38", "@cite_22", "@cite_33", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "2613057181", "2106742801", "2118924829", "2138646997", "2049040861", "2794689483", "2109821166", "2150397765" ], "abstract": [ "Event processing applications from financial fraud detection to health care analytics continuously execute event queries with Kleene closure to extract event sequences of arbitrary, statically unknown length, called Complete Event Trends (CETs). Due to common event sub-sequences in CETs, either the responsiveness is delayed by repeated computations or an exorbitant amount of memory is required to store partial results. To overcome these limitations, we define the CET graph to compactly encode all CETs matched by a query. Based on the graph, we define the spectrum of CET detection algorithms from CPU-optimal to memory-optimal. We find the middle ground between these two extremes by partitioning the graph into time-centric graphlets and caching partial CETs per graphlet to enable effective reuse of these intermediate results. We reveal cost monotonicity properties of the search space of graph partitioning plans. Our CET optimizer leverages these properties to prune significant portions of the search to produce a partitioning plan with minimal CPU costs yet within the given memory limit. Our experimental study demonstrates that our CET detection solution achieves up to 42--fold speed-up even under rigid memory constraints compared to the state-of-the-art techniques in diverse scenarios.", "Given a dataset P and a preference function f, a top-k query retrieves the k tuples in P with the highest scores according to f. Even though the problem is well-studied in conventional databases, the existing methods are inapplicable to highly dynamic environments involving numerous long-running queries. This paper studies continuous monitoring of top-k queries over a fixed-size window W of the most recent data. The window size can be expressed either in terms of the number of active tuples or time units. We propose a general methodology for top-k monitoring that restricts processing to the sub-domains of the workspace that influence the result of some query. To cope with high stream rates and provide fast answers in an on-line fashion, the data in W reside in main memory. The valid records are indexed by a grid structure, which also maintains book-keeping information. We present two processing techniques: the first one computes the new answer of a query whenever some of the current top-k points expire; the second one partially pre-computes the future changes in the result, achieving better running time at the expense of slightly higher space requirements. We analyze the performance of both algorithms and evaluate their efficiency through extensive experiments. Finally, we extend the proposed framework to other query types and a different data stream model.", "We consider the problem of approximating sliding window joins over data streams in a data stream processing system with limited resources. In our model, we deal with resource constraints by shedding load in the form of dropping tuples from the data streams. We first discuss alternate architectural models for data stream join processing, and we survey suitable measures for the quality of an approximation of a set-valued query result. We then consider the number of generated result tuples as the quality measure, and we give optimal offline and fast online algorithms for it. 
In a thorough experimental study with synthetic and real data we show the efficacy of our solutions. For applications with demand for exact results we introduce a new Archive-metric which captures the amount of work needed to complete the join in case the streams are archived for later processing.", "Similarity join (SJ) in time-series databases has a wide spectrum of applications such as data cleaning and mining. Specifically, an SJ query retrieves all pairs of (sub)sequences from two time-series databases that ε-match with each other, where ε is the matching threshold. Previous work on this problem usually considers static time-series databases, where queries are performed either on disk-based multidimensional indexes built on static data or by nested loop join (NLJ) without indexes. SJ over multiple stream time series, which continuously outputs pairs of similar subsequences from stream time series, strongly requires low memory consumption, low processing cost, and query procedures that are themselves adaptive to time-varying stream data. These requirements invalidate the existing approaches in static databases. In this paper, we propose an efficient and effective approach to perform SJ among multiple stream time series incrementally. In particular, we present a novel method, Adaptive Radius-based Search (ARES), which can answer the similarity search without false dismissals and is seamlessly integrated into SJ processing. Most importantly, we provide a formal cost model for ARES, based on which ARES can be adaptive to data characteristics, achieving the minimum number of refined candidate pairs, and thus, suitable for stream processing. Furthermore, in light of the cost model, we utilize space-efficient synopses that are constructed for stream time series to further reduce the candidate set. Extensive experiments demonstrate the efficiency and effectiveness of our proposed approach.", "Efficiently processing continuous k-nearest neighbor queries on data streams is important in many application domains, e.g., for network intrusion detection. Usually not all valid data objects from the stream can be kept in main memory. Therefore, most existing solutions are approximative. In this paper, we propose an efficient method for exact k-NN monitoring. Our method is based on three ideas, (1) selecting exactly those objects from the stream which are able to become the nearest neighbor of one or more continuous queries and storing them in a skyline data structure, (2) delaying to process those objects which are not immediately nearest neighbors of any query, and (3) indexing the queries rather than the streaming objects. In an extensive experimental evaluation we demonstrate that our method is applicable on high throughput data streams requiring only very limited storage.", "Data streams are sequences of data points that have the properties of transiency, infiniteness, concept drift, uncertainty, multi-dimensionality, cross-correlation among different streams, asynchronous arrival, and heterogeneity. In this paper we propose a new outlier detection technique for multiple multi-dimensional data streams, called Wadjet, that addresses all the issues of outlier detection in multiple data streams. Wadjet exploits the temporal correlations to identify outliers in each individual data stream, and after this, it exploits the cross-correlations between data streams to identify points that do not conform with these cross-correlations.
Experiments comparing Wadjet against existing techniques on real and synthetic datasets show that Wadjet achieves 18.8X higher precision, and competitive execution time and recall.", "The skyline of a multidimensional data set contains the \"best\" tuples according to any preference function that is monotonic on each dimension. Although skyline computation has received considerable attention in conventional databases, the existing algorithms are inapplicable to stream applications because 1) they assume static data that are stored in the disk (rather than continuously arriving expiring), 2) they focus on \"one-time\" execution that returns a single skyline (in contrast to constantly tracking skyline changes), and 3) they aim at reducing the I O overhead (as opposed to minimizing the CPU-cost and main-memory consumption). This paper studies skyline computation in stream environments, where query processing takes into account only a \"sliding window\" covering the most recent tuples. We propose algorithms that continuously monitor the incoming data and maintain the skyline incrementally. Our techniques utilize several interesting properties of stream skylines to improve space time efficiency by expunging data from the system as early as possible (i.e., before their expiration). Furthermore, we analyze the asymptotical performance of the proposed solutions, and evaluate their efficiency with extensive experiments.", "Data stream management systems may be subject to higher input rates than their resources can handle. When overloaded, the system must shed load in order to maintain low-latency query results. In this paper, we describe a load shedding technique for queries consisting of one or more aggregate operators with sliding windows. We introduce a new type of drop operator, called a \"Window Drop\". This operator is aware of the window properties (i.e., window size and window slide) of its downstream aggregate operators in the query plan. Accordingly, it logically divides the input stream into windows and probabilistically decides which windows to drop. This decision is further encoded into tuples by marking the ones that are disallowed from starting new windows. Unlike earlier approaches, our approach preserves integrity of windows throughout a query plan, and always delivers subsets of original query answers with minimal degradation in result quality." ] }
Differential Dependency. Differential dependency (DD) @cite_18 is a valuable tool for data imputation @cite_2 , data cleaning @cite_14 , data repairing @cite_39 , and so on. @cite_2 used DDs to fill the missing attributes of incomplete objects in a static data set via detected neighbors satisfying the distance constraints on determinant attributes. @cite_39 @cite_23 also explored repairing the labels of graph nodes. @cite_14 cleaned databases by removing inconsistent records that violate DDs. Unlike these works, which target static databases, we apply DD-based imputation to the streaming environment, which makes our Join-iDS problem more challenging.
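Informally, a DD states that if two tuples are close (within given distance thresholds) on the determinant attributes, they should also be close on the dependent attribute. A minimal sketch of the resulting imputation idea — fill a missing dependent value from complete neighbors that satisfy the distance constraints on the determinant attributes — with all records, attribute names, and thresholds hypothetical:

    def impute_by_dd(incomplete, repository, det_attrs, dep_attr, thresholds):
        # Candidate donors: complete records whose distance to the
        # incomplete record on every determinant attribute stays
        # within the DD's threshold for that attribute.
        donors = [
            r for r in repository
            if all(abs(r[a] - incomplete[a]) <= thresholds[a] for a in det_attrs)
        ]
        if not donors:
            return None  # imputation failure on sparse data
        # Impute the dependent attribute, e.g. by averaging donors.
        return sum(r[dep_attr] for r in donors) / len(donors)

    repo = [
        {"temp": 20.1, "humidity": 0.52, "pm25": 30.0},
        {"temp": 20.4, "humidity": 0.50, "pm25": 32.0},
        {"temp": 35.0, "humidity": 0.20, "pm25": 80.0},
    ]
    obj = {"temp": 20.2, "humidity": 0.51}  # pm25 missing
    print(impute_by_dd(obj, repo, ["temp", "humidity"], "pm25",
                       {"temp": 1.0, "humidity": 0.05}))  # -> 31.0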
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_39", "@cite_23", "@cite_2" ], "mid": [ "2108132403", "2295468252", "2183774130", "2616903276", "2282784388" ], "abstract": [ "The importance of difference semantics (e.g., “similar” or “dissimilar”) has been recently recognized for declaring dependencies among various types of data, such as numerical values or text values. We propose a novel form of Differential Dependencies (dds), which specifies constraints on difference, called differential functions, instead of identification functions in traditional dependency notations like functional dependencies. Informally, a differential dependency states that if two tuples have distances on attributes X agreeing with a certain differential function, then their distances on attributes Y should also agree with the corresponding differential function on Y. For example, [date(l 7)]→[price( In this article, we first address several theoretical issues of differential dependencies, including formal definitions of dds and differential keys, subsumption order relation of differential functions, implication of dds, closure of a differential function, a sound and complete inference system, and minimal cover for dds. Then, we investigate a practical problem, that is, how to discover dds and differential keys from a given dataset. Due to the intrinsic hardness, we develop several pruning methods to improve the discovery efficiency in practice. Finally, through an extensive experimental evaluation on real datasets, we demonstrate the discovery performance and the effectiveness of dds in several real applications.", "Quantitative data cleaning relies on the use of statistical methods to identify and repair data quality problems while logical data cleaning tackles the same problems using various forms of logical reasoning over declarative dependencies. Each of these approaches has its strengths: the logical approach is able to capture subtle data quality problems using sophisticated dependencies, while the quantitative approach excels at ensuring that the repaired data has desired statistical properties. We propose a novel framework within which these two approaches can be used synergistically to combine their respective strengths. We instantiate our framework using (i) metric functional dependencies, a type of dependency that generalizes functional dependencies (FDs) to identify inconsistencies in domains where only large differences in metric data are considered to be a data quality problem, and (ii) repairs that modify the inconsistent data so as to minimize statistical distortion, measured using the Earth Mover's Distance. We show that the problem of computing a statistical distortion minimal repair is NP-hard. Given this complexity, we present an efficient algorithm for finding a minimal repair that has a small statistical distortion using EMD computation over semantically related attributes. To identify semantically related attributes, we present a sound and complete axiomatization and an efficient algorithm for testing implication of metric FDs. While the complexity of inference for some other FD extensions is co-NP complete, we show that the inference problem for metric FDs remains linear, as in traditional FDs. We prove that every instance that can be generated by our repair algorithm is set-minimal (with no unnecessary changes). 
Our experimental evaluation demonstrates that our techniques obtain a considerably lower statistical distortion than existing repair techniques, while achieving similar levels of efficiency.", "A broad class of data, ranging from similarity networks, workflow networks to protein networks, can be modeled as graphs with data values as vertex labels. The vertex labels (data values) are often dirty for various reasons such as typos or erroneous reporting of results in scientific experiments. Neighborhood constraints, specifying label pairs that are allowed to appear on adjacent vertexes in the graph, are employed to detect and repair erroneous vertex labels. In this paper, we study the problem of repairing vertex labels to make graphs satisfy neighborhood constraints. Unfortunately, the relabeling problem is proved to be NP hard, which motivates us to devise approximation methods for repairing, and identify interesting special cases (star and clique constraints) that can be efficiently solved. We propose several approximate repairing algorithms including greedy heuristics, contraction method and a hybrid approach. The performances of algorithms are also analyzed for the special case. Our extensive experimental evaluation, on both synthetic and real data, demonstrates the effectiveness of eliminating frauds in several types of application networks. Remarkably, the hybrid method performs well in practice, i.e., guarantees termination, while achieving high effectiveness at the same time.", "A broad class of data, ranging from similarity networks, workflow networks to protein networks, can be modeled as graphs with data values as vertex labels. Both vertex labels and neighbors could be dirty for various reasons such as typos or erroneous reporting of results in scientific experiments. Neighborhood constraints, specifying label pairs that are allowed to appear on adjacent vertices in the graph, are employed to detect and repair erroneous vertex labels and neighbors. In this paper, we study the problem of repairing vertex labels and neighbors to make graphs satisfy neighborhood constraints. Unfortunately, the problem is generally hard, which motivates us to devise approximation methods for repairing and identify interesting special cases (star and clique constraints) that can be efficiently solved. First, we propose several label repairing approximation algorithms including greedy heuristics, contraction method and an approach combining both. The performances of algorithms are also analyzed for the special case. Moreover, we devise a cubic-time constant-factor graph repairing algorithm with both label and neighbor repairs (given degree-bounded instance graphs). Our extensive experimental evaluation on real data demonstrates the effectiveness of eliminating frauds in several types of application networks.", "Incomplete information often occur along with many database applications, e.g., in data integration, data cleaning or data exchange. The idea of data imputation is to fill the missing data with the values of its neighbors who share the same information. Such neighbors could either be identified certainly by editing rules or statistically by relational dependency networks. Unfortunately, owing to data sparsity, the number of neighbors (identified w.r.t. value equality) is rather limited, especially in the presence of data values with variances. In this paper, we argue to extensively enrich similarity neighbors by similarity rules with tolerance to small variations. 
More fillings can thus be acquired that the aforesaid equality neighbors fail to reveal. To fill the missing values more, we study the problem of maximizing the missing data imputation. Our major contributions include (1) the np-hardness analysis on solving and approximating the problem, (2) exact algorithms for tackling the problem, and (3) efficient approximation with performance guarantees. Experiments on real and synthetic data sets demonstrate that the filling accuracy can be improved." ] }
Join Over Certain/Uncertain Databases. The join operator was traditionally studied in relational databases @cite_15 and over data streams @cite_38 . The join predicate may follow equality semantics between attributes of tuples or data objects. According to the predicate constraints, joins over uncertain databases @cite_11 @cite_22 can be classified into two categories, probabilistic join query (PJQ) and probabilistic similarity join (PSJ), which return pairs of joining objects that are identical or similar (e.g., within @math -distance of each other), respectively, with high confidence.
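As a toy illustration of the PSJ semantics (not the algorithms of @cite_11 @cite_22 ), suppose each uncertain object is a discrete set of weighted samples; a pair joins when the probability mass of sample pairs within ε of each other exceeds a confidence threshold alpha:

    import itertools

    def psj(objects_r, objects_s, eps, alpha):
        # Each uncertain object is a list of (sample_value, probability)
        # pairs summing to 1. A pair (r, s) qualifies when the total
        # probability mass of sample pairs within eps exceeds alpha.
        results = []
        for i, r in enumerate(objects_r):
            for j, s in enumerate(objects_s):
                conf = sum(pr * ps
                           for (vr, pr), (vs, ps) in itertools.product(r, s)
                           if abs(vr - vs) <= eps)
                if conf >= alpha:
                    results.append((i, j, conf))
        return results

    R = [[(1.0, 0.5), (1.2, 0.5)]]
    S = [[(1.1, 0.9), (9.0, 0.1)]]
    print(psj(R, S, eps=0.2, alpha=0.8))  # pair (0, 0) joins with confidence 0.9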
{ "cite_N": [ "@cite_38", "@cite_15", "@cite_22", "@cite_11" ], "mid": [ "2118924829", "2015372451", "2138646997", "2129553531" ], "abstract": [ "We consider the problem of approximating sliding window joins over data streams in a data stream processing system with limited resources. In our model, we deal with resource constraints by shedding load in the form of dropping tuples from the data streams. We first discuss alternate architectural models for data stream join processing, and we survey suitable measures for the quality of an approximation of a set-valued query result. We then consider the number of generated result tuples as the quality measure, and we give optimal offline and fast online algorithms for it. In a thorough experimental study with synthetic and real data we show the efficacy of our solutions. For applications with demand for exact results we introduce a new Archive-metric which captures the amount of work needed to complete the join in case the streams are archived for later processing.", "The join operation is one of the fundamental relational database query operations. It facilitates the retrieval of information from two different relations based on a Cartesian product of the two relations. The join is one of the most diffidult operations to implement efficiently, as no predefined links between relations are required to exist (as they are with network and hierarchical systems). The join is the only relational algebra operation that allows the combining of related tuples from relations on different attribute schemes. Since it is executed frequently and is expensive, much research effort has been applied to the optimization of join processing. In this paper, the different kinds of joins and the various implementation techniques are surveyed. These different methods are classified based on how they partition tuples from different relations. Some require that all tuples from one be compared to all tuples from another; other algorithms only compare some tuples from each. In addition, some techniques perform an explicit partitioning, whereas others are implicit.", "Similarity join (SJ) in time-series databases has a wide spectrum of applications such as data cleaning and mining. Specifically, an SJ query retrieves all pairs of (sub)sequences from two time-series databases that epsiv-match with each other, where epsiv is the matching threshold. Previous work on this problem usually considers static time-series databases, where queries are performed either on disk-based multidimensional indexes built on static data or by nested loop join (NLJ) without indexes. SJ over multiple stream time series, which continuously outputs pairs of similar subsequences from stream time series, strongly requires low memory consumption, low processing cost, and query procedures that are themselves adaptive to time-varying stream data. These requirements invalidate the existing approaches in static databases. In this paper, we propose an efficient and effective approach to perform SJ among multiple stream time series incrementally. In particular, we present a novel method, Adaptive Radius-based Search (ARES), which can answer the similarity search without false dismissals and is seamlessly integrated into SJ processing. Most importantly, we provide a formal cost model for ARES, based on which ARES can be adaptive to data characteristics, achieving the minimum number of refined candidate pairs, and thus, suitable for stream processing. 
Furthermore, in light of the cost model, we utilize space-efficient synopses that are constructed for stream time series to further reduce the candidate set. Extensive experiments demonstrate the efficiency and effectiveness of our proposed approach.", "Similarity join processing in the streaming environment has many practical applications such as sensor networks, object tracking and monitoring, and so on. Previous works usually assume that stream processing is conducted over precise data. In this paper, we study an important problem of similarity join processing on stream data that inherently contain uncertainty (or called uncertain data streams), where the incoming data at each time stamp are uncertain and imprecise. Specifically, we formalize this problem as join on uncertain data streams (USJ), which can guarantee the accuracy of USJ answers over uncertain data. To tackle the challenges with respect to efficiency and effectiveness such as limited memory and small response time, we propose effective pruning methods on both object and sample levels to filter out false alarms. We integrate the proposed pruning methods into an efficient query procedure that can incrementally maintain the USJ answers. Most importantly, we further design a novel strategy, namely, adaptive superset prejoin (ASP), to maintain a superset of USJ candidate pairs. ASP is in light of our proposed formal cost model such that the average USJ processing cost is minimized. We have conducted extensive experiments to demonstrate the efficiency and effectiveness of our proposed approaches." ] }
PSJ has received much attention in many domains. @cite_19 applied PSJ to integrate heterogeneous RDF graphs by introducing an equivalence semantics for RDF graphs. @cite_31 proposed an effective filter-based method for high-dimensional vector similarity joins. @cite_30 explored how to leverage relations between sets to process exact set similarity joins. @cite_5 proposed a prefix-tree index for joining multi-attribute data. @cite_40 applied PSJ to trajectory similarity joins in spatial networks via search-space pruning techniques. @cite_9 proposed a join approach for massive high-dimensional data based on a particular ordering of data points via a grid. Different from @cite_9 , which uses grid cells to sort data points, in our work we design a grid variant, the @math -grid, which stores additional information (e.g., queues of imputed objects) specific to incomplete data streams and supports dynamic maintenance of candidate join answers.
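The grid intuition can be sketched generically: hash points into cells of side ε so that any pair within ε must lie in the same or an adjacent cell, and only those cells are probed. The 2-D, in-memory version below is an illustrative assumption, not the @math -grid itself:

    from collections import defaultdict
    from itertools import product

    def grid_join(points_r, points_s, eps):
        # Bucket S by cell; any r within eps of some s must find it
        # in r's own cell or one of the 8 neighboring cells.
        cells = defaultdict(list)
        for s in points_s:
            cells[(int(s[0] // eps), int(s[1] // eps))].append(s)
        pairs = []
        for r in points_r:
            cx, cy = int(r[0] // eps), int(r[1] // eps)
            for dx, dy in product((-1, 0, 1), repeat=2):
                # Missing neighbor cells simply yield empty lists.
                for s in cells[(cx + dx, cy + dy)]:
                    if (r[0] - s[0]) ** 2 + (r[1] - s[1]) ** 2 <= eps ** 2:
                        pairs.append((r, s))
        return pairs

    print(grid_join([(0.5, 0.5)], [(0.6, 0.6), (3.0, 3.0)], eps=0.5))
    # -> [((0.5, 0.5), (0.6, 0.6))]; the far point is never probed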
{ "cite_N": [ "@cite_30", "@cite_9", "@cite_19", "@cite_40", "@cite_5", "@cite_31" ], "mid": [ "2619410666", "", "2741223254", "2751694342", "1990947147", "" ], "abstract": [ "Exact set similarity join, which finds all the similar set pairs from two collections of sets, is a fundamental problem with a wide range of applications. The existing solutions for set similarity join follow a filtering-verification framework, which generates a list of candidate pairs through scanning indexes in the filtering phase, and reports those similar pairs in the verification phase. Though much research has been conducted on this problem, set relations, which we find out is quite effective on improving the algorithm efficiency through computational cost sharing, have never been studied. Therefore, in this paper, instead of considering each set individually, we explore the set relations in different levels to reduce the overall computational costs. First, it has been shown that most of the computational time is spent on the filtering phase, which can be quadratic to the number of sets in the worst case for the existing solutions. Thus we explore index-level set relations to reduce the filtering cost to be linear to the size of the input while keeping the same filtering power. We achieve this by grouping related sets into blocks in the index and skipping useless index probes in joins. Second, we explore answer-level set relations to further improve the algorithm based on the intuition that if two sets are similar, their answers may have a large overlap. We derive an algorithm which incrementally generates the answer of one set from an already computed answer of another similar set rather than compute the answer from scratch to reduce the computational cost. Finally, we conduct extensive performance studies using 21 real datasets with various data properties from a wide range of domains. The experimental results demonstrate that our algorithm outperforms all the existing algorithms across all datasets and can achieve more than an order of magnitude speedup against the state- of-the-art algorithms.", "", "Semi-structured data models like the Resource Description Framework (RDF), naturally allow for modeling the same real-world entity in various ways. For example, different RDF vocabularies enable the definition of various RDF graphs representing the same drug in Bio2RDF or Drugbank. Albeit semantically equivalent, these RDF graphs may be syntactically different, i.e., they have distinctive graph structure or entity identifiers and properties. Existing data-driven integration approaches only consider syntactic matching criteria or similarity measures to solve the problem of integrating RDF graphs. However, syntactic-based approaches are unable to semantically integrate heterogeneous RDF graphs. We devise SJoin, a semantic similarity join operator to solve the problem of matching semantically equivalent RDF graphs, i.e., syntactically different graphs corresponding to the same real-world entity. Two physical implementations are proposed for SJoin which follow blocking or non-blocking data processing strategies, i.e., RDF graphs can be merged in a batch or incrementally. We empirically evaluate the effectiveness and efficiency of the SJoin physical operators with respect to baseline similarity join algorithms. 
Experimental results suggest that SJoin outperforms baseline approaches, i.e., non-blocking SJoin incrementally produces results faster, while the blocking SJoin accurately matches all semantically equivalent RDF graphs.", "The matching of similar pairs of objects, called similarity join, is fundamental functionality in data management. We consider the case of trajectory similarity join (TS-Join), where the objects are trajectories of vehicles moving in road networks. Thus, given two sets of trajectories and a threshold θ, the TS-Join returns all pairs of trajectories from the two sets with similarity above θ. This join targets applications such as trajectory near-duplicate detection, data cleaning, ridesharing recommendation, and traffic congestion prediction. With these applications in mind, we provide a purposeful definition of similarity. To enable efficient TS-Join processing on large sets of trajectories, we develop search space pruning techniques and take into account the parallel processing capabilities of modern processors. Specifically, we present a two-phase divide-and-conquer algorithm. For each trajectory, the algorithm first finds similar trajectories. Then it merges the results to achieve a final result. The algorithm exploits an upper bound on the spatiotemporal similarity and a heuristic scheduling strategy for search space pruning. The algorithm's per-trajectory searches are independent of each other and can be performed in parallel, and the merging has constant cost. An empirical study with real data offers insight in the performance of the algorithm and demonstrates that is capable of outperforming a well-designed baseline algorithm by an order of magnitude.", "In this paper we study similarity join and search on multi- attribute data. Traditional methods on single-attribute data have pruning power only on single attributes and cannot efficiently support multi-attribute data. To address this problem, we propose a prefix tree index which has holis- tic pruning ability on multiple attributes. We propose a cost model to quantify the prefix tree which can guide the prefix tree construction. Based on the prefix tree, we devise a filter-verification framework to support similarity search and join on multi-attribute data. The filter step prunes a large number of dissimilar results and identifies some candi- dates using the prefix tree and the verification step verifies the candidates to generate the final answer. For similar- ity join, we prove that constructing an optimal prefix tree is NP-complete and develop a greedy algorithm to achieve high performance. For similarity search, since one prefix tree cannot support all possible search queries, we extend the cost model to support similarity search and devise a budget-based algorithm to construct multiple high-quality prefix trees. We also devise a hybrid verification algorithm to improve the verification step. Experimental results show our method significantly outperforms baseline approaches.", "" ] }
Incomplete Databases. In the literature on incomplete databases, the most commonly used imputation methods include rule-based @cite_1 , statistics-based @cite_36 , pattern-based @cite_20 , and constraint-based @cite_42 imputation. These existing works may suffer from accuracy problems on sparse data sets: they may fail to find samples with which to impute the missing attributes, leading to imputation failure or even wrong imputation results @cite_18 . To avoid or alleviate this problem, in this paper we use DDs to impute missing attributes based on a historical (complete) data repository @math . We leave regression-based imputation approaches (e.g., @cite_32 ) as future work.
{ "cite_N": [ "@cite_18", "@cite_36", "@cite_42", "@cite_1", "@cite_32", "@cite_20" ], "mid": [ "2108132403", "2164187405", "2734096706", "2161163216", "2951565755", "2605046998" ], "abstract": [ "The importance of difference semantics (e.g., “similar” or “dissimilar”) has been recently recognized for declaring dependencies among various types of data, such as numerical values or text values. We propose a novel form of Differential Dependencies (dds), which specifies constraints on difference, called differential functions, instead of identification functions in traditional dependency notations like functional dependencies. Informally, a differential dependency states that if two tuples have distances on attributes X agreeing with a certain differential function, then their distances on attributes Y should also agree with the corresponding differential function on Y. For example, [date(l 7)]→[price( In this article, we first address several theoretical issues of differential dependencies, including formal definitions of dds and differential keys, subsumption order relation of differential functions, implication of dds, closure of a differential function, a sound and complete inference system, and minimal cover for dds. Then, we investigate a practical problem, that is, how to discover dds and differential keys from a given dataset. Due to the intrinsic hardness, we develop several pruning methods to improve the discovery efficiency in practice. Finally, through an extensive experimental evaluation on real datasets, we demonstrate the discovery performance and the effectiveness of dds in several real applications.", "Real-world databases often contain syntactic and semantic errors, in spite of integrity constraints and other safety measures incorporated into modern DBMSs. We present ERACER, an iterative statistical framework for inferring missing information and correcting such errors automatically. Our approach is based on belief propagation and relational dependency networks, and includes an efficient approximate inference algorithm that is easily implemented in standard DBMSs using SQL and user defined functions. The system performs the inference and cleansing tasks in an integrated manner, using shrinkage techniques to infer correct values accurately even in the presence of dirty data. We evaluate the proposed methods empirically on multiple synthetic and real-world data sets. The results show that our framework achieves accuracy comparable to a baseline statistical method using Bayesian networks with exact inference. However, our framework has wider applicability than the Bayesian network baseline, due to its ability to reason with complex, cyclic relational dependencies.", "Errors are prevalent in time series data, such as GPS trajectories or sensor readings. Existing methods focus more on anomaly detection but not on repairing the detected anomalies. By simply filtering out the dirty data via anomaly detection, applications could still be unreliable over the incomplete time series. Instead of simply discarding anomalies, we propose to (iteratively) repair them in time series data, by creatively bonding the beauty of temporal nature in anomaly detection with the widely considered minimum change principle in data repairing. Our major contributions include: (1) a novel framework of iterative minimum repairing (IMR) over time series data, (2) explicit analysis on convergence of the proposed iterative minimum repairing, and (3) efficient estimation of parameters in each iteration. 
Remarkably, with incremental computation, we reduce the complexity of parameter estimation from O(n) to O(1). Experiments on real datasets demonstrate the superiority of our proposal compared to the state-of-the-art approaches. In particular, we show that (the proposed) repairing indeed improves the time series classification application.", "A variety of integrity constraints have been studied for data cleaning. While these constraints can detect the presence of errors, they fall short of guiding us to correct the errors. Indeed, data repairing based on these constraints may not find certain fixes that are absolutely correct, and worse, may introduce new errors when repairing the data. We propose a method for finding certain fixes, based on master data, a notion of certain regions, and a class of editing rules. A certain region is a set of attributes that are assured correct by the users. Given a certain region and master data, editing rules tell us what attributes to fix and how to update them. We show how the method can be used in data monitoring and enrichment. We develop techniques for reasoning about editing rules, to decide whether they lead to a unique fix and whether they are able to fix all the attributes in a tuple, relative to master data and a certain region. We also provide an algorithm to identify minimal certain regions, such that a certain fix is warranted by editing rules and master data as long as one of the regions is correct. We experimentally verify the effectiveness and scalability of the algorithm.", "Missing numerical values are prevalent, e.g., owing to unreliable sensor reading, collection and transmission among heterogeneous sources. Unlike categorized data imputation over a limited domain, the numerical values suffer from two issues: (1) sparsity problem, the incomplete tuple may not have sufficient complete neighbors sharing the same similar values for imputation, owing to the (almost) infinite domain; (2) heterogeneity problem, different tuples may not fit the same (regression) model. In this study, enlightened by the conditional dependencies that hold conditionally over certain tuples rather than the whole relation, we propose to learn a regression model individually for each complete tuple together with its neighbors. Our IIM, Imputation via Individual Models, thus no longer relies on sharing similar values among the k complete neighbors for imputation, but utilizes their regression results by the aforesaid learned individual (not necessary the same) models. Remarkably, we show that some existing methods are indeed special cases of our IIM, under the extreme settings of the number l of learning neighbors considered in individual learning. In this sense, a proper number l of neighbors is essential to learn the individual models (avoid over-fitting or under-fitting). We propose to adaptively learn individual models over various number l of neighbors for different complete tuples. By devising efficient incremental computation, the time complexity of learning a model reduces from linear to constant. Experiments on real data demonstrate that our IIM with adaptive learning achieves higher imputation accuracy than the existing approaches.", "" ] }
aid: 1908.08118, mid: 2969661104
Neural plasticity is an important functionality of the human brain, in which the number of neurons and synapses can shrink or expand in response to stimuli throughout the span of life. We model this dynamic learning process as an @math -norm regularized binary optimization problem, in which each unit of a neural network (e.g., a weight, neuron, or channel) has a stochastic binary gate attached, whose parameters determine the level of activity of a unit in the network. At the beginning, only a small portion of binary gates (and therefore the corresponding neurons) are activated, while the remaining neurons are in a hibernation mode. As the learning proceeds, some neurons may be activated or deactivated if doing so can be justified by the cost-benefit tradeoff measured by the @math -norm regularized objective. As the training matures, the probability of transition between activation and deactivation diminishes until a final hardening stage. We demonstrate that all of these learning dynamics can be modulated seamlessly by a single parameter @math . Our neural plasticity network (NPN) can prune or expand a network depending on the initial capacity provided by the user; it also unifies dropout (when @math ), traditional training of DNNs (when @math ), and interpolates between these two. To the best of our knowledge, this is the first learning framework that unifies network sparsification and network expansion in an end-to-end training pipeline. Extensive experiments on a synthetic dataset and multiple image classification benchmarks demonstrate the superior performance of NPN. We show that both network sparsification and network expansion can yield compact models of similar architectures and of similar predictive accuracies that are close to, or sometimes even higher than, those of baseline networks. We plan to release our code to facilitate research in this area.
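A minimal PyTorch sketch of the gating idea described in the abstract: each output neuron of a layer gets a stochastic binary gate with a learnable activation probability, and the sum of gate probabilities serves as the expected @math -norm penalty. The straight-through estimator below is our stand-in for a gradient estimator, which the abstract does not specify:

    import torch

    class GatedLinear(torch.nn.Module):
        # A linear layer whose output neurons can be switched on/off
        # by stochastic binary gates with learnable probabilities.
        def __init__(self, d_in, d_out, init_logit=-2.0):
            super().__init__()
            self.linear = torch.nn.Linear(d_in, d_out)
            # Negative initial logits: most gates start "hibernating".
            self.gate_logits = torch.nn.Parameter(
                torch.full((d_out,), init_logit))

        def forward(self, x):
            p = torch.sigmoid(self.gate_logits)
            hard = torch.bernoulli(p.detach())
            # Straight-through: hard 0/1 gates forward, gradient via p.
            gate = hard + p - p.detach()
            return self.linear(x) * gate

        def expected_l0(self):
            # Expected number of active neurons = sum of gate probs.
            return torch.sigmoid(self.gate_logits).sum()

    layer = GatedLinear(8, 16)
    x = torch.randn(4, 8)
    out = layer(x)
    loss = out.pow(2).mean() + 1e-3 * layer.expected_l0()
    loss.backward()  # trains weights and gate probabilities jointly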
Another closely related area is neural architecture search @cite_24 @cite_29 @cite_6 , which searches for an optimal network architecture for a given learning task. It attempts to determine the number of layers, the types of layers, layer configurations, activation functions, etc. Given the extremely large search space, reinforcement learning algorithms are typically utilized for efficient search. Our NPN can be categorized as a restricted form of neural architecture search in the sense that we start with a fixed architecture and aim to determine an optimal capacity (e.g., the number of weights, neurons, or channels) of the network.
{ "cite_N": [ "@cite_24", "@cite_29", "@cite_6" ], "mid": [ "2963374479", "2964081807", "2904817185" ], "abstract": [ "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.", "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset.", "" ] }
1908.08118
2969661104
Neural plasticity is an important functionality of the human brain, in which the number of neurons and synapses can shrink or expand in response to stimuli throughout the span of life. We model this dynamic learning process as an @math -norm regularized binary optimization problem, in which each unit of a neural network (e.g., weight, neuron, or channel) is attached with a stochastic binary gate, whose parameters determine the level of activity of a unit in the network. At the beginning, only a small portion of binary gates (and therefore the corresponding neurons) are activated, while the remaining neurons are in a hibernation mode. As the learning proceeds, some neurons might be activated or deactivated if doing so can be justified by the cost-benefit tradeoff measured by the @math -norm regularized objective. As the training matures, the probability of transition between activation and deactivation diminishes until a final hardening stage. We demonstrate that all of these learning dynamics can be seamlessly modulated by a single parameter @math . Our neural plasticity network (NPN) can prune or expand a network depending on the initial capacity of the network provided by the user; it also unifies dropout (when @math ) and traditional training of DNNs (when @math ), and interpolates between these two. To the best of our knowledge, this is the first learning framework that unifies network sparsification and network expansion in an end-to-end training pipeline. Extensive experiments on a synthetic dataset and multiple image classification benchmarks demonstrate the superior performance of NPN. We show that both network sparsification and network expansion can yield compact models of similar architectures and of similar predictive accuracies that are close to or sometimes even higher than those of baseline networks. We plan to release our code to facilitate the research in this area.
Compared to network sparsification, network expansion is a relatively less explored area. There are few existing works that can dynamically increase the capacity of a network during training. For example, DNC @cite_16 sequentially adds neurons one at a time to the hidden layers of the network until the desired approximation accuracy is achieved. @cite_17 proposes to train a denoising autoencoder (DAE) by adding in new neurons and later merging them with other neurons to prevent redundancy. For convolutional networks, @cite_18 proposes to widen or deepen a pretrained network for better knowledge transfer. Recently, a boosting-style method named AdaNet @cite_25 has been used to adaptively grow the structure while learning the weights. However, all these approaches either only add neurons or add/remove neurons manually. In contrast, our NPN can add or remove (deactivate) neurons during training as needed without human intervention, and is an end-to-end unified framework for network sparsification and expansion.
{ "cite_N": [ "@cite_18", "@cite_16", "@cite_25", "@cite_17" ], "mid": [ "2756154119", "2042481239", "2964091842", "2165310684" ], "abstract": [ "CNNs have made an undeniable impact on computer vision through the ability to learn high-capacity models with large annotated training sets. One of their remarkable properties is the ability to transfer knowledge from a large source dataset to a (typically smaller) target dataset. This is usually accomplished through fine-tuning a fixed-size network on new target data. Indeed, virtually every contemporary visual recognition system makes use of fine-tuning to transfer knowledge from ImageNet. In this work, we analyze what components and parameters change during fine-tuning, and discover that increasing model capacity allows for more natural model adaptation through fine-tuning. By making an analogy to developmental learning, we demonstrate that growing a CNN with additional units, either by widening existing layers or deepening the overall network, significantly outperforms classic fine-tuning approaches. But in order to properly grow a network, we show that newly-added units must be appropriately normalized to allow for a pace of learning that is consistent with existing units. We empirically validate our approach on several benchmark datasets, producing state-of-the-art results.", "Abstract This paper introduces a new method called Dynamic Node Creation (DNC) which automatically grows BP networks until the target problem is solved. DNC sequentially adds nodes one at a time to the hidden layer(s) of the network until the desired approximation accuracy is achieved. Simulation results for parity, symmetry, binary addition, and the encoder problem are presented. The procedure was capable of finding known minimal topologies in many cases, and was always within three nodes of the minimum. Computational expense for finding the solutions was comparable to training normal BP networks with the same final topologies. Starting out with fewer nodes than needed to solve the problem actually seems to help find a solution. The method yielded a solution for every problem tried.", "", "While determining model complexity is an important problem in machine learning, many feature learning algorithms rely on cross-validation to choose an optimal number of features, which is usually challenging for online learning from a massive stream of data. In this paper, we propose an incremental feature learning algorithm to determine the optimal model complexity for large-scale, online datasets based on the denoising autoencoder. This algorithm is composed of two processes: adding features and merging features. Specifically, it adds new features to minimize the objective function’s residual and merges similar features to obtain a compact feature representation and prevent over-fitting. Our experiments show that the proposed model quickly converges to the optimal number of features in a large-scale online setting. In classification tasks, our model outperforms the (non-incremental) denoising autoencoder, and deep networks constructed from our algorithm perform favorably compared to deep belief networks and stacked denoising autoencoders. Further, the algorithm is eective in recognizing new patterns when the data distribution changes over time in the massive online data stream." ] }
1908.08338
2969823110
Dynamic network slicing has emerged as a promising and fundamental framework for meeting 5G's diverse use cases. As machine learning (ML) is expected to play a pivotal role in the efficient control and management of these networks, in this work we examine the ML-based Quality-of-Transmission (QoT) estimation problem under the dynamic network slicing context, where each slice has to meet a different QoT requirement. We examine ML-based QoT frameworks with the aim of finding QoT models that are fine-tuned according to the diverse QoT requirements. Centralized and distributed frameworks are examined and compared according to their accuracy and training time. We show that the distributed QoT models outperform the centralized QoT model, especially as the number of diverse QoT requirements increases.
Several ML applications have been already developed and explored for optical network planning purposes @cite_9 @cite_8 . In general, the state-of-the-art assumes that the optical network is centrally controlled by an SDN-based optical network controller @cite_11 @cite_20 , equipped with storage, processing, and monitoring capabilities (Fig. ). The main responsibility of the SDN-based controller is to efficiently manage the network resources in such a way that the diverse QoS requirements of the different use cases are met, as closely as possible, by the virtual network topology (VNT). The VNT can be viewed as a single slice (virtual network) that has to best fit all the diverse use cases.
{ "cite_N": [ "@cite_9", "@cite_11", "@cite_20", "@cite_8" ], "mid": [ "2781788077", "2885792477", "2809876146", "2964101383" ], "abstract": [ "Abstract Artificial intelligence (AI) is an extensive scientific discipline which enables computer systems to solve problems by emulating complex biological processes such as learning, reasoning and self-correction. This paper presents a comprehensive review of the application of AI techniques for improving performance of optical communication systems and networks. The use of AI-based techniques is first studied in applications related to optical transmission, ranging from the characterization and operation of network components to performance monitoring, mitigation of nonlinearities, and quality of transmission estimation. Then, applications related to optical network control and management are also reviewed, including topics like optical network planning and operation in both transport and access networks. Finally, the paper also presents a summary of opportunities and challenges in optical networking where AI is expected to play a key role in the near future.", "Optimization of end-to-end service provisioning across multi-domain EONs is critically important due to the restricted intra-domain visibility and the agile spectrum allocation mechanism of EONs. Previously demonstrated schemes apply fixed service provisioning policies (e.g., heuristic routing and spectrum assignment and threshold- based anomaly detection) and fail to fully address the inherent principle and dynamic conditions of multi-domain EONs. This article brings in data analytics and presents a knowledge-based autonomous service provisioning framework for multi-domain EONs. The proposed framework employs a new broker plane working with domain managers while supporting the autonomy of each management domain. This can be seen as a hierarchical multi-domain network control and management architecture in support of enhanced resource efficiency and network scalability where each hierarchy employs an observe-analyze-act cycle. Specifically, the broker and domain managers can observe the network state through comprehensive network monitoring (monitoring of optical paths' performance, resource utilization, and traffic dynamics), analyze the network behavior and form the knowledge repository of network rules using machine learning techniques, and act intelligently according to the obtained knowledge. We assess the proposed framework by evaluating two key knowledge-based service provisioning use cases: proactive inter-domain traffic-prediction- based autonomous traffic engineering and quality-of-transmission-aware inter-domain lightpath provisioning. Numerical results demonstrate remarkable benefits of the proposed framework over conventional solutions.", "5G, IoT and other emerging network applications drive the future optical network to be more flexible and dynamic. Fully awareness of current network status is critical for better network programming in short timescale. In this paper, the centralized network database with network monitoring data and network configuration information enables network analytics application to support the future dynamic and programmable optical network.", "Today’s telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users’ behavioral data, etc. 
Advanced mathematical tools are required to extract meaningful information from these data and take decisions pertaining to the proper functioning of the networks from the network-generated data. Among these mathematical tools, machine learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. Such complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) that are enabled by the usage of coherent transmission reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude this paper proposing new possible research directions." ] }
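To illustrate the centralized-versus-distributed comparison described in the abstract above, the sketch below (entirely synthetic data; the features, thresholds, and decision-tree learner are placeholder assumptions, not the paper's setup) trains one centralized classifier that receives the slice's QoT requirement as an input feature, and one per-requirement "distributed" classifier trained only on its own slice's samples.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(size=(n, 3))                 # toy features, e.g. length/spans/power
q_factor = 1.5 - X @ np.array([0.8, 0.4, -0.2]) + rng.normal(0, 0.05, n)
thresholds = rng.choice([0.6, 0.8, 1.0], size=n)   # per-slice QoT requirements
y = (q_factor >= thresholds).astype(int)           # 1 = requirement met

# Centralized: a single model sees the requirement as an extra feature.
central = DecisionTreeClassifier(max_depth=6).fit(
    np.column_stack([X, thresholds]), y)

# Distributed: one model per requirement, trained only on matching samples.
distributed = {
    t: DecisionTreeClassifier(max_depth=6).fit(X[thresholds == t],
                                               y[thresholds == t])
    for t in np.unique(thresholds)
}

x_new = np.array([[0.2, 0.5, 0.7]])
req = 0.8
print("centralized:", central.predict(np.column_stack([x_new, [[req]]])))
print("distributed:", distributed[req].predict(x_new))
```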
1908.08289
2969676305
Existing deep learning approaches to 3d human pose estimation for videos are either based on Recurrent or Convolutional Neural Networks (RNNs or CNNs). However, RNN-based frameworks can only tackle sequences with limited frames because sequential models are sensitive to bad frames and tend to drift over long sequences. Although existing CNN-based temporal frameworks attempt to address the sensitivity and drift problems by concurrently processing all input frames in the sequence, the existing state-of-the-art CNN-based framework is limited to 3d pose estimation of a single frame from a sequential input. In this paper, we propose a deep learning-based framework that utilizes matrix factorization for sequential 3d human pose estimation. Our approach processes all input frames concurrently to avoid the sensitivity and drift problems, and yet outputs the 3d pose estimates for every frame in the input sequence. More specifically, the 3d poses in all frames are represented as a motion matrix factorized into a trajectory bases matrix and a trajectory coefficient matrix. The trajectory bases matrix is precomputed from matrix factorization approaches such as Singular Value Decomposition (SVD) or Discrete Cosine Transform (DCT), and the problem of sequential 3d pose estimation is reduced to training a deep network to regress the trajectory coefficient matrix. We demonstrate the effectiveness of our framework on long sequences by achieving state-of-the-art performance on multiple benchmark datasets. Our source code is available at: this https URL.
The inherent depth ambiguity in 3d pose estimation from monocular images limits the estimation accuracy. Extensive research has been done to exploit extra information contained in temporal sequences. Zhou et al. @cite_35 formulate an optimization problem to search for the 3d configuration with the highest probability given 2d confidence maps and solve the problem using Expectation-Maximization. Tekin et al. @cite_5 use a CNN to align bounding boxes of consecutive frames and then generate a spatial-temporal volume, based on which they extract 3d HOG features and regress the 3d pose for the central frame. Mehta et al. @cite_4 propose a real-time system for 3d pose estimation and apply temporal filtering to yield temporally consistent 3d poses.
{ "cite_N": [ "@cite_35", "@cite_5", "@cite_4" ], "mid": [ "2963688992", "2963592930", "2611932403" ], "abstract": [ "This paper addresses the challenge of 3D full-body human pose estimation from a monocular image sequence. Here, two cases are considered: (i) the image locations of the human joints are provided and (ii) the image locations of joints are unknown. In the former case, a novel approach is introduced that integrates a sparsity-driven 3D geometric prior and temporal smoothness. In the latter case, the former case is extended by treating the image locations of the joints as latent variables to take into account considerable uncertainties in 2D joint locations. A deep fully convolutional network is trained to predict the uncertainty maps of the 2D joint locations. The 3D pose estimates are realized via an Expectation-Maximization algorithm over the entire sequence, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Empirical evaluation on the Human3.6M dataset shows that the proposed approaches achieve greater 3D pose estimation accuracy over state-of-the-art baselines. Further, the proposed approach outperforms a publicly available 2D pose estimation baseline on the challenging PennAction dataset.", "We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Previous approaches typically compute candidate poses in individual frames and then link them in a post-processing step to resolve ambiguities. By contrast, we directly regress from a spatio-temporal volume of bounding boxes to a 3D pose in the central frame. We further show that, for this approach to achieve its full potential, it is essential to compensate for the motion in consecutive frames so that the subject remains centered. This then allows us to effectively overcome ambiguities and improve upon the state-of-the-art by a large margin on the Human3.6m, HumanEva, and KTH Multiview Football 3D human pose estimation benchmarks.", "We present the first real-time method to capture the full global 3D skelet al pose of a human in a stable, temporally consistent manner using a single RGB camera. Our method combines a new convolutional neural network (CNN) based pose regressor with kinematic skeleton fitting. Our novel fully-convolutional pose formulation regresses 2D and 3D joint positions jointly in real time and does not require tightly cropped input frames. A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton. This makes our approach the first monocular RGB method usable in real-time applications such as 3D character control---thus far, the only monocular methods for such applications employed specialized RGB-D cameras. Our method's accuracy is quantitatively on par with the best offline 3D monocular RGB pose estimation methods. Our results are qualitatively comparable to, and sometimes better than, results from monocular RGB-D approaches, such as the Kinect. However, we show that our approach is more broadly applicable than RGB-D solutions, i.e., it works for outdoor scenes, community videos, and low quality commodity RGB cameras." ] }
1908.08289
2969676305
Existing deep learning approaches to 3d human pose estimation for videos are either based on Recurrent or Convolutional Neural Networks (RNNs or CNNs). However, RNN-based frameworks can only tackle sequences with limited frames because sequential models are sensitive to bad frames and tend to drift over long sequences. Although existing CNN-based temporal frameworks attempt to address the sensitivity and drift problems by concurrently processing all input frames in the sequence, the existing state-of-the-art CNN-based framework is limited to 3d pose estimation of a single frame from a sequential input. In this paper, we propose a deep learning-based framework that utilizes matrix factorization for sequential 3d human pose estimation. Our approach processes all input frames concurrently to avoid the sensitivity and drift problems, and yet outputs the 3d pose estimates for every frame in the input sequence. More specifically, the 3d poses in all frames are represented as a motion matrix factorized into a trajectory bases matrix and a trajectory coefficient matrix. The trajectory bases matrix is precomputed from matrix factorization approaches such as Singular Value Decomposition (SVD) or Discrete Cosine Transform (DCT), and the problem of sequential 3d pose estimation is reduced to training a deep network to regress the trajectory coefficient matrix. We demonstrate the effectiveness of our framework on long sequences by achieving state-of-the-art performance on multiple benchmark datasets. Our source code is available at: this https URL.
Recently, RNN-based frameworks have been used to deal with sequential input data. Lin et al. @cite_1 use a multi-stage framework based on Long Short-Term Memory (LSTM) units to estimate the 3d pose from the extracted 2d features and the 3d pose estimated in the previous stage. Coskun et al. @cite_32 propose to learn a human motion model using a Kalman filter and implement it with LSTMs. Hossain et al. @cite_36 design a sequence-to-sequence network with LSTM units to first encode a sequence of motions in the form of 2d joint locations and then decode the 3d poses of the sequence. However, RNNs are sensitive to erroneous inputs and tend to drift over long sequences. To overcome the shortcomings of RNNs, a CNN-based framework was proposed by Pavllo et al. @cite_26 to aggregate temporal information using dilated convolutions. Despite being successful at regressing a single frame from a sequence of inputs, it cannot concurrently output the 3d pose estimates for all frames in the sequence.
{ "cite_N": [ "@cite_36", "@cite_26", "@cite_1", "@cite_32" ], "mid": [ "2769237672", "2903549000", "2963383668", "2963203809" ], "abstract": [ "In this work, we address the problem of 3D human pose estimation from a sequence of 2D human poses. Although the recent success of deep networks has led many state-of-the-art methods for 3D pose estimation to train deep networks end-to-end to predict from images directly, the top-performing approaches have shown the effectiveness of dividing the task of 3D pose estimation into two steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from images and then mapping them into 3D space. They also showed that a low-dimensional representation like 2D locations of a set of joints can be discriminative enough to estimate 3D pose with high accuracy. However, estimation of 3D pose for individual frames leads to temporally incoherent estimates due to independent error in each frame causing jitter. Therefore, in this work we utilize the temporal information across a sequence of 2D joint locations to estimate a sequence of 3D poses. We designed a sequence-to-sequence network composed of layer-normalized LSTM units with shortcut connections connecting the input to the output on the decoder side and imposed temporal smoothness constraint during training. We found that the knowledge of temporal consistency improves the best reported result on Human3.6M dataset by approximately (12.2 ) and helps our network to recover temporally consistent 3D poses over a sequence of images even when the 2D pose detector fails.", "In this work, we demonstrate that 3D poses in video can be effectively estimated with a fully convolutional model based on dilated temporal convolutions over 2D keypoints. We also introduce back-projection, a simple and effective semi-supervised training method that leverages unlabeled video data. We start with predicted 2D keypoints for unlabeled video, then estimate 3D poses and finally back-project to the input 2D keypoints. In the supervised setting, our fully-convolutional model outperforms the previous best result from the literature by 6 mm mean per-joint position error on Human3.6M, corresponding to an error reduction of 11 , and the model also shows significant improvements on HumanEva-I. Moreover, experiments with back-projection show that it comfortably outperforms previous state-of-the-art results in semi-supervised settings where labeled data is scarce. Code and models are available at this https URL", "3D Human articulated pose recovery from monocular image sequences is very challenging due to the diverse appearances, viewpoints, occlusions, and also the human 3D pose is inherently ambiguous from the monocular imagery. It is thus critical to exploit rich spatial and temporal long-range dependencies among body joints for accurate 3D pose sequence prediction. Existing approaches usually manually design some elaborate prior terms and human body kinematic constraints for capturing structures, which are often insufficient to exploit all intrinsic structures and not scalable for all scenarios. In contrast, this paper presents a Recurrent 3D Pose Sequence Machine(RPSM) to automatically learn the image-dependent structural constraint and sequence-dependent temporal context by using a multi-stage sequential refinement. 
At each stage, our RPSM is composed of three modules to predict the 3D pose sequences based on the previously learned 2D pose representations and 3D poses: (i) a 2D pose module extracting the image-dependent pose representations, (ii) a 3D pose recurrent module regressing 3D poses and (iii) a feature adaption module serving as a bridge between module (i) and (ii) to enable the representation transformation from 2D to 3D domain. These three modules are then assembled into a sequential prediction framework to refine the predicted poses with multiple recurrent stages. Extensive evaluations on the Human3.6M dataset and HumanEva-I dataset show that our RPSM outperforms all state-of-the-art approaches for 3D pose estimation.", "One-shot pose estimation for tasks such as body joint localization, camera pose estimation, and object tracking are generally noisy, and temporal filters have been extensively used for regularization. One of the most widely-used methods is the Kalman filter, which is both extremely simple and general. However, Kalman filters require a motion model and measurement model to be specified a priori, which burdens the modeler and simultaneously demands that we use explicit models that are often only crude approximations of reality. For example, in the pose-estimation tasks mentioned above, it is common to use motion models that assume constant velocity or constant acceleration, and we believe that these simplified representations are severely inhibitive. In this work, we propose to instead learn rich, dynamic representations of the motion and noise models. In particular, we propose learning these models from data using long shortterm memory, which allows representations that depend on all previous observations and all previous states. We evaluate our method using three of the most popular pose estimation tasks in computer vision, and in all cases we obtain state-of-the-art performance." ] }
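For concreteness, below is a minimal sketch of the dilated temporal convolution idea mentioned above: a stack of 1D convolutions over time whose dilation doubles per layer, so the receptive field grows exponentially with depth. The joint count, channel widths, and dilation schedule are illustrative assumptions, not the cited architecture; note also that the cited model regresses only the central frame, whereas this padded variant keeps a per-frame output purely for illustration.

```python
import torch
import torch.nn as nn

J = 17                                       # number of joints (assumed)
layers, in_ch = [], 2 * J                    # (x, y) per joint laid out as channels
for i, d in enumerate([1, 2, 4, 8]):         # exponentially growing dilation
    layers += [nn.Conv1d(in_ch if i == 0 else 256, 256,
                         kernel_size=3, dilation=d, padding=d),
               nn.ReLU()]
layers.append(nn.Conv1d(256, 3 * J, kernel_size=1))
net = nn.Sequential(*layers)

x = torch.randn(4, 2 * J, 81)                # batch of 81-frame 2d-pose sequences
y = net(x)                                   # (4, 3*J, 81): per-frame 3d estimates
```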
1908.08289
2969676305
Existing deep learning approaches to 3d human pose estimation for videos are either based on Recurrent or Convolutional Neural Networks (RNNs or CNNs). However, RNN-based frameworks can only tackle sequences with limited frames because sequential models are sensitive to bad frames and tend to drift over long sequences. Although existing CNN-based temporal frameworks attempt to address the sensitivity and drift problems by concurrently processing all input frames in the sequence, the existing state-of-the-art CNN-based framework is limited to 3d pose estimation of a single frame from a sequential input. In this paper, we propose a deep learning-based framework that utilizes matrix factorization for sequential 3d human pose estimation. Our approach processes all input frames concurrently to avoid the sensitivity and drift problems, and yet outputs the 3d pose estimates for every frame in the input sequence. More specifically, the 3d poses in all frames are represented as a motion matrix factorized into a trajectory bases matrix and a trajectory coefficient matrix. The trajectory bases matrix is precomputed from matrix factorization approaches such as Singular Value Decomposition (SVD) or Discrete Cosine Transform (DCT), and the problem of sequential 3d pose estimation is reduced to training a deep network to regress the trajectory coefficient matrix. We demonstrate the effectiveness of our framework on long sequences by achieving state-of-the-art performance on multiple benchmark datasets. Our source code is available at: this https URL.
Inspired by matrix factorization methods commonly used in Structure-from-Motion (SfM) @cite_23 and non-rigid SfM @cite_33 , several works @cite_3 @cite_45 @cite_21 on 3d human pose estimation factorize the sequence of 3d human poses into a linear combination of shape bases. Akhter et al. @cite_42 suggest a dual factorization in the trajectory space. We extend the idea of matrix factorization to learning a deep network that estimates the coefficients of the trajectory bases from a sequence of 2d poses as input. The 3d poses of all frames are recovered concurrently as linear combinations of the trajectory bases weighted by the estimated coefficients.
{ "cite_N": [ "@cite_33", "@cite_21", "@cite_42", "@cite_3", "@cite_45", "@cite_23" ], "mid": [ "2124600577", "", "2138816964", "2155196764", "", "2138835141" ], "abstract": [ "The paper addresses the problem of recovering 3D non-rigid shape models from image sequences. For example, given a video recording of a talking person, we would like to estimate a 3D model of the lips and the full face and its internal modes of variation. Many solutions that recover 3D shape from 2D image sequences have been proposed; these so-called structure-from-motion techniques usually assume that the 3D object is rigid. For example, C. Tomasi and T. Kanades' (1992) factorization technique is based on a rigid shape matrix, which produces a tracking matrix of rank 3 under orthographic projection. We propose a novel technique based on a non-rigid model, where the 3D shape in each frame is a linear combination of a set of basis shapes. Under this model, the tracking matrix is of higher rank, and can be factored in a three-step process to yield pose, configuration and shape. To the best of our knowledge, this is the first model free approach that can recover from single-view video sequences nonrigid shape models. We demonstrate this new algorithm on several video sequences. We were able to recover 3D non-rigid human face and animal models with high accuracy.", "", "Existing approaches to nonrigid structure from motion assume that the instantaneous 3D shape of a deforming object is a linear combination of basis shapes, which have to be estimated anew for each video sequence. In contrast, we propose that the evolving 3D structure be described by a linear combination of basis trajectories. The principal advantage of this approach is that we do not need to estimate any basis vectors during computation. We show that generic bases over trajectories, such as the Discrete Cosine Transform (DCT) basis, can be used to compactly describe most real motions. This results in a significant reduction in unknowns, and corresponding stability in estimation. We report empirical performance, quantitatively using motion capture data, and qualitatively on several video sequences exhibiting nonrigid motions including piece-wise rigid motion, partially nonrigid motion (such as a facial expression), and highly nonrigid motion (such as a person dancing).", "Reconstructing an arbitrary configuration of 3D points from their projection in an image is an ill-posed problem. When the points hold semantic meaning, such as anatomical landmarks on a body, human observers can often infer a plausible 3D configuration, drawing on extensive visual memory. We present an activity-independent method to recover the 3D configuration of a human figure from 2D locations of anatomical landmarks in a single image, leveraging a large motion capture corpus as a proxy for visual memory. Our method solves for anthropometrically regular body pose and explicitly estimates the camera via a matching pursuit algorithm operating on the image projections. Anthropometric regularity (i.e., that limbs obey known proportions) is a highly informative prior, but directly applying such constraints is intractable. Instead, we enforce a necessary condition on the sum of squared limb-lengths that can be solved for in closed form to discourage implausible configurations in 3D. 
We evaluate performance on a wide variety of human poses captured from different viewpoints and show generalization to novel 3D configurations and robustness to missing data.", "", "Inferring scene geometry and camera motion from a stream of images is possible in principle, but is an ill-conditioned problem when the objects are distant with respect to their size. We have developed a factorization method that can overcome this difficulty by recovering shape and motion under orthography without computing depth as an intermediate step. An image stream can be represented by the 2FxP measurement matrix of the image coordinates of P points tracked through F frames. We show that under orthographic projection this matrix is of rank 3. Based on this observation, the factorization method uses the singular-value decomposition technique to factor the measurement matrix into two matrices which represent object shape and camera rotation respectively. Two of the three translation components are computed in a preprocessing stage. The method can also handle and obtain a full solution from a partially filled-in measurement matrix that may result from occlusions or tracking failures. The method gives accurate results, and does not introduce smoothing in either shape or motion. We demonstrate this with a series of experiments on laboratory and outdoor image streams, with and without occlusions." ] }
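The trajectory-space factorization described above can be made concrete in a few lines of numpy (our illustration; F, J, K and the toy motion are arbitrary): the motion matrix M of F frames by 3J joint coordinates is approximated as B @ C, where B holds the first K precomputed DCT trajectory bases and C is the coefficient matrix that, in the paper's setting, a deep network would regress from 2d inputs. Here C is simply fit by least squares.

```python
import numpy as np

F, J, K = 50, 17, 8                           # frames, joints, number of bases
t = np.arange(F)
# Columns are the first K DCT-II trajectory bases over F frames.
B = np.stack([np.cos(np.pi * (t + 0.5) * k / F) for k in range(K)], axis=1)

M = np.random.randn(F, 3 * J).cumsum(axis=0)  # toy smooth "motion matrix"
C, *_ = np.linalg.lstsq(B, M, rcond=None)     # (K, 3J) trajectory coefficients
M_hat = B @ C                                 # all F poses recovered at once
print("relative error:", np.linalg.norm(M - M_hat) / np.linalg.norm(M))
```

Because the low-frequency DCT bases span smooth trajectories, a small K already reconstructs smooth motions well, which is what makes regressing C instead of the full motion matrix attractive.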
1908.08015
2969678456
Among various optimization algorithms, ADAM can achieve outstanding performance and has been widely used in model learning. ADAM has the advantage of fast convergence, combining momentum with an adaptive learning rate. Since the objective functions of deep neural network learning problems are nonconvex, ADAM can also easily get stuck in local optima. To resolve this problem, the genetic evolutionary ADAM (GADAM) algorithm, which combines ADAM with the genetic algorithm, was introduced in recent years. To further maximize the advantages of the GADAM model, we propose to implement the boosting strategy for unit model training in GADAM. In this paper, we introduce a novel optimization algorithm, namely Boosting based GADAM (BGADAM). We show that adding the boosting strategy to the GADAM model can help unit models jump out of local optima and converge to better solutions.
Deep learning models have achieved great success in recent years; representative examples include the convolutional neural network (CNN) @cite_12 @cite_1 . CNNs have mainly been used to deal with image data and show outstanding performance on various computer vision tasks. Besides CNNs, there also exist many other types of deep learning models, e.g., the recurrent neural network @cite_10 @cite_9 , the deep autoencoder @cite_21 , the deep Boltzmann machine @cite_13 , and the GAN @cite_5 .
{ "cite_N": [ "@cite_13", "@cite_9", "@cite_21", "@cite_1", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "189596042", "2091432990", "2025768430", "", "2099471712", "179875071", "2163605009" ], "abstract": [ "We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent expectations are estimated using a variational approximation that tends to focus on a single mode, and dataindependent expectations are approximated using persistent Markov chains. The use of two quite different techniques for estimating the two types of expectation that enter into the gradient of the log-likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer “pre-training” phase that allows variational inference to be initialized with a single bottomup pass. We present results on the MNIST and NORB datasets showing that deep Boltzmann machines learn good generative models and perform well on handwritten digit and visual object recognition tasks.", "In this paper, we provide an overview of the invited and contributed papers presented at the special session at ICASSP-2013, entitled “New Types of Deep Neural Network Learning for Speech Recognition and Related Applications,” as organized by the authors. We also describe the historical context in which acoustic models based on deep neural networks have been developed. The technical overview of the papers presented in our special session is organized into five ways of improving deep learning methods: (1) better optimization; (2) better types of neural activation function and better network architectures; (3) better ways to determine the myriad hyper-parameters of deep neural networks; (4) more appropriate ways to preprocess speech for deep neural networks; and (5) ways of leveraging multiple languages or dialects that are more easily achieved with deep neural networks than with Gaussian mixture models.", "Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.", "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. 
In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50 reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18 reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5 on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry." ] }
1908.08015
2969678456
Among various optimization algorithms, ADAM can achieve outstanding performance and has been widely used in model learning. ADAM has the advantage of fast convergence, combining momentum with an adaptive learning rate. Since the objective functions of deep neural network learning problems are nonconvex, ADAM can also easily get stuck in local optima. To resolve this problem, the genetic evolutionary ADAM (GADAM) algorithm, which combines ADAM with the genetic algorithm, was introduced in recent years. To further maximize the advantages of the GADAM model, we propose to implement the boosting strategy for unit model training in GADAM. In this paper, we introduce a novel optimization algorithm, namely Boosting based GADAM (BGADAM). We show that adding the boosting strategy to the GADAM model can help unit models jump out of local optima and converge to better solutions.
Boosting was proposed by @cite_8 . Given a training dataset containing @math training examples, a batch of @math training examples is generated by random sampling with replacement. We can generate @math training sets from the same original training data by applying this sampling @math times @cite_20 ; these sets are then used to train @math different base models in boosting.
{ "cite_N": [ "@cite_20", "@cite_8" ], "mid": [ "605727707", "2112076978" ], "abstract": [ "An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity and bias-variance decompositions, and recent progress in information theoretic diversity. Moving on to more advanced topics, the author explains how to achieve better performance through ensemble pruning and how to generate better clustering results by combining multiple clusterings. In addition, he describes developments of ensemble methods in semi-supervised learning, active learning, cost-sensitive learning, class-imbalance learning, and comprehensibility enhancement.", "In an earlier paper, we introduced a new \"boosting\" algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that con- sistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a \"pseudo-loss\" which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's \"bagging\" method when used to aggregate various classifiers (including decision trees and single attribute- value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem." ] }
1908.07836
2969860478
Recognizing the layout of unstructured digital documents is an important step when parsing the documents into a structured, machine-readable format for downstream applications. Deep neural networks that were developed for computer vision have been proven to be an effective method to analyze the layout of document images. However, document layout datasets that are currently publicly available are several orders of magnitude smaller than established computer vision datasets. Models have to be trained by transfer learning from a base model that is pre-trained on a traditional computer vision dataset. In this paper, we develop the PubLayNet dataset for document layout analysis by automatically matching the XML representations and the content of over 1 million PDF articles that are publicly available on PubMed Central. The size of the dataset is comparable to established computer vision datasets, containing over 360 thousand document images, where typical document layout elements are annotated. The experiments demonstrate that deep neural networks trained on PubLayNet accurately recognize the layout of scientific articles. The pre-trained models are also a more effective base model for transfer learning on a different document domain. We release the dataset (https://github.com/ibm-aur-nlp/PubLayNet) to support development and evaluation of more advanced models for document layout analysis.
Existing datasets for document layout analysis rely on manual annotation. Some of these datasets are used in document processing challenges. Examples of these efforts are available in several ICDAR challenges @cite_19 , which also cover complex layouts @cite_18 @cite_7 . The US NIH National Library of Medicine has provided the Medical Article Records Groundtruth (MARG) https://ceb.nlm.nih.gov/inactive-communications-engineering-branch-projects/medical-article-records-groundtruth-marg , which is obtained from scanned article pages.
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_7" ], "mid": [ "2151765755", "2106311500", "2915774443" ], "abstract": [ "There is a significant need for a realistic dataset on which to evaluate layout analysis methods and examine their performance in detail. This paper presents a new dataset (and the methodology used to create it) based on a wide range of contemporary documents. Strong emphasis is placed on comprehensive and detailed representation of both complex and simple layouts, and on colour originals. In-depth information is recorded both at the page and region level. Ground truth is efficiently created using a new semi-automated tool and stored in a new comprehensive XML representation, the PAGE format. The dataset can be browsed and searched via a web-based front end to the underlying database and suitable subsets (relevant to specific evaluation goals) can be selected and downloaded.", "This paper presents a research dataset of historical newspapers comprising over 500 page images, uniquely representative of European cultural heritage from the digitization projects of 12 national and major European libraries, created within the scope of the large-scale digitisation Europeana Newspapers Project (ENP). Every image is accompanied by comprehensive ground truth (Unicode encoded full-text, layout information with precise region outlines, type labels, and reading order) in PAGE format and searchable metadata about document characteristics and artefacts. The first part of the paper describes the nature of the dataset, how it was built, and the challenges encountered. In the second part, a baseline for two state-of-the-art OCR systems (ABBYY FineReader Engine 11 and Tesseract 3.03) is given with regard to both text recognition and segmentation layout analysis performance.", "This paper presents an objective comparative evaluation of page segmentation and region classification methods for documents with complex layouts. It describes the competition (modus operandi, dataset and evaluation methodology) held in the context of ICDAR2017, presenting the results of the evaluation of seven methods – five submitted, two state-of-the-art systems (commercial and open-source). Three scenarios are reported in this paper, one evaluating the ability of methods to accurately segment regions and two evaluating both segmentation and region classification (one focusing only on text regions). For the first time, nested region content (table cells, chart labels etc.) are evaluated in addition to the top-level page content. Text recognition was a bonus challenge and was not taken up by all participants. The results indicate that an innovative approach has a clear advantage but there is still a considerable need to develop robust methods that deal with layout challenges, especially with the non-textual content." ] }
1908.07836
2969860478
Recognizing the layout of unstructured digital documents is an important step when parsing the documents into a structured, machine-readable format for downstream applications. Deep neural networks that were developed for computer vision have been proven to be an effective method to analyze the layout of document images. However, document layout datasets that are currently publicly available are several orders of magnitude smaller than established computer vision datasets. Models have to be trained by transfer learning from a base model that is pre-trained on a traditional computer vision dataset. In this paper, we develop the PubLayNet dataset for document layout analysis by automatically matching the XML representations and the content of over 1 million PDF articles that are publicly available on PubMed Central. The size of the dataset is comparable to established computer vision datasets, containing over 360 thousand document images, where typical document layout elements are annotated. The experiments demonstrate that deep neural networks trained on PubLayNet accurately recognize the layout of scientific articles. The pre-trained models are also a more effective base model for transfer learning on a different document domain. We release the dataset (https://github.com/ibm-aur-nlp/PubLayNet) to support development and evaluation of more advanced models for document layout analysis.
In addition to document layout, further understanding of the document content has been studied in the evaluation of table detection methods, e.g., @cite_13 @cite_21 . Examples include table detection from document images using heuristics @cite_11 , vertical arrangement of text blocks @cite_3 , and deep learning methods @cite_9 @cite_2 @cite_0 @cite_12 .
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_3", "@cite_0", "@cite_2", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2786480153", "2150673968", "2321821989", "2787523828", "2444353601", "2051265407", "2797320628", "1506666816" ], "abstract": [ "Page segmentation and table detection play an important role in understanding the structure of documents. We present a page segmentation algorithm that incorporates state-of-the-art deep learning methods for segmenting three types of document elements: text blocks, tables, and figures. We propose a multi-scale, multi-task fully convolutional neural network (FCN) for the tasks of semantic page segmentation and element contour detection. The semantic segmentation network accurately predicts the probability at each pixel of the three element classes. The contour detection network accurately predicts instance level \"edges\" around each element occurrence. We propose a conditional random field (CRF) that uses features output from the semantic segmentation and contour networks to improve upon the semantic segmentation network output. Given the semantic segmentation output, we also extract individual table instances from the page using some heuristic rules and a verification network to remove false positives. We show that although we only consider a page image as input, we produce comparable results with other methods that relies on PDF file information and heuristics and hand crafted features tailored to specific types of documents. Our approach learns the representative features for page segmentation from real and synthetic training data. , and produces good results on real documents. The learning-based property makes it a more general method than existing methods in terms of document types and element appearances. For example, our method reliably detects sparsely lined tables which are hard for rule-based or heuristic methods.", "Table detection is an important task in the field of document analysis. It has been extensively studied since a couple of decades. Various kinds of document mediums are involved, from scanned images to web pages, from plain texts to PDF files. Numerous algorithms published bring up a challenging issue: how to evaluate algorithms in different context. Currently, most work on table detection conducts experiments on their in-house dataset. Even the few sources of online datasets are targeted at image documents only. Moreover, Precision and recall measurement are usual practice in order to account performance based on human evaluation. In this paper, we provide a dataset that is representative, large and most importantly, publicly available. The compatible format of the ground truth makes evaluation independent of document medium. We also propose a set of new measures, implement them, and open the source code. Finally, three existing table detection algorithms are evaluated to demonstrate the reliability of the dataset and metrics.", "Table detection is a challenging problem and plays an important role in document layout analysis. In this paper, we propose an effective method to identify the table region from document images. First, the regions of interest (ROIs) are recognized as the table candidates. In each ROI, we locate text components and extract text blocks. After that, we check all text blocks to determine if they are arranged horizontally or vertically and compare the height of each text block with the average height. If the text blocks satisfy a series of rules, the ROI is regarded as a table. 
Experiments on the ICDAR 2013 dataset show that the results obtained are very encouraging. This proves the effectiveness and superiority of our proposed method.", "Table detection is a crucial step in many document analysis applications as tables are used for presenting essential information to the reader in a structured manner. It is a hard problem due to varying layouts and encodings of the tables. Researchers have proposed numerous techniques for table detection based on layout analysis of documents. Most of these techniques fail to generalize because they rely on hand engineered features which are not robust to layout variations. In this paper, we have presented a deep learning based method for table detection. In the proposed method, document images are first pre-processed. These images are then fed to a Region Proposal Network followed by a fully connected neural network for table detection. The proposed method works with high precision on document images with varying layouts that include documents, research papers, and magazines. We have done our evaluations on publicly available UNLV dataset where it beats Tesseract's state of the art table detection system by a significant margin.", "Because of the better performance of deep learning on many computer vision tasks, researchers in the area of document analysis and recognition begin to adopt this technique into their work. In this paper, we propose a novel method for table detection in PDF documents based on convolutional neutral networks, one of the most popular deep learning models. In the proposed method, some table-like areas are selected first by some loose rules, and then the convolutional networks are built and refined to determine whether the selected areas are tables or not. Besides, the visual features of table areas are directly extracted and utilized through the convolutional networks, while the non-visual information (e.g. characters, rendering instructions) contained in original PDF documents is also taken into consideration to help achieve better recognition results. The primary experimental results show that the approach is effective in table detection.", "Table spotting and structural analysis are just a small fraction of tasks relevant when speaking of table analysis. Today, quite a large number of different approaches facing these tasks have been described in literature or are available as part of commercial OCR systems that claim to deal with tables on the scanned documents and to treat them accordingly. However, the problem of detecting tables is not yet solved at all. Different approaches have different strengths and weak points. Some fail in certain situations or layouts where others perform better. How shall one know, which approach or system is the best for his specific job? The answer to this question raises the demand for an objective comparison of different approaches which address the same task of spotting tables and recognizing their structure. This paper describes our approach towards establishing a complete and publicly available, hence open environment for the benchmarking of table spotting and structural analysis. We provide free access to the ground truthing tool and evaluation mechanism described in this paper, describe the ideas behind and we also provide ground truth for the 547 documents of the UNLV and UW-3 datasets that contain tables. 
In addition, we applied the quality measures to the results that were generated by the T-Recs system which we developed some years ago and which we started to further advance since a few months.", "Deep Convolutional Neural Networks (DCNNs) have recently been applied successfully to a variety of vision and multimedia tasks, thus driving development of novel solutions in several application domains. Document analysis is a particularly promising area for DCNNs: indeed, the number of available digital documents has reached unprecedented levels, and humans are no longer able to discover and retrieve all the information contained in these documents without the help of automation. Under this scenario, DCNNs offers a viable solution to automate the information extraction process from digital documents. Within the realm of information extraction from documents, detection of tables and charts is particularly needed as they contain a visual summary of the most valuable information contained in a document. For a complete automation of visual information extraction process from tables and charts, it is necessary to develop techniques that localize them and identify precisely their boundaries. In this paper we aim at solving the table chart detection task through an approach that combines deep convolutional neural networks, graphical models and saliency concepts. In particular, we propose a saliency-based fully-convolutional neural network performing multi-scale reasoning on visual cues followed by a fully-connected conditional random field (CRF) for localizing tables and charts in digital digitized documents. Performance analysis carried out on an extended version of ICDAR 2013 (with annotated charts as well as tables) shows that our approach yields promising results, outperforming existing models.", "Tables are a common structuring element in many documents, such as PDF files. To reuse such tables, appropriate methods need to be develop, which capture the structure and the content information. We have developed several heuristics which together recognize and decompose tables in PDF files and store the extracted data in a structured data format (XML) for easier reuse. Additionally, we implemented a prototype, which gives the user the ability of making adjustments on the extracted data. Our work shows that purely heuristic-based approaches can achieve good results, especially for lucid tables." ] }
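The rule-based strand above (e.g., the third abstract: locate text blocks in a region of interest, check whether they are arranged horizontally or vertically, and compare block heights against the average) can be illustrated with a short sketch. The following Python is a loose, hypothetical rendering of such heuristics — the block detector, thresholds, and rule set are all assumptions, not taken from any cited system.

```python
from dataclasses import dataclass

@dataclass
class TextBlock:
    x: float  # left edge
    y: float  # top edge
    w: float  # width
    h: float  # height

def looks_like_table(blocks, y_tol=5.0, h_ratio=1.5, min_rows=2):
    """Heuristic table test for a region of interest: text blocks should
    align into several horizontal rows, most rows should contain multiple
    columns, and no block should be far taller than the average block."""
    if not blocks:
        return False
    avg_h = sum(b.h for b in blocks) / len(blocks)
    if any(b.h > h_ratio * avg_h for b in blocks):
        return False  # an oversized block suggests a heading or figure

    rows = []  # [(row_center_y, [blocks in that row]), ...]
    for b in sorted(blocks, key=lambda b: b.y):
        cy = b.y + b.h / 2
        if rows and abs(cy - rows[-1][0]) <= y_tol:
            rows[-1][1].append(b)  # same horizontal band as previous row
        else:
            rows.append((cy, [b]))
    multi_col = sum(1 for _, r in rows if len(r) >= 2)
    return len(rows) >= min_rows and multi_col >= min_rows
```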
1908.07801
2969390690
Instance segmentation requires a large number of training samples to achieve satisfactory performance and benefits from proper data augmentation. To enlarge the training set and increase the diversity, previous methods have investigated using data annotations from other domains (e.g. bbox, point) in a weakly supervised manner. In this paper, we present a simple, efficient and effective method to augment the training set using the existing instance mask annotations. Exploiting the pixel redundancy of the background, we are able to improve the performance of Mask R-CNN by 1.7 mAP on the COCO dataset and 3.3 mAP on the Pascal VOC dataset by simply introducing random jittering to objects. Furthermore, we propose a location probability map based approach to explore the feasible locations where objects can be placed based on local appearance similarity. With the guidance of such a map, we boost the performance of R101-Mask R-CNN on instance segmentation from 35.7 mAP to 37.9 mAP without modifying the backbone or network structure. Our method is simple to implement and does not increase the computational complexity. It can be integrated into the training pipeline of any instance segmentation model without affecting the training and inference efficiency. Our code and models have been released at this https URL
Combining instance detection and semantic segmentation, instance segmentation @cite_45 @cite_19 @cite_4 @cite_29 @cite_39 @cite_5 @cite_11 @cite_27 is a much harder problem. Earlier methods either propose segmentation candidates followed by classification @cite_40 , or group pixels on the semantic segmentation map into different instances @cite_6 . Recently, FCIS @cite_10 proposed the first fully convolutional end-to-end solution to instance segmentation, which predicts position-sensitive channels @cite_41 for instance segmentation. This idea is further developed by @cite_20 , which outperforms competing methods on the COCO dataset @cite_9 . With the help of FPN @cite_24 and a precise pooling scheme named RoIAlign, He et al. @cite_33 proposed a two-stage model, Mask R-CNN, that extends the Faster R-CNN framework with a mask head and achieves state-of-the-art results on instance segmentation @cite_42 and pose estimation @cite_12 tasks. Although these methods have reached impressive performance on public datasets, such heavy deep models are hungry for extremely large amounts of training data, which are usually not available in real-world applications. Furthermore, the potential of large datasets is not fully exploited by existing training methods.
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_41", "@cite_29", "@cite_9", "@cite_42", "@cite_6", "@cite_39", "@cite_24", "@cite_19", "@cite_27", "@cite_45", "@cite_40", "@cite_5", "@cite_10", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "", "", "2317851288", "", "2952122856", "", "2010792042", "", "2949533892", "", "", "2949295283", "809122546", "", "2949259132", "2902407103", "2964236837", "" ], "abstract": [ "", "", "Fully convolutional networks (FCNs) have been proven very successful for semantic segmentation, but the FCN outputs are unaware of object instances. In this paper, we develop FCNs that are capable of proposing instance-level segment candidates. In contrast to the previous FCN that generates one score map, our FCN is designed to compute a small set of instance-sensitive score maps, each of which is the outcome of a pixel-wise classifier of a relative position to instances. On top of these instance-sensitive score maps, a simple assembling module is able to output instance candidate at each position. In contrast to the recent DeepMask method for segmenting instances, our method does not have any high-dimensional layer related to the mask resolution, but instead exploits image local coherence for estimating instances. We present competitive results of instance segment proposal on both PASCAL VOC and MS COCO.", "", "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.", "", "In recent years, the watershed line has emerged as the primary tool of mathematical morphology for image segmentation. Several very efficient algorithms have been devised for the determination of watersheds. Nevertheless, the application of watershed algorithms to an image is often disappointing: the image is oversegmented into a large number of tiny, shallow watersheds, where one wanted to obtain only a few deep ones. This paper presents a novel approach to watershed merging. Mainly, it addresses the following question: given an image, what is the closest image that has a simpler watershed structure? The basic idea is to replicate the process of watershed merging that takes place when rain falls over a real landscape: smaller watersheds progressively fill until an overflow occurs. The water then flows to a nearby, larger or deeper watershed, in which the overflown watersheds are merged. The methods presented in this paper apply the minimum extensive modifications possible to a given image to obtain a new one that has many fewer watersheds but is still “close” to the original. 
Their usefulness is demonstrated for several biomedical applications.", "", "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.", "", "", "Semantic segmentation research has recently witnessed rapid progress, but many leading methods are unable to identify object instances. In this paper, we present Multi-task Network Cascades for instance-aware semantic segmentation. Our model consists of three networks, respectively differentiating instances, estimating masks, and categorizing objects. These networks form a cascaded structure, and are designed to share their convolutional features. We develop an algorithm for the nontrivial end-to-end training of this causal, cascaded structure. Our solution is a clean, single-step training framework and can be generalized to cascades that have more stages. We demonstrate state-of-the-art instance-aware semantic segmentation accuracy on PASCAL VOC. Meanwhile, our method takes only 360ms testing an image using VGG-16, which is two orders of magnitude faster than previous systems for this challenging problem. As a by product, our method also achieves compelling object detection results which surpass the competitive Fast Faster R-CNN systems. The method described in this paper is the foundation of our submissions to the MS COCO 2015 segmentation competition, where we won the 1st place.", "Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown they can be fast, while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood score. We show that our model yields significant improvements over state-of-the-art object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. 
We also show that our model is able to generalize to unseen categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation.", "", "We present the first fully convolutional end-to-end solution for instance-aware semantic segmentation task. It inherits all the merits of FCNs for semantic segmentation and instance mask proposal. It performs instance mask prediction and classification jointly. The underlying convolutional representation is fully shared between the two sub-tasks, as well as between all regions of interest. The proposed network is highly integrated and achieves state-of-the-art performance in both accuracy and efficiency. It wins the COCO 2016 segmentation competition by a large margin. Code would be released at this https URL .", "Multi-person pose estimation is fundamental to many computer vision tasks and has made significant progress in recent years. However, few previous methods explored the problem of pose estimation in crowded scenes while it remains challenging and inevitable in many scenarios. Moreover, current benchmarks cannot provide an appropriate evaluation for such cases. In this paper, we propose a novel and efficient method to tackle the problem of pose estimation in the crowd and a new dataset to better evaluate algorithms. Our model consists of two key components: joint-candidate single person pose estimation (SPPE) and global maximum joints association. With multi-peak prediction for each joint and global association using graph model, our method is robust to inevitable interference in crowded scenes and very efficient in inference. The proposed method surpasses the state-of-the-art methods on CrowdPose dataset by 5.2 mAP and results on MSCOCO dataset demonstrate the generalization ability of our method. Source code and dataset will be made publicly available.", "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.", "" ] }
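The random-jittering augmentation described in this record's abstract is easy to sketch: an object is lifted with its instance mask and re-pasted at a slightly shifted location, exploiting the pixel redundancy of the background. The NumPy sketch below is a hedged illustration only — the jitter range, the background fill, and the collision handling are simplified assumptions, not the authors' released implementation.

```python
import numpy as np

def jitter_instance(image, mask, max_shift=20, rng=None):
    """Re-paste one object (given by its boolean instance mask) at a
    randomly jittered location. Returns the augmented image and mask."""
    rng = rng or np.random.default_rng()
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)

    ys, xs = np.nonzero(mask)
    ny, nx = ys + dy, xs + dx

    # Keep the jittered object fully inside the frame; otherwise skip.
    h, w = mask.shape
    if ny.min() < 0 or nx.min() < 0 or ny.max() >= h or nx.max() >= w:
        return image, mask

    out_img, out_mask = image.copy(), np.zeros_like(mask)
    # Naive background fill for the vacated pixels: mean background color.
    out_img[ys, xs] = image[mask == 0].mean(axis=0)
    # Paste the object at its new position and update the mask.
    out_img[ny, nx] = image[ys, xs]
    out_mask[ny, nx] = True
    return out_img, out_mask
```

In the refined variant from the abstract, the paste location would instead be sampled from a location probability map built from local appearance similarity, rather than uniformly at random.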
1908.07820
2969722339
Multi-Task Learning (MTL) aims at boosting the overall performance of each individual task by leveraging useful information contained in multiple related tasks. It has shown great success in natural language processing (NLP). Currently, a number of MTL architectures and learning mechanisms have been proposed for various NLP tasks. However, there has been no systematic, in-depth exploration and comparison of different MTL architectures and learning mechanisms regarding their strong performance. In this paper, we conduct a thorough examination of typical MTL methods on a broad range of representative NLP tasks. Our primary goal is to understand the merits and demerits of existing MTL methods in NLP tasks, thus devising new hybrid architectures intended to combine their strengths.
Multi-task learning with deep neural networks has gained increasing attention within the NLP community over the past decades. @cite_3 and @cite_21 described most of the existing techniques for multi-task learning in deep neural networks. Generally, existing MTL methods can be categorised as soft parameter sharing @cite_29 @cite_10 and hard parameter sharing @cite_20 @cite_32 .
{ "cite_N": [ "@cite_29", "@cite_21", "@cite_32", "@cite_3", "@cite_10", "@cite_20" ], "mid": [ "2963877604", "2742079690", "2914526845", "2624871570", "", "2308486447" ], "abstract": [ "Multi-task learning in Convolutional Networks has displayed remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, that do not generalize. In this paper, we propose a principled approach to learn shared representations in ConvNets using multitask learning. Specifically, we propose a new sharing unit: \"cross-stitch\" unit. These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples.", "Multi-Task Learning (MTL) is a learning paradigm in machine learning and its aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. In this paper, we give a survey for MTL. First, we classify different MTL algorithms into several categories, including feature learning approach, low-rank approach, task clustering approach, task relation learning approach, and decomposition approach, and then discuss the characteristics of each approach. In order to improve the performance of learning tasks further, MTL can be combined with other learning paradigms including semi-supervised learning, active learning, unsupervised learning, reinforcement learning, multi-view learning and graphical models. When the number of tasks is large or the data dimensionality is high, batch MTL models are difficult to handle this situation and online, parallel and distributed MTL models as well as dimensionality reduction and feature hashing are reviewed to reveal their computational and storage advantages. Many real-world applications use MTL to boost their performance and we review representative works. Finally, we present theoretical analyses and discuss several future directions for MTL.", "In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks. MT-DNN not only leverages large amounts of cross-task data, but also benefits from a regularization effect that leads to more general representations in order to adapt to new tasks and domains. MT-DNN extends the model proposed in (2015) by incorporating a pre-trained bidirectional transformer language model, known as BERT (, 2018). MT-DNN obtains new state-of-the-art results on ten NLU tasks, including SNLI, SciTail, and eight out of nine GLUE tasks, pushing the GLUE benchmark to 82.7 (2.2 absolute improvement). We also demonstrate using the SNLI and SciTail datasets that the representations learned by MT-DNN allow domain adaptation with substantially fewer in-domain labels than the pre-trained BERT representations. The code and pre-trained models are publicly available at this https URL.", "Multi-task learning (MTL) has led to successes in many applications of machine learning, from natural language processing and speech recognition to computer vision and drug discovery. 
This article aims to give a general overview of MTL, particularly in deep neural networks. It introduces the two most common methods for MTL in Deep Learning, gives an overview of the literature, and discusses recent advances. In particular, it seeks to help ML practitioners apply MTL by shedding light on how MTL works and providing guidelines for choosing appropriate auxiliary tasks.", "", "We present a deep hierarchical recurrent neural network for sequence tagging. Given a sequence of words, our model employs deep gated recurrent units on both character and word levels to encode morphology and context information, and applies a conditional random field layer to predict the tags. Our model is task independent, language independent, and feature engineering free. We further extend our model to multi-task and cross-lingual joint training by sharing the architecture and parameters. Our model achieves state-of-the-art results in multiple languages on several benchmark tasks including POS tagging, chunking, and NER. We also demonstrate that multi-task and cross-lingual joint training can improve the performance in various cases." ] }
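To make the soft-sharing category concrete, the cross-stitch unit of @cite_29 mixes the activations of two task-specific networks through a small learnable matrix. The sketch below shows the mechanism in plain NumPy with an assumed fixed initialization; in practice, alpha is a trainable parameter updated end-to-end together with both networks, and stitching units are typically inserted at several layers.

```python
import numpy as np

class CrossStitch:
    """Learnable 2x2 mixing of activations from two task networks:
    [xA', xB']^T = alpha @ [xA, xB]^T, applied element-wise per unit."""
    def __init__(self):
        # Initialized close to identity: mostly task-specific, some sharing.
        self.alpha = np.array([[0.9, 0.1],
                               [0.1, 0.9]])

    def __call__(self, x_a, x_b):
        out_a = self.alpha[0, 0] * x_a + self.alpha[0, 1] * x_b
        out_b = self.alpha[1, 0] * x_a + self.alpha[1, 1] * x_b
        return out_a, out_b

# Example: mix two (assumed) feature maps of identical shape.
stitch = CrossStitch()
feat_a, feat_b = stitch(np.ones((4, 4)), np.zeros((4, 4)))
```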
1908.07752
2969591045
Clinicians and other analysts working with healthcare data are in need of better support to cope with large and complex data. While an increasing number of visual analytics environments integrate explicit domain knowledge as a means to deliver a precise representation of the available data, theoretical work so far has focused on the role of knowledge in the visual analytics process. There has been little discussion about how such explicit domain knowledge can be structured in a generalized framework. This paper collects desiderata for such a structural framework, proposes how to address these desiderata based on the model of linked data, and demonstrates the applicability in a visual analytics environment for physiotherapy.
Since "Illuminating the Path" [p. 35] thomas_2005_illuminating , incorporating "prior domain knowledge and build[ing] knowledge structures" has been on VA's agenda. This is underscored by the pivotal position of knowledge in the VA process model by @cite_40 @cite_36 and further process models such as the knowledge generation model by @cite_47 and the visualization model by van Wijk @cite_50 . However, these process models do not differentiate between knowledge in the human space and in the machine space. Based on @cite_17 , Federico et al. @cite_4 delineate tacit knowledge, which is exclusively available to human reasoning, from explicit knowledge, which can be leveraged by the VA environment. How explicit knowledge is integrated into the VA process is formalized in several recent models by @cite_17 , @cite_20 , and Federico et al. @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_36", "@cite_40", "@cite_50", "@cite_47", "@cite_20", "@cite_17" ], "mid": [ "2905763413", "1506965148", "2097481508", "2142068257", "2090687959", "2294575332", "197746879" ], "abstract": [ "Visual Analytics (VA) aims to combine the strengths of humans and computers for effective data analysis. In this endeavor, humans’ tacit knowledge from prior experience is an important asset that can be leveraged by both human and computer to improve the analytic process. While VA environments are starting to include features to formalize, store, and utilize such knowledge, the mechanisms and degree in which these environments integrate explicit knowledge varies widely. Additionally, this important class of VA environments has never been elaborated on by existing work on VA theory. This paper proposes a conceptual model of Knowledge-assisted VA conceptually grounded on the visualization model by van Wijk. We apply the model to describe various examples of knowledge-assisted VA from the literature and elaborate on three of them in finer detail. Moreover, we illustrate the utilization of the model to compare different design alternatives and to evaluate existing approaches with respect to their use of knowledge. Finally, the model can inspire designers to generate novel VA environments using explicit knowledge effectively.", "", "In today's applications data is produced at unprecedented rates. While the capacity to collect and store new data rapidly grows, the ability to analyze these data volumes increases at much lower rates. This gap leads to new challenges in the analysis process, since analysts, decision makers, engineers, or emergency response teams depend on information hidden in the data. The emerging field of visual analytics focuses on handling these massive, heterogenous, and dynamic volumes of information by integrating human judgement by means of visual representations and interaction techniques in the analysis process. Furthermore, it is the combination of related research areas including visualization, data mining, and statistics that turns visual analytics into a promising field of research. This paper aims at providing an overview of visual analytics, its scope and concepts, addresses the most important research challenges and presents use cases from a wide variety of application scenarios.", "The field of visualization is getting mature. Many problems have been solved, and new directions are sought for. In order to make good choices, an understanding of the purpose and meaning of visualization is needed. Especially, it would be nice if we could assess what a good visualization is. In this paper an attempt is made to determine the value of visualization. A technological viewpoint is adopted, where the value of visualization is measured based on effectiveness and efficiency. An economic model of visualization is presented, and benefits and costs are established. Next, consequences (brand limitations of visualization are discussed (including the use of alternative methods, high initial costs, subjective less, and the role of interaction), as well as examples of the use of the model for the judgement of existing classes of methods and understanding why they are or are not used in practice. Furthermore, two alternative views on visualization are presented and discussed: viewing visualization as an art or as a scientific discipline. 
Implications and future directions are identified.", "Visual analytics enables us to analyze huge information spaces in order to support complex decision making and data exploration. Humans play a central role in generating knowledge from the snippets of evidence emerging from visual data analysis. Although prior research provides frameworks that generalize this process, their scope is often narrowly focused so they do not encompass different perspectives at different levels. This paper proposes a knowledge generation model for visual analytics that ties together these diverse frameworks, yet retains previously developed models (e.g., KDD process) to describe individual segments of the overall visual analytic processes. To test its utility, a real world visual analytics system is compared against the model, demonstrating that the knowledge generation process model provides a useful guideline when developing and evaluating such systems. The model is used to effectively compare different data analysis systems. Furthermore, the model provides a common language and description of visual analytic processes, which can be used for communication between researchers. At the end, our model reflects areas of research that future researchers can embark on.", "We take a visual analytics approach towards an operational model of the human-computer system. In particular, the approach combines ideas from (human-centered) interactive visualization and cognitive science. The model we derive is a first step on the path to a more complete evaluated and validated model. However, even at this stage important principles can be extracted for visual analytics systems that closely couple automated analyses with human analytic reasoning and decision-making. These improved systems can then be applied effectively to difficult, open-ended problems involving complex data. Another advantage of this approach is that specific gaps are revealed in both visual analytics methods and cognitive science understanding that must be filled in order to create the most effective systems. Related to this is that the resulting visual analytics systems built upon the human-computer model will provide testbeds to further evaluate and extend cognitive science principles.", "Knowledge-assisted visualization has been a fast growing field because it directly integrates and utilizes domain knowledge to produce effective data visualization. However, most existing knowledge-assisted visualization applications focus on integrating domain knowledge that is tailored only for specific analytical tasks. This reflects not only the different understandings of what ''knowledge'' is in visualization, but also the difficulties in generalizing and reapplying knowledge to new problems or domains. In this paper, we differentiate knowledge into two types, tacit and explicit, and suggest four conversion processes between them (internalization, externalization, collaboration, and combination) that could be included in knowledge-assisted visualizations. We demonstrate the applications of these four processes in a bridge visual analytical system for the US Department of Transportation and discuss their roles and utilities in real-life scenarios." ] }
1908.07752
2969591045
Clinicians and other analysts working with healthcare data are in need of better support to cope with large and complex data. While an increasing number of visual analytics environments integrate explicit domain knowledge as a means to deliver a precise representation of the available data, theoretical work so far has focused on the role of knowledge in the visual analytics process. There has been little discussion about how such explicit domain knowledge can be structured in a generalized framework. This paper collects desiderata for such a structural framework, proposes how to address these desiderata based on the model of linked data, and demonstrates the applicability in a visual analytics environment for physiotherapy.
Beyond the role of knowledge in the VA process, only a few works discuss the content and structure of explicit knowledge on a general level. @cite_7 conceptualize domain knowledge as a model of a part of reality and provide definitions for different types of models, but they do not specify the form and medium in which the model is represented. @cite_33 formalize data descriptors that include domain knowledge about data. Tominski @cite_26 captures domain knowledge as event types that are specified using predicate logic. Lammarsch et al. @cite_43 propose a data structure for knowledge about temporal patterns that leverages the structure of time. The generation of adapted visualizations based on ontological datasets and the specification of ontological mappings are treated by @cite_16 . To this end, they use the COGZ tool, which converts ontological mappings into software transformation rules that describe a model fitting the visualization. A similar approach to adapted visualizations is followed by @cite_53 , who describe a general system pipeline combining ontology mapping and probabilistic reasoning techniques to automatically generate visualizations of domain-specific data from the web. However, none of these approaches aims for a general framework.
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_7", "@cite_53", "@cite_43", "@cite_16" ], "mid": [ "2169236565", "2534686024", "2788844524", "2042724135", "1594602328", "2098960900" ], "abstract": [ "Visualization has become an increasingly important tool to support exploration and analysis of the large volumes of data we are facing today. However, interests and needs of users are still not being considered sufficiently. The goal of this work is to shift the user into the focus. To that end, we apply the concept of event-based visualization that combines event-based methodology and visualization technology. Previous approaches that make use of events are mostly specific to a particular application case, and hence, can not be applied otherwise. We introduce a novel general model of eventbased visualization that comprises three fundamental stages. (I) Users are enabled to specify what their interests are. (2) During visualization, matches of these interests are sought in the data. (3) It is then possible to automatically adjust visual representations according to the detected matches. This way, it is possible to generate visual representations that better reflect what users need for their task at hand. The model's generality allows its application in many visualization contexts. We substantiate the general model with specific datadriven events that focus on relational data so prevalent in today's visualization scenarios. We show how the developed methods and concepts can be implemented in an interactive event-based visualization framework, which includes event-enhanced visualizations for temporal and spatio-temporal data.", "Visualization has become an important ingredient of data analysis, supporting users in exploring data and confirming hypotheses. At the beginning of a visual data analysis process, data characteristics are often assessed in an initial data profiling step. These include, for example, statistical properties of the data and information on the data’s well-formedness, which can be used during the subsequent analysis to adequately parametrize views and to highlight or exclude data items. We term this information data descriptors, which can span such diverse aspects as the data’s provenance, its storage schema, or its uncertainties. Gathered descriptors encapsulate basic knowledge about the data and can thus be used as objective starting points for the visual analysis process. In this article, we bring together these different aspects in a systematic form that describes the data itself (e.g. its content and context) and its relation to the larger data gathering and visual analysis process (e.g. its provenance an...", "To complement the currently existing definitions and conceptual frameworks of visual analytics, which focus mainly on activities performed by analysts and types of techniques they use, we attempt to define the expected results of these activities. We argue that the main goal of doing visual analytics is to build a mental and or formal model of a certain piece of reality reflected in data. The purpose of the model may be to understand, to forecast or to control this piece of reality. Based on this model-building perspective, we propose a detailed conceptual framework in which the visual analytics process is considered as a goal-oriented workflow producing a model as a result. 
We demonstrate how this framework can be used for performing an analytical survey of the visual analytics research field and identifying the directions and areas where further research is needed.", "In this paper, we propose a novel approach for automatic generation of visualizations from domain-specific data available on the web. We describe a general system pipeline that combines ontology mapping and probabilistic reasoning techniques. With this approach, a web page is first mapped to a Domain Ontology, which stores the semantics of a specific subject domain (e.g., music charts). The Domain Ontology is then mapped to one or more Visual Representation Ontologies, each of which captures the semantics of a visualization style (e.g., tree maps). To enable the mapping between these two ontologies, we establish a Semantic Bridging Ontology, which specifies the appropriateness of each semantic bridge. Finally each Visual Representation Ontology is mapped to a visualization using an external visualization toolkit. Using this approach, we have developed a prototype software tool, SemViz, as a realisation of this approach. By interfacing its Visual Representation Ontologies with public domain software such as ILOG Discovery and Prefuse, SemViz is able to generate appropriate visualizations automatically from a large collection of popular web pages for music charts without prior knowledge of these web pages.", "The primary goal of Visual Analytics (VA) is the close intertwinedness of human reasoning and automated methods. An important task for this goal is formulating a description for such a VA process. We propose the design of a VA process description that uses the inherent structure contained in time-oriented data as a way to improve the integration of human reasoning. This structure can, for example, be seen in the calendar aspect of time being composed of smaller granularities, like years and seasons. Domain experts strongly consider this structure in their reasoning, so VA needs to consider it, too.", "We explore how to support the creation of customized visualizations of ontology instance data through the specification of ontology mappings. We combine technologies from the disciplines of software modeling and ontology engineering. The feasibility of our approach is demonstrated by extending an existing ontology mapping tool, CogZ, to translate ontology mappings into software model transformation rules. The tool uses these transformations to automatically convert domain instance data into data that conforms to a model describing a visualization. After this transformation, a visualization of the domain instance data is generated." ] }
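Tominski's event types (@cite_26 above), specified using predicate logic, reduce to a simple pattern: an event type is a predicate over data records, and detected matches drive adjustments of the visual representation. The following Python fragment is purely illustrative; the data schema and threshold values are invented.

```python
# An "event type" is a predicate over data records; detected matches
# are what the visualization reacts to (e.g., by highlighting them).
heat_event = lambda rec: rec["temperature"] > 30.0 and rec["humidity"] < 0.2

records = [
    {"id": 1, "temperature": 33.1, "humidity": 0.15},
    {"id": 2, "temperature": 24.6, "humidity": 0.40},
]

matches = [rec["id"] for rec in records if heat_event(rec)]
print(matches)  # -> [1]; these records would be emphasized in the view
```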
1908.07752
2969591045
Clinicians and other analysts working with healthcare data are in need of better support to cope with large and complex data. While an increasing number of visual analytics environments integrate explicit domain knowledge as a means to deliver a precise representation of the available data, theoretical work so far has focused on the role of knowledge in the visual analytics process. There has been little discussion about how such explicit domain knowledge can be structured in a generalized framework. This paper collects desiderata for such a structural framework, proposes how to address these desiderata based on the model of linked data, and demonstrates the applicability in a visual analytics environment for physiotherapy.
In summary, it can be seen that most of the discussed approaches cover how explicit domain knowledge can be exploited to enhance visual representation and data analysis; some approaches provide methods to generate explicit knowledge. Additionally, most of the currently implemented knowledge-assisted VA environments focus on the integration of specific domain knowledge, which can only be used for precisely defined analysis tasks. In general, explicit knowledge is now a first-class artifact in the VA process, but its form and structure are left unspecified. None of the presented approaches provides a structural framework for describing and storing explicit knowledge in VA environments. Thus, a structural framework is needed; combined with the theoretical process model by Federico et al. @cite_4 , it would provide valuable generative guidelines for the development of novel knowledge-assisted VA environments.
{ "cite_N": [ "@cite_4" ], "mid": [ "2905763413" ], "abstract": [ "Visual Analytics (VA) aims to combine the strengths of humans and computers for effective data analysis. In this endeavor, humans’ tacit knowledge from prior experience is an important asset that can be leveraged by both human and computer to improve the analytic process. While VA environments are starting to include features to formalize, store, and utilize such knowledge, the mechanisms and degree in which these environments integrate explicit knowledge varies widely. Additionally, this important class of VA environments has never been elaborated on by existing work on VA theory. This paper proposes a conceptual model of Knowledge-assisted VA conceptually grounded on the visualization model by van Wijk. We apply the model to describe various examples of knowledge-assisted VA from the literature and elaborate on three of them in finer detail. Moreover, we illustrate the utilization of the model to compare different design alternatives and to evaluate existing approaches with respect to their use of knowledge. Finally, the model can inspire designers to generate novel VA environments using explicit knowledge effectively." ] }
1908.07516
2969539381
Neural network image reconstruction directly from measurement data is a growing field of research, but until now has been limited to producing small (e.g. 128x128) 2D images by the large memory requirements of the previously suggested networks. In order to facilitate further research with direct reconstruction, we developed a more efficient network capable of 3D reconstruction of Radon encoded data with a relatively large image matrix (e.g. 400x400). Our proposed network is able to produce image quality comparable to the benchmark Ordered Subsets Expectation Maximization (OSEM) algorithm. We address the most memory intensive aspect of transforming the data from sinogram space to image space through a specially designed Radon inversion layer. We insert this layer between an initial network segment designed to encode the sinogram input and an output segment designed to refine and scale the initial image estimate to produce the final image. We demonstrate 3D reconstructions comparable to OSEM for 1, 4, 8 and 16 slices with no modifications to the network's architecture, capacity or hyper-parameters on a data set of simulated PET whole-body scans. When batch operations are considered, this network can reconstruct an entire PET whole-body volume in a single pass or about one second. Although results in this paper are on PET data, the proposed methods would be equally applicable to X-ray CT or any other Radon encoded measurement data.
The terms deep learning and image reconstruction are often used in conjunction to describe a significant amount of recent research @cite_24 that most often falls into one of two categories: 1) combining deep learning with an analytical or statistical method, such as using a deep learning prior @cite_13 or regularization term @cite_16 , or 2) using neural networks as a nonlinear filter for denoising @cite_3 @cite_13 , artifact mitigation @cite_22 , and other post-reconstruction tasks. Neural network image formation directly from measurement data, by contrast, is significantly less common.
{ "cite_N": [ "@cite_22", "@cite_3", "@cite_24", "@cite_16", "@cite_13" ], "mid": [ "2753044865", "2810690955", "2803224943", "2800924046", "2795777276" ], "abstract": [ "In the presence of met al implants, met al artifacts are introduced to x-ray computed tomography CT images. Although a large number of met al artifact reduction (MAR) methods have been proposed in the past decades, MAR is still one of the major problems in clinical x-ray CT. In this paper, we develop a convolutional neural network (CNN)-based open MAR framework, which fuses the information from the original and corrected images to suppress artifacts. The proposed approach consists of two phases. In the CNN training phase, we build a database consisting of met al-free, met al-inserted and pre-corrected CT images, and image patches are extracted and used for CNN training. In the MAR phase, the uncorrected and pre-corrected images are used as the input of the trained CNN to generate a CNN image with reduced artifacts. To further reduce the remaining artifacts, water equivalent tissues in a CNN image are set to a uniform value to yield a CNN prior, whose forward projections are used to replace the met al-affected projections, followed by the FBP reconstruction. The effectiveness of the proposed method is validated on both simulated and real data. Experimental results demonstrate the superior MAR capability of the proposed method to its competitors in terms of artifact suppression and preservation of anatomical structures in the vicinity of met al implants.", "", "Over past several years, machine learning, or more generally artificial intelligence, has generated overwhelming research interest and attracted unprecedented public attention. As tomographic imaging researchers, we share the excitement from our imaging perspective [item 1) in the Appendix], and organized this special issue dedicated to the theme of “Machine learning for image reconstruction.” This special issue is a sister issue of the special issue published in May 2016 of this journal with the theme “Deep learning in medical imaging” [item 2) in the Appendix]. While the previous special issue targeted medical image processing analysis, this special issue focuses on data-driven tomographic reconstruction. These two special issues are highly complementary, since image reconstruction and image analysis are two of the main pillars for medical imaging. Together we cover the whole workflow of medical imaging: from tomographic raw data features to reconstructed images and then extracted diagnostic features readings.", "Cone-beam computed tomography (CBCT) plays an important role in radiation therapy. Statistical iterative reconstruction (SIR) algorithms with specially designed penalty terms provide good performance for low-dose CBCT imaging. Among others, the total variation (TV) penalty is the current state-of-the-art in removing noises and preserving edges, but one of its well-known limitations is its staircase effect. Recently, various penalty terms with higher order differential operators were proposed to replace the TV penalty to avoid the staircase effect, at the cost of slightly blurring object edges. We developed a novel SIR algorithm using a neural network for CBCT reconstruction. We used a data-driven method to learn the “potential regularization term” rather than design a penalty term manually. 
This approach converts the problem of designing a penalty term in the traditional statistical iterative framework to designing and training a suitable neural network for CBCT reconstruction. We proposed using transfer learning to overcome the data deficiency problem and an iterative deblurring approach specially designed for the CBCT iterative reconstruction process during which the noise level and resolution of the reconstructed images may change. Through experiments conducted on two physical phantoms, two simulation digital phantoms, and patient data, we demonstrated the excellent performance of the proposed network-based SIR for CBCT reconstruction, both visually and quantitatively. Our proposed method can overcome the staircase effect, preserve both edges and regions with smooth intensity transition, and provide reconstruction results at high resolution and low noise level.", "Model-based iterative reconstruction algorithms for low-dose X-ray computed tomography (CT) are computationally expensive. To address this problem, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won the second place in 2016 AAPM Low-Dose CT Grand Challenge. However, some of the textures were not fully recovered. To address this problem, here we propose a novel framelet-based denoising algorithm using wavelet residual network which synergistically combines the expressive power of deep learning and the performance guarantee from the framelet-based denoising algorithms. The new algorithms were inspired by the recent interpretation of the deep CNN as a cascaded convolution framelet signal representation. Extensive experimental results confirm that the proposed networks have significantly improved performance and preserve the detail texture of the original images." ] }
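To make the first category above concrete, a common pattern is to keep the classical data-fidelity gradient and let a denoiser stand in for a learned regularizer. The NumPy sketch below is schematic — it uses a Gaussian blur in place of a trained network and does not correspond to any specific cited method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def recon_with_prior(A, y, steps=100, lr=1e-3, lam=0.5, shape=(64, 64)):
    """Gradient descent on ||Ax - y||^2 plus a denoiser-based prior term
    (x - D(x)), a stand-in for a learned regularizer.
    Assumes shape[0] * shape[1] == A.shape[1]."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad_fid = A.T @ (A @ x - y)                 # data-fidelity gradient
        img = x.reshape(shape)
        # Denoiser residual as the prior gradient; D(x) is a blur here.
        prior = (img - gaussian_filter(img, sigma=1.0)).ravel()
        x -= lr * (grad_fid + lam * prior)
    return x.reshape(shape)
```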
1908.07516
2969539381
Neural network image reconstruction directly from measurement data is a growing field of research, but until now has been limited to producing small (e.g. 128x128) 2D images by the large memory requirements of the previously suggested networks. In order to facilitate further research with direct reconstruction, we developed a more efficient network capable of 3D reconstruction of Radon encoded data with a relatively large image matrix (e.g. 400x400). Our proposed network is able to produce image quality comparable to the benchmark Ordered Subsets Expectation Maximization (OSEM) algorithm. We address the most memory intensive aspect of transforming the data from sinogram space to image space through a specially designed Radon inversion layer. We insert this layer between an initial network segment designed to encode the sinogram input and an output segment designed to refine and scale the initial image estimate to produce the final image. We demonstrate 3D reconstructions comparable to OSEM for 1, 4, 8 and 16 slices with no modifications to the network's architecture, capacity or hyper-parameters on a data set of simulated PET whole-body scans. When batch operations are considered, this network can reconstruct an entire PET whole-body volume in a single pass or about one second. Although results in this paper are on PET data, the proposed methods would be equally applicable to X-ray CT or any other Radon encoded measurement data.
Early research in this area was based on networks of fully connected multilayer perceptrons @cite_11 @cite_12 @cite_17 @cite_14 that yielded promising results, but only for very low resolution reconstructions. More recent efforts have capitalized on the growth of computational resources, especially in the area of GPUs, and developed deep neural networks capable of direct reconstruction. The AUTOMAP network @cite_15 is one recent example that utilizes multiple fully connected layers followed by a sparse encoder-decoder to learn a mapping manifold from measurement space to image space. This network is capable of learning a general solution to the reconstruction inverse problem, but this generality causes inefficiency requiring a high number of parameters and limiting the application to small 2D images (128x128). DeepPET @cite_27 is another direct reconstruction neural network with an encoder-decoder architecture that forgoes any fully connected layers. They utilize convolutional layers to encode the sinogram input (288x269) into a higher dimensional feature vector representation (1024x18x17) that is then decoded with convolutional layers into a 2D image (128x128). While both of these novel methods are significant advancements in direct neural network reconstruction, memory space requirements severely limit the size of the images they can produce.
{ "cite_N": [ "@cite_14", "@cite_11", "@cite_27", "@cite_15", "@cite_12", "@cite_17" ], "mid": [ "2544373769", "2538611499", "2963633304", "2611467245", "", "" ], "abstract": [ "A new approach for tomographic image reconstruction from projections using Artificial Neural Network (ANN) techniques is presented in this work. The design of the proposed reconstruction system is based on simple but efficient network architecture, which best utilizes all available input information. Due to the computational complexity, which grows quadratically with the image size, the training phase of the system is characterized by relatively large CPU times. The trained network, on the contrary, is able to provide all necessary information in a quick and efficient way giving results comparable to other time consuming iterative reconstruction algorithms. The performance of the network studied with a large number of software phantoms is directly compared to other iterative and analytical techniques. For a given image size and projections number, the role of the hidden layers in the network architecture is examined and the quality dependence of the reconstructed image on the size of the geometrical patterns used in the training phase is also investigated. ANN based tomographic image reconstruction can be easily implemented in modern FPGA devices and can serve as a quick initialization method to other complicated and time consuming procedures.", "An artificial neural network has been developed to reconstruct quantitative single photon emission computed tomographic (SPECT) images. The network is trained using known projection-image pairs containing Poisson noise to learn a shift-invariant weighting (filter) which minimizes the mean squared error between the reconstructed image and the sample image. Once trained, the network produces a reconstructed image as its output when projection data are presented to its input. Supervised training with a modified delta rule was used to train this two-layer neural network having a backpropagation architecture with one hidden layer (the filtered projection). The system was trained for noise levels representative of 1000, 10k, and 1M counts per slice. The Fourier transform of the filtered projection (for an impulse function) is compared with the ramp filter used with backprojection. >", "Abstract The purpose of this research was to implement a deep learning network to overcome two of the major bottlenecks in improved image reconstruction for clinical positron emission tomography (PET). These are the lack of an automated means for the optimization of advanced image reconstruction algorithms, and the computational expense associated with these state-of-the art methods. We thus present a novel end-to-end PET image reconstruction technique, called DeepPET, based on a deep convolutional encoder–decoder network, which takes PET sinogram data as input and directly and quickly outputs high quality, quantitative PET images. Using simulated data derived from a whole-body digital phantom, we randomly sampled the configurable parameters to generate realistic images, which were each augmented to a total of more than 291,000 reference images. Realistic PET acquisitions of these images were simulated, resulting in noisy sinogram data, used for training, validation, and testing the DeepPET network. 
We demonstrated that DeepPET generates higher quality images compared to conventional techniques, in terms of relative root mean squared error (11 53 lower than ordered subset expectation maximization (OSEM) filtered back-projection (FBP), structural similarity index (1 11 higher than OSEM FBP), and peak signal-to-noise ratio (1.1 3.8 dB higher than OSEM FBP). In addition, we show that DeepPET reconstructs images 108 and 3 times faster than OSEM and FBP, respectively. Finally, DeepPET was successfully applied to real clinical data. This study shows that an end-to-end encoder–decoder network can produce high quality PET images at a fraction of the time compared to conventional methods.", "Image reconstruction is reformulated using a data-driven, supervised machine learning framework that allows a mapping between sensor and image domains to emerge from even noisy and undersampled data, improving accuracy and reducing image artefacts.", "", "" ] }
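The memory-critical step these networks must realize — mapping sinogram space back to image space — is classically an (unfiltered) backprojection. The sketch below is a plain NumPy/SciPy stand-in that assumes the detector length equals the image side; a differentiable version of this operation is the kind of computation a "Radon inversion layer" as described in the abstract would perform, with learned network segments before and after compensating for the blur of unfiltered backprojection.

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, thetas_deg, size):
    """Unfiltered backprojection: smear each 1D projection across the
    image, rotate it to its acquisition angle, and accumulate.
    Assumes sinogram shape (n_angles, size)."""
    image = np.zeros((size, size))
    for proj, theta in zip(sinogram, thetas_deg):
        smear = np.tile(proj, (size, 1))  # constant along each ray
        image += rotate(smear, theta, reshape=False, order=1)
    return image / len(thetas_deg)
```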
1908.07705
2969301397
Dialogue state tracking (DST) is an essential component in task-oriented dialogue systems, which estimates user goals at every dialogue turn. However, most previous approaches usually suffer from the following problems. Many discriminative models, especially end-to-end (E2E) models, have difficulty extracting unknown values that are not in the candidate ontology; previous generative models, which can extract unknown values from utterances, degrade the performance by ignoring the semantic information of the pre-defined ontology. Besides, previous generative models usually need a hand-crafted list to normalize the generated values. How to integrate the semantic information of the pre-defined ontology and dialogue text (heterogeneous texts) to generate unknown values and improve performance becomes a severe challenge. In this paper, we propose a Copy-Enhanced Heterogeneous Information Learning model with multiple encoder-decoder for DST (CEDST), which can effectively generate all possible values, including unknown values, by copying values from heterogeneous texts. Meanwhile, CEDST can effectively decompose the large state space into several small state spaces through the multi-encoder, and employ the multi-decoder to make full use of the reduced spaces to generate values. The multi-encoder-decoder architecture can significantly improve performance. Experiments show that CEDST can achieve state-of-the-art results on two datasets and our constructed datasets with many unknown values.
As far as we know, @cite_4 is the only work that uses a discriminative model to handle dynamic and unbounded value sets. @cite_4 represents the dialogue states by candidate sets derived from the dialogue and knowledge, then scores values in the candidate set with binary classifications. Although the sophisticated generation strategy of the candidate set allows the model to extract unknown values, @cite_4 needs a separate SLU module and may propagate errors.
{ "cite_N": [ "@cite_4" ], "mid": [ "2757930983" ], "abstract": [ "Dialogue state tracking (DST) is a key component of task-oriented dialogue systems. DST estimates the user's goal at each user turn given the interaction until then. State of the art approaches for state tracking rely on deep learning methods, and represent dialogue state as a distribution over all possible slot values for each slot present in the ontology. Such a representation is not scalable when the set of possible values are unbounded (e.g., date, time or location) or dynamic (e.g., movies or usernames). Furthermore, training of such models requires labeled data, where each user turn is annotated with the dialogue state, which makes building models for new domains challenging. In this paper, we present a scalable multi-domain deep learning based approach for DST. We introduce a novel framework for state tracking which is independent of the slot value set, and represent the dialogue state as a distribution over a set of values of interest (candidate set) derived from the dialogue history or knowledge. Restricting these candidate sets to be bounded in size addresses the problem of slot-scalability. Furthermore, by leveraging the slot-independent architecture and transfer learning, we show that our proposed approach facilitates quick adaptation to new domains." ] }
1908.07674
2969763518
A significantly low-cost and tractable progressive learning approach is proposed and discussed for efficient spatiotemporal monitoring of a completely unknown, two-dimensional correlated signal distribution in a localized wireless sensor field. The spatial distribution is compressed into a number of its contour lines, and only those sensors whose observations are within a @math margin of the contour levels report to the information fusion center (IFC). The proposed algorithm progressively finds the model parameters over iterations, using extrapolation in curve fitting and a stochastic gradient method for spatial monitoring. The IFC tracks the signal variations over time using these parameters. The monitoring performance and the cost of the proposed algorithm are discussed in this letter.
Contour line detection in wireless sensor networks, which is the first step in modeling the spatial distribution, has been addressed in several studies, including @cite_13 @cite_31 @cite_1 @cite_17 @cite_18 @cite_15 @cite_26 @cite_30 @cite_12 @cite_20 @cite_16 . Most of these studies addressed distributed contour detection, which relies on collaboration among sensors to detect the contour lines. In this letter, we propose a cost-efficient centralized algorithm based on the approach proposed in @cite_13 .
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_26", "@cite_15", "@cite_1", "@cite_16", "@cite_31", "@cite_20", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2125380743", "2136684675", "", "2125456263", "2002066573", "2115930054", "2085176360", "", "2774751177", "1580855309", "" ], "abstract": [ "A robust filter-based approach is proposed for wireless sensor networks for detecting contours of a signal distribution over a 2-dimensional region. The motivation for contour detection is derived from applications where the spatial distribution of a signal (such as temperature, soil moisture level, etc.) is to be determined over a large region with minimum communication cost. The proposed scheme applies multi-level quantization to the sensor signal values to artificially create an edge and then applies spatial filtering for edge detection. The spatial filter is localized and is based on an adaptation of the Prewitt filter used in image processing. Appropriate mechanisms are introduced that minimizes the cost for communication required for collaboration. Simulation results are presented to show the error performance of the proposed contour detection scheme and the associated communication cost (single-hop communications with immediate neighborhood in average) in the network.", "An algorithm to extract contour lines using wireless sensor networks is proposed in this work. In con- trast with previous work on edge detection that is primarily concerned with the region of a certain phenomenon, contour lines offer more detailed information about the underlying phenomenon such as signal's amplitude, density and source location. A distributed algorithm to extract the contour lines information from local measurements is developed so that the phenomenon can be monitored in the basestation without demanding excessive raw data transmission. Sim- ulation results are provided to demonstrate the efficiency of the proposed algorithm. Furthermore, a thorough per- formance analysis is conducted to understand the effects of the sensor density and the background noise power on the performance of the system.", "", "An algorithm to extract contour lines using wireless sensor networks is proposed for environmental monitoring in this work. In contrast to previous work on edge detection that is primarily concerned with the region of a certain phenomenon, contour lines offer more detailed information about the underlying phenomenon such as signal amplitude, density and source location. A distributed algorithm to extract the contour line information from local measurements is developed so that the phenomenon can be monitored in the basestation without demanding excessive raw data transmission. Simulation results are provided to demonstrate the efficiency of the proposed algorithm.", "A class of energy efficient schemes is proposed for detection and tracking of the boundaries of correlated spatial distributions, such as air pollution in large cities at a known threshold level, using large scale wireless sensor network. It is shown that without having the exact statistics of the signal, distributed collaborative space and time oriented filtering can be used to efficiently detect the boundary of the threshold level of a correlated spatial distribution over time. Collaborative signal processing is used to reduce the effect of noisy observations and distributed space and time domain filtering is applied for energy conservative threshold level tracking. 
It is shown that by using spatial and time oriented filtering, the operating mode of the wireless sensor network is shifted from communication dominant mode toward computation and sensing dominant mode, which significantly saves the in-network energy. The performance of the discussed approach for a few correlated random spatial distributions is evaluated using computer simulations.", "We study the problem of data-driven routing and navigation in a distributed sensor network over a continuous scalar field. Specifically, we address the problem of searching for the collection of sensors with readings within a specified range. This is named the iso-contour query problem. We develop a gradient based routing scheme such that from any query node, the query message follows the signal field gradient or derived quantities and successfully discovers all iso-contours of interest. Due to the existence of local maxima and minima, the guaranteed delivery requires preprocessing of the signal field and the construction of a contour tree in a distributed fashion. Our approach has the following properties: (i) the gradient routing uses only local node information and its message complexity is close to optimal, as shown by simulations; (ii) the preprocessing message complexity is linear in the number of nodes and the storage requirement for each node is a small constant. The same preprocessing also facilitates route computation between any pair of nodes where the the route lies within any user supplied range of values.", "This paper presents algorithms for efficiently detecting the variation of a distributed signal over space and time using large scale wireless sensor networks. The proposed algorithms use contours for estimating the spatial distribution of a signal. A contour tracking algorithm is proposed to efficiently monitor the variations of the contours with time. Use of contours reduces the communication cost by reducing the participation of sensor nodes for the monitoring tasks. The proposed schemes use multi-sensor collaboration techniques and non-uniform contour levels to reduce the error in reconstructing the signal distribution. Results from computer simulations are presented to demonstrate the performance of the proposed schemes.", "", "A novel on-demand, adaptive compressed sensing approach is proposed and discussed for spatial monitoring of correlated massive data in dense wireless sensor field. The spatial monitoring is based on collecting the sensor observations of a subset of sensors that their reading is in Δ margin of a set of contour levels. The contour level set is adaptively updated in order to reduce the mean square error of reconstruction of the spatial distribution. The reconstruction error and the number of involved sensing nodes as cost are used to compare the proposed approach with spatial monitoring using equally spaced contour levels.", "We propose a distributed scheme called Adaptive-Group-Merge for sensor networks that, given a parameter k, approximates a geometric shape by a k-vertex polygon. The algorithm is well suited to the distributed computing architecture of sensor networks, and we prove that its approximation quality is within a constant factor of the optimal. We also show through simulation that our scheme outperforms several other alternatives in preserving important shape features, and achieves approximation quality almost as good as the optimal, centralized scheme. 
Because many applications of sensor networks involve observations and monitoring of physical phenomena, the ability to represent complex geometric shapes faithfully but using small memory is vital in many settings.", "" ] }
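The records above reduce communication by letting only sensors near a contour level report. That selection rule is simple to sketch in NumPy; the field values and the margin below are made-up toys:

```python
import numpy as np

def reporting_sensors(readings, contour_levels, delta):
    """Indices of sensors whose observation lies within +/- delta of any
    contour level: the only nodes that report to the fusion center."""
    readings = np.asarray(readings, dtype=float)
    levels = np.asarray(contour_levels, dtype=float)
    near = np.abs(readings[:, None] - levels[None, :]) <= delta
    return np.nonzero(near.any(axis=1))[0]

field = np.random.default_rng(0).uniform(0.0, 10.0, size=500)  # toy readings
idx = reporting_sensors(field, contour_levels=[2.0, 5.0, 8.0], delta=0.1)
print(len(idx), "of", field.size, "sensors report")
```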
1908.07674
2969763518
A significantly low-cost and tractable progressive learning approach is proposed and discussed for efficient spatiotemporal monitoring of a completely unknown, two-dimensional correlated signal distribution in a localized wireless sensor field. The spatial distribution is compressed into a number of its contour lines, and only those sensors whose observations are within a @math margin of the contour levels report to the information fusion center (IFC). The proposed algorithm progressively finds the model parameters over iterations, using extrapolation in curve fitting and a stochastic gradient method for spatial monitoring. The IFC tracks the signal variations over time using these parameters. The monitoring performance and the cost of the proposed algorithm are discussed in this letter.
Spatial modeling of signal distributions using contour lines has been addressed in @cite_13 @cite_31 @cite_21 . Modeling the spatial distribution with uniformly spaced contour levels and tracking their variation using time-series analysis in the sensors was studied in @cite_21 . The use of non-uniformly spaced contour lines was first reported in @cite_31 , which assumed the probability density function (PDF) of the signal strength to be known and used it to calculate the optimal and sub-optimal contour levels. An iterative algorithm was proposed in @cite_13 to extract the PDF of the signal strength at low cost during spatial monitoring of the signal distribution.
{ "cite_N": [ "@cite_31", "@cite_21", "@cite_13" ], "mid": [ "2085176360", "2147259327", "2774751177" ], "abstract": [ "This paper presents algorithms for efficiently detecting the variation of a distributed signal over space and time using large scale wireless sensor networks. The proposed algorithms use contours for estimating the spatial distribution of a signal. A contour tracking algorithm is proposed to efficiently monitor the variations of the contours with time. Use of contours reduces the communication cost by reducing the participation of sensor nodes for the monitoring tasks. The proposed schemes use multi-sensor collaboration techniques and non-uniform contour levels to reduce the error in reconstructing the signal distribution. Results from computer simulations are presented to demonstrate the performance of the proposed schemes.", "In many applications of sensor networks, the sink needs to keep track of the history of sensed data of a monitored region for scientific analysis or supporting historical queries. We call these historical data a time series of value distributions or snapshots. Obviously, to build the time series snapshots by requiring all of the sensors to transmit their data to the sink periodically is not energy efficient. In this paper, we introduce the idea of gradient boundary and propose the gradient boundary detection (GBD) algorithm to construct these time series snapshots of a monitored region. In GBD, a monitored region is partitioned into a set of subregions and all sensed data in one subregion are within a predefined value range, namely, the gradient interval. Sensors located on the boundaries of the subregions are required to transmit the data to the sink and, then, the sink recovers all subregions to construct snapshots of the monitored area. In this process, only the boundary sensors transmit their data and, therefore, energy consumption is greatly reduced. The simulation results show that GBD is able to build snapshots with a comparable accuracy and has up to 40 percent energy savings compared with the existing approaches for large gradient intervals.", "A novel on-demand, adaptive compressed sensing approach is proposed and discussed for spatial monitoring of correlated massive data in dense wireless sensor field. The spatial monitoring is based on collecting the sensor observations of a subset of sensors that their reading is in Δ margin of a set of contour levels. The contour level set is adaptively updated in order to reduce the mean square error of reconstruction of the spatial distribution. The reconstruction error and the number of involved sensing nodes as cost are used to compare the proposed approach with spatial monitoring using equally spaced contour levels." ] }
1908.07674
2969763518
A significantly low-cost and tractable progressive learning approach is proposed and discussed for efficient spatiotemporal monitoring of a completely unknown, two-dimensional correlated signal distribution in a localized wireless sensor field. The spatial distribution is compressed into a number of its contour lines, and only those sensors whose observations are within a @math margin of the contour levels report to the information fusion center (IFC). The proposed algorithm progressively finds the model parameters over iterations, using extrapolation in curve fitting and a stochastic gradient method for spatial monitoring. The IFC tracks the signal variations over time using these parameters. The monitoring performance and the cost of the proposed algorithm are discussed in this letter.
Spatiotemporal modeling using machine learning approaches has been reported in several studies, including @cite_9 @cite_5 @cite_19 . Most of these approaches employ neural networks, genetic algorithms, stochastic gradient descent, and similar techniques.
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_9" ], "mid": [ "2606901901", "2114866007", "2732126315" ], "abstract": [ "Dictionary learning has emerged as a promising alternative to the conventional hybrid coding framework. However, the rigid structure of sequential training and prediction degrades its performance in scalable video coding. This paper proposes a progressive dictionary learning framework with hierarchical predictive structure for scalable video coding, especially in low bitrate region. For pyramidal layers, sparse representation based on spatio-temporal dictionary is adopted to improve the coding efficiency of enhancement layers with a guarantee of reconstruction performance. The overcomplete dictionary is trained to adaptively capture local structures along motion trajectories as well as exploit the correlations between the neighboring layers of resolutions. Furthermore, progressive dictionary learning is developed to enable the scalability in temporal domain and restrict the error propagation in a closed-loop predictor. Under the hierarchical predictive structure, online learning is leveraged to guarantee the training and prediction performance with an improved convergence rate. To accommodate with the state-of-the-art scalable extension of H.264 AVC and latest High Efficiency Video Coding (HEVC), standardized codec cores are utilized to encode the base and enhancement layers. Experimental results show that the proposed method outperforms the latest scalable extension of HEVC and HEVC simulcast over extensive test sequences with various resolutions.", "Harmful algal blooms (HABs) pose an enormous threat to the U.S. marine habitation and economy in the coastal waters. Federal and state coastal administrators have been devising a state-of-the-art monitoring and forecasting system for these HAB events. The efficacy of a monitoring and forecasting system relies on the performance of HAB detection. We propose a machine learning based spatio-temporal data mining approach for the detection of HAB events in the region of the Gulf of Mexico. In this study, a spatio-temporal cubical neighborhood around the training sample is introduced to retrieve relevant spectral information of both HAB and non-HAB classes. The feature relevance is studied through mutual information criterion to understand the important features in classifying HABs from non-HABs. Kernel based support vector machine is used as a classifier in the detection of HABs. This approach gives a significant performance improvement by reducing the false alarm rate. Further, with the achieved classification accuracy, the seasonal variations and sequential occurrence of algal blooms are predicted from spatio-temporal datasets. New variability visualization is introduced to illustrate the dynamic behavior of HABs across space and time.", "We demonstrate accurate spatio-temporal gait data classification from raw tomography sensor data without the need to reconstruct images. This is based on a simple yet efficient machine learning methodology based on a convolutional neural network architecture for learning spatio-temporal features, automatically end-to-end from raw sensor data. In a case study on a floor pressure tomography sensor, experimental results show an effective gait pattern classification F -score performance of 97.88 @math 1.70 . 
It is shown that the automatic extraction of classification features from raw data leads to a substantially better performance, compared to features derived by shallow machine learning models that use the reconstructed images as input, implying that for the purpose of automatic decision-making it is possible to eliminate the image reconstruction step. This approach is portable across a range of industrial tasks that involve tomography sensors. The proposed learning architecture is computationally efficient, has a low number of parameters and is able to achieve reliable classification F-score performance from a limited set of experimental samples. We also introduce a floor sensor dataset of 892 samples, encompassing experiments of 10 manners of walking and 3 cognitive-oriented tasks to yield a total of 13 types of gait patterns." ] }
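The main abstract of this record tracks the spatial-model parameters with a stochastic gradient method as sensor reports stream in. A toy sketch of that idea, assuming a single-amplitude Gaussian-bump field model (the model, step size, and noise level are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
true_amp, amp = 3.0, 0.0          # unknown field amplitude vs. IFC estimate

def field(a, xy):
    """Assumed spatial model: one Gaussian bump of amplitude a at the origin."""
    return a * np.exp(-(xy ** 2).sum())

for _ in range(500):              # streaming reports from random sensor locations
    xy = rng.uniform(-1.0, 1.0, size=2)
    z = field(true_amp, xy) + 0.01 * rng.normal()   # noisy observation
    basis = np.exp(-(xy ** 2).sum())
    amp -= 0.5 * (field(amp, xy) - z) * basis       # SGD step on squared error
print(round(amp, 2))              # approaches 3.0
```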
1908.07489
2969242814
Online platforms, such as Airbnb, Booking.com, Amazon, Uber and Lyft, can control and optimize many aspects of product search to improve the efficiency of marketplaces. Here we focus on a common model, called the discriminatory control model, where the platform chooses to display a subset of sellers who sell products at prices determined by the market and a buyer is interested in buying a single product from one of the sellers. Under the commonly-used model for single product selection by a buyer, called the multinomial logit model, and the Bertrand game model for competition among sellers, we show the following result: to maximize social welfare, the optimal strategy for the platform is to display all products; however, to maximize revenue, the optimal strategy is to only display a subset of the products whose qualities are above a certain threshold. We extend our results to the Cournot competition model, and show that the optimal search segmentation mechanisms for both social welfare maximization and revenue maximization also have simple threshold structures. The threshold in each case depends on the quality of all products, the platform's objective and the sellers' competition model, and can be computed in linear time in the number of products.
Bertrand competition, proposed by Joseph Bertrand in 1883, and Cournot competition, introduced in 1838 by Antoine Augustin Cournot, are fundamental economic models of sellers competing in a single market, and have been studied comprehensively in economics. Motivated by the fact that many sellers compete in more than one market in the modern dynamic and diverse economy, a recent and growing literature has studied Cournot competition in network environments @cite_30 @cite_2 @cite_20 @cite_21 . The works @cite_30 @cite_2 focused on characterizing and computing Nash equilibria, and investigated the impact of changes in the (bipartite) network structure on sellers' profits and buyers' surplus. @cite_20 and @cite_21 analyzed the efficiency loss of the networked Cournot competition game via the price of anarchy. While all these previous works focused on the objective of social welfare maximization in networked Cournot competition, we consider the objectives of both social welfare and revenue maximization in networked Bertrand competition, and also in networked Cournot competition. We further provide efficient segmenting mechanisms that optimize the social welfare/revenue at the Nash equilibrium.
{ "cite_N": [ "@cite_30", "@cite_21", "@cite_20", "@cite_2" ], "mid": [ "1555547018", "2764126775", "2784228102", "2915287126" ], "abstract": [ "Cournot competition, introduced in 1838 by Antoine Augustin Cournot, is a fundamental economic model that represents firms competing in a single market of a homogeneous good. Each firm tries to maximize its utility—naturally a function of the production cost as well as market price of the product—by deciding on the amount of production. This problem has been studied comprehensively in Economics and Game Theory; however, in today’s dynamic and diverse economy, many firms often compete in more than one market simultaneously, i.e., each market might be shared among a subset of these firms. In this situation, a bipartite graph models the access restriction where firms are on one side, markets are on the other side, and edges demonstrate whether a firm has access to a market or not. We call this game Network Cournot Competition (NCC). Computation of equilibrium, taking into account a network of markets and firms and the different forms of cost and price functions, makes challenging and interesting new problems.", "This paper studies how the efficiency of an online platform is impacted by the degree to which access of platform participants is open or controlled. The study is motivated by an emerging trend within platforms to impose increasingly finegrained control over the options available to platform participants. While early online platforms allowed open access, e.g., Ebay allows any seller to interact with any buyer; modern platforms often impose matches directly, e.g., Uber directly matches drivers to riders. This control is performed with the goal of achieving more efficient market outcomes. However, the results in this paper highlight that imposing matches may create new strategic incentives that lead to increased inefficiency. In particular, in the context of networked Cournot competition, we prove that open access platforms guarantee social welfare within 7 16 of the optimal; whereas controlled allocation platforms can have social welfare unboundedly worse than optimal.", "This paper studies network design and efficiency loss in online platforms using the model of networked Cournot competition. We consider two styles of platforms: open access platforms and discriminatory access platforms. In open access platforms, every firm can connect to every market, while discriminatory access platforms limit connections between firms and markets in order to improve social welfare. Our results provide tight bounds on the efficiency loss of both open access and discriminatory access platforms. For open access platforms, we show that the efficiency loss at a Nash equilibrium is upper bounded by 3 2. In the case of discriminatory access platforms, we prove that, under an assumption on the linearity of cost functions, a greedy algorithm for optimizing network connections can guarantee the efficiency loss at a Nash equilibrium is upper bounded by 4 3.", "The paper considers a model of competition among firms that produce a homogeneous good in a networked environment. A bipartite graph determines which subset of markets a firm can supply to. Firms c..." ] }
1908.07567
2969713569
The emerging parallel chain protocols represent a breakthrough in addressing the scalability of blockchains. By composing multiple parallel chain instances, the whole system's throughput can approach the network capacity. How to coordinate the blocks of different chains and construct a global ordering over them is critical to the performance of a parallel chain protocol. However, existing solutions order blocks either with a global synchronization clock, which reintroduces a single-chain bottleneck, or with pre-defined ordering sequences, which distort the blocks' causality. In addition, prior ordering methods rely on honest participants faithfully following the ordering protocol, but remain silent on denial-of-ordering (DoR) attacks. On the other hand, conflicting transactions included in the global block sequence make Simple Payment Verification (SPV) difficult. Clients usually need to store a full record of transactions to distinguish the conflicts and tell whether transactions are confirmed. However, the requirement for a full record greatly hinders blockchain applications, especially in mobile scenarios. In this technical report, we propose Eunomia, which leverages a logical clock and fine-grained UTXO sharding to realize a simple, efficient, secure and permissionless parallel chain protocol. By observing the characteristics of parallel chains, we find that the block ordering issue in a parallel chain has many similarities with event ordering in distributed systems. Eunomia thus adopts a "virtual" logical clock, which is optimized to have minimum protocol overhead and runs in a distributed way. In addition, Eunomia combines the mining incentive with block ordering, providing incentive compatibility against DoR attacks. What's more, the fine-grained UTXO sharding resolves conflicting transactions in the parallel chain well and is shown to be SPV-friendly.
In @cite_39 , Jiaping Wang and Hao Wang propose a protocol called Monoxide, which composes multiple independent single-chain consensus systems, called zones. They also propose eventual atomicity to ensure transaction atomicity across zones and Chu-ko-nu mining to preserve the effective mining power in each zone. Monoxide is shown to provide @math throughput and @math capacity over Bitcoin and Ethereum. Besides this work, there are several committee-based sharding protocols @cite_29 @cite_52 @cite_41 . Each shard is assigned a subset of the nodes, which run a classical Byzantine agreement (BA) protocol to reach consensus. However, these protocols can only tolerate up to @math adversaries. What's more, all sharding-based protocols incur additional overhead and latency for cross-shard transactions.
{ "cite_N": [ "@cite_41", "@cite_29", "@cite_52", "@cite_39" ], "mid": [ "2534313446", "2897676665", "2794533297", "2917183287" ], "abstract": [ "The surprising success of cryptocurrencies has led to a surge of interest in deploying large scale, highly robust, Byzantine fault tolerant (BFT) protocols for mission-critical applications, such as financial transactions. Although the conventional wisdom is to build atop a (weakly) synchronous protocol such as PBFT (or a variation thereof), such protocols rely critically on network timing assumptions, and only guarantee liveness when the network behaves as expected. We argue these protocols are ill-suited for this deployment scenario. We present an alternative, HoneyBadgerBFT, the first practical asynchronous BFT protocol, which guarantees liveness without making any timing assumptions. We base our solution on a novel atomic broadcast protocol that achieves optimal asymptotic efficiency. We present an implementation and experimental results to show our system can achieve throughput of tens of thousands of transactions per second, and scales to over a hundred nodes on a wide area network. We even conduct BFT experiments over Tor, without needing to tune any parameters. Unlike the alternatives, HoneyBadgerBFT simply does not care about the underlying network.", "A major approach to overcoming the performance and scalability limitations of current blockchain protocols is to use sharding which is to split the overheads of processing transactions among multiple, smaller groups of nodes. These groups work in parallel to maximize performance while requiring significantly smaller communication, computation, and storage per node, allowing the system to scale to large networks. However, existing sharding-based blockchain protocols still require a linear amount of communication (in the number of participants) per transaction, and hence, attain only partially the potential benefits of sharding. We show that this introduces a major bottleneck to the throughput and latency of these protocols. Aside from the limited scalability, these protocols achieve weak security guarantees due to either a small fault resiliency (e.g., 1 8 and 1 4) or high failure probability, or they rely on strong assumptions (e.g., trusted setup) that limit their applicability to mainstream payment systems. We propose RapidChain, the first sharding-based public blockchain protocol that is resilient to Byzantine faults from up to a 1 3 fraction of its participants, and achieves complete sharding of the communication, computation, and storage overhead of processing transactions without assuming any trusted setup. RapidChain employs an optimal intra-committee consensus algorithm that can achieve very high throughputs via block pipelining, a novel gossiping protocol for large blocks, and a provably-secure reconfiguration mechanism to ensure robustness. Using an efficient cross-shard transaction verification technique, our protocol avoids gossiping transactions to the entire network. Our empirical evaluations suggest that RapidChain can process (and confirm) more than 7,300 tx sec with an expected confirmation latency of roughly 8.7 seconds in a network of 4,000 nodes with an overwhelming time-to-failure of more than 4,500 years.", "Designing a secure permissionless distributed ledger (blockchain) that performs on par with centralized payment processors, such as Visa, is a challenging task. 
Most existing distributed ledgers are unable to scale-out, i.e., to grow their total processing capacity with the number of validators; and those that do, compromise security or decentralization. We present OmniLedger, a novel scale-out distributed ledger that preserves long-term security under permissionless operation. It ensures security and correctness by using a bias-resistant public-randomness protocol for choosing large, statistically representative shards that process transactions, and by introducing an efficient cross-shard commit protocol that atomically handles transactions affecting multiple shards. OmniLedger also optimizes performance via parallel intra-shard transaction processing, ledger pruning via collectively-signed state blocks, and low-latency \"trust-but-verify\" validation for low-value transactions. An evaluation of our experimental prototype shows that OmniLedger’s throughput scales linearly in the number of active validators, supporting Visa-level workloads and beyond, while confirming typical transactions in under two seconds.", "" ] }
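Eunomia's block ordering builds on logical clocks. A minimal Lamport-clock sketch captures the happens-before bookkeeping; the protocol's actual "virtual" clock is more elaborate, so treat this purely as background:

```python
class LamportClock:
    """Classic Lamport logical clock: local events tick, and receiving a
    timestamped message advances the clock past the sender's time."""
    def __init__(self):
        self.time = 0

    def tick(self):                 # local event, e.g. mining a block
        self.time += 1
        return self.time

    def update(self, received):     # merge a timestamp carried by a block
        self.time = max(self.time, received) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t1 = a.tick()       # chain A mines a block at logical time 1
t2 = b.update(t1)   # chain B references it, so its next block is ordered later
print(t1, t2)       # 1 2
```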
1908.07625
2969552498
Action recognition has seen a dramatic performance improvement in the last few years. Most of the current state-of-the-art literature either aims at improving performance through changes to the backbone CNN network, or explores different trade-offs between computational efficiency and performance, again through altering the backbone network. However, almost all of these works maintain the same last layers of the network, which simply consist of a global average pooling followed by a fully connected layer. In this work we focus on how to improve the representation capacity of the network, but rather than altering the backbone, we focus on improving the last layers of the network, where changes have low impact in terms of computational cost. In particular, we show that current architectures have poor sensitivity to finer details and we exploit recent advances in the fine-grained recognition literature to improve our model in this aspect. With the proposed approach, we obtain state-of-the-art performance on Kinetics-400 and Something-Something-V1, the two major large-scale action recognition benchmarks.
Action recognition in the deep learning era has been successfully tackled with 2D @cite_12 @cite_1 and 3D CNNs @cite_31 @cite_29 @cite_0 @cite_28 @cite_40 @cite_15 @cite_30 . Most existing works focus on modeling motion and temporal structures. In @cite_12 the optical flow CNN is introduced to model short-term motion patterns. TSN @cite_7 models long-range temporal structures using sparse segment sampling over the whole video during training. 3D CNN based models @cite_31 @cite_0 @cite_28 @cite_15 tackle temporal modeling through the added temporal dimension of the convolution, in the hope that the models will learn hierarchical motion patterns as they do in the image space. Several recent works have started to decouple the spatial and temporal convolutions in 3D CNNs to achieve more explicit temporal modeling @cite_29 @cite_14 @cite_30 . In @cite_16 @cite_17 the temporal modeling is further improved by tracking feature points or body joints over time.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_7", "@cite_15", "@cite_28", "@cite_29", "@cite_1", "@cite_0", "@cite_40", "@cite_31", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "2883429621", "2963155035", "2963315828", "", "", "2883534172", "", "1983364832", "", "2963524571", "2891446678", "2156303437", "" ], "abstract": [ "Despite the steady progress in video analysis led by the adoption of convolutional neural networks (CNNs), the relative improvement has been less drastic as that in 2D static image classification. Three main challenges exist including spatial (image) feature representation, temporal information representation, and model computation complexity. It was recently shown by Carreira and Zisserman that 3D CNNs, inflated from 2D networks and pretrained on ImageNet, could be a promising way for spatial and temporal representation learning. However, as for model computation complexity, 3D CNNs are much more expensive than 2D CNNs and prone to overfit. We seek a balance between speed and accuracy by building an effective and efficient video classification system through systematic exploration of critical network design choices. In particular, we show that it is possible to replace many of the 3D convolutions by low-cost 2D convolutions. Rather surprisingly, best result (in both speed and accuracy) is achieved when replacing the 3D convolutions at the bottom of the network, suggesting that temporal representation learning on high-level “semantic” features is more useful. Our conclusion generalizes to datasets with very different properties. When combined with several other cost-effective designs including separable spatial temporal convolution and feature gating, our system results in an effective video classification system that that produces very competitive results on several action classification benchmarks (Kinetics, Something-something, UCF101 and HMDB), as well as two action detection (localization) benchmarks (JHMDB and UCF101-24).", "In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significantly gains in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block \"R(2+1)D\" which produces CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101, and HMDB51.", "Deep convolutional networks have achieved great success for image recognition. However, for action recognition in videos, their advantage over traditional methods is not so evident. We present a general and flexible video-level framework for learning action models in videos. This method, called temporal segment network (TSN), aims to model long-range temporal structures with a new segment-based sampling and aggregation module. This unique design enables our TSN to efficiently learn action models by using the whole action videos. The learned models could be easily adapted for action recognition in both trimmed and untrimmed videos with simple average pooling and multi-scale temporal window integration, respectively. 
We also study a series of good practices for the instantiation of the TSN framework given limited training samples. Our approach obtains state-of-the-art performance on four challenging action recognition benchmarks: HMDB51 (71.0%), UCF101 (94.9%), THUMOS14 (80.1%), and ActivityNet v1.2 (89.6%). Using the proposed RGB difference for motion models, our method can still achieve competitive accuracy on UCF101 (91.0%) while running at 340 FPS. Furthermore, based on the temporal segment networks, we won the video classification track at the ActivityNet challenge 2016 among 24 teams, which demonstrates the effectiveness of TSN and the proposed good practices.", "", "In this paper, we aim to reduce the computational cost of spatio-temporal deep neural networks, making them run as fast as their 2D counterparts while preserving state-of-the-art accuracy on video recognition benchmarks. To this end, we present the novel Multi-Fiber architecture that slices a complex neural network into an ensemble of lightweight networks or fibers that run through the network. To facilitate information flow between fibers we further incorporate multiplexer modules and end up with an architecture that reduces the computational cost of 3D networks by an order of magnitude, while increasing recognition performance at the same time. Extensive experimental results show that our multi-fiber architecture significantly boosts the efficiency of existing convolution networks for both image and video recognition tasks, achieving state-of-the-art performance on UCF-101, HMDB-51 and Kinetics datasets. Our proposed model requires over 9× and 13× less computations than the I3D [1] and R(2+1)D [2] models, respectively, yet providing higher accuracy.", "", "We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.", "", "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos.
We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.2% on HMDB-51 and 97.9% on UCF-101.", "How to leverage the temporal dimension is a key question in video analysis. Recent work suggests an efficient approach to video feature learning, namely, factorizing 3D convolutions into separate components respectively for spatial and temporal convolutions. The temporal convolution, however, comes with an implicit assumption – the feature maps across time steps are well aligned so that the features at the same locations can be aggregated. This assumption may be overly strong in practical applications, especially in action recognition where the motion of people or objects is a crucial aspect. In this work, we propose a new CNN architecture TrajectoryNet, which incorporates trajectory convolution, a new operation for integrating features along the temporal dimension, to replace the standard temporal convolution. This operation explicitly takes into account the location changes caused by deformation or motion, allowing the visual features to be aggregated along the motion paths. On two very large-scale action recognition datasets, namely, Something-Something and Kinetics, the proposed network architecture achieves notable improvement over strong baselines.", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "" ] }
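TSN's sparse segment sampling, cited in the record above, splits a video into equal segments and draws one frame from each, so training sees the whole duration at fixed cost. A small sketch (the frame counts are illustrative):

```python
import numpy as np

def sample_segment_frames(num_frames, num_segments, rng=None):
    """One random frame index per equal-length segment (TSN-style)."""
    rng = rng or np.random.default_rng()
    edges = np.linspace(0, num_frames, num_segments + 1).astype(int)
    return [int(rng.integers(lo, max(lo + 1, hi)))
            for lo, hi in zip(edges[:-1], edges[1:])]

print(sample_segment_frames(num_frames=300, num_segments=8))
```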
1908.07625
2969552498
Action recognition has seen a dramatic performance improvement in the last few years. Most of the current state-of-the-art literature either aims at improving performance through changes to the backbone CNN network, or explores different trade-offs between computational efficiency and performance, again through altering the backbone network. However, almost all of these works maintain the same last layers of the network, which simply consist of a global average pooling followed by a fully connected layer. In this work we focus on how to improve the representation capacity of the network, but rather than altering the backbone, we focus on improving the last layers of the network, where changes have low impact in terms of computational cost. In particular, we show that current architectures have poor sensitivity to finer details and we exploit recent advances in the fine-grained recognition literature to improve our model in this aspect. With the proposed approach, we obtain state-of-the-art performance on Kinetics-400 and Something-Something-V1, the two major large-scale action recognition benchmarks.
Most of these methods treat action recognition as a video classification problem. These works tend to focus on how motion is captured by the networks and largely ignore what makes the actions unique. In this work, we provide insights specific to the nature of the action recognition problem itself, showing how it requires an increased sensitivity to finer details. Different from the methods above, our work is explicitly designed for fine-grained action classification. In particular, the proposed approach is inspired by recent advances in the fine-grained recognition literature, such as @cite_33 . We hope that this work will help draw the community's attention to understanding generic action classes as a fine-grained recognition problem.
{ "cite_N": [ "@cite_33" ], "mid": [ "2798365843" ], "abstract": [ "Compared to earlier multistage frameworks using CNN features, recent end-to-end deep approaches for fine-grained recognition essentially enhance the mid-level learning capability of CNNs. Previous approaches achieve this by introducing an auxiliary network to infuse localization information into the main classification network, or a sophisticated feature encoding method to capture higher order feature statistics. We show that mid-level representation learning can be enhanced within the CNN framework, by learning a bank of convolutional filters that capture class-specific discriminative patches without extra part or bounding box annotations. Such a filter bank is well structured, properly initialized and discriminatively learned through a novel asymmetric multi-stream architecture with convolutional filter supervision and a non-random layer initialization. Experimental results show that our approach achieves state-of-the-art on three publicly available fine-grained recognition datasets (CUB-200-2011, Stanford Cars and FGVC-Aircraft). Ablation studies and visualizations are provided to understand our approach." ] }
1908.07625
2969552498
Action recognition has seen a dramatic performance improvement in the last few years. Most of the current state-of-the-art literature either aims at improving performance through changes to the backbone CNN network, or explores different trade-offs between computational efficiency and performance, again through altering the backbone network. However, almost all of these works maintain the same last layers of the network, which simply consist of a global average pooling followed by a fully connected layer. In this work we focus on how to improve the representation capacity of the network, but rather than altering the backbone, we focus on improving the last layers of the network, where changes have low impact in terms of computational cost. In particular, we show that current architectures have poor sensitivity to finer details and we exploit recent advances in the fine-grained recognition literature to improve our model in this aspect. With the proposed approach, we obtain state-of-the-art performance on Kinetics-400 and Something-Something-V1, the two major large-scale action recognition benchmarks.
Understanding human activities as a fine-grained recognition problem has been explored for some domain-specific tasks @cite_6 @cite_37 . For example, methods have been proposed for hand-gesture recognition @cite_18 @cite_38 @cite_32 , daily-life activity recognition @cite_39 and sports understanding @cite_26 @cite_2 @cite_8 @cite_4 . All these works build ad hoc solutions specific to the action domain they address. Instead, we present a solution for generic action recognition and show that it can also be treated as a fine-grained recognition problem and can benefit from learning fine-grained information.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_18", "@cite_26", "@cite_4", "@cite_8", "@cite_32", "@cite_6", "@cite_39", "@cite_2" ], "mid": [ "2491875666", "", "1926645898", "2138105460", "2149335206", "2100646116", "2156798932", "2150338977", "2019660985", "2737024909" ], "abstract": [ "Joint segmentation and classification of fine-grained actions is important for applications of human-robot interaction, video surveillance, and human skill evaluation. However, despite substantial recent progress in large-scale action classification, the performance of state-of-the-art fine-grained action recognition approaches remains low. We propose a model for action segmentation which combines low-level spatiotemporal features with a high-level segmental classifier. Our spatiotemporal CNN is comprised of a spatial component that represents relationships between objects and a temporal component that uses large 1D convolutional filters to capture how object relationships change across time. These features are used in tandem with a semi-Markov model that captures transitions from one action to another. We introduce an efficient constrained segmental inference algorithm for this model that is orders of magnitude faster than the current approach. We highlight the effectiveness of our Segmental Spatiotemporal CNN on cooking and surgical action datasets for which we observe substantially improved performance relative to recent baseline methods.", "", "In this paper we present a method to capture video-wide temporal information for action recognition. We postulate that a function capable of ordering the frames of a video temporally (based on the appearance) captures well the evolution of the appearance within the video. We learn such ranking functions per video via a ranking machine and use the parameters of these as a new video representation. The proposed method is easy to interpret and implement, fast to compute and effective in recognizing a wide variety of actions. We perform a large number of evaluations on datasets for generic action recognition (Hollywood2 and HMDB51), fine-grained actions (MPII- cooking activities) and gestures (Chalearn). Results show that the proposed method brings an absolute improvement of 7–10 , while being compatible with and complementary to further improvements in appearance and local motion based methods.", "Our goal is to recognize human action at a distance, at resolutions where a whole person may be, say, 30 pixels tall. We introduce a novel motion descriptor based on optical flow measurements in a spatiotemporal volume for each stabilized human figure, and an associated similarity measure to be used in a nearest-neighbor framework. Making use of noisy optical flow measurements is the key challenge, which is addressed by treating optical flow not as precise pixel displacements, but rather as a spatial pattern of noisy measurements which are carefully smoothed and aggregated to form our spatiotemporal motion descriptor. To classify the action being performed by a human figure in a query sequence, we retrieve nearest neighbor(s) from a database of stored, annotated video sequences. We can also use these retrieved exemplars to transfer 2D 3D skeletons onto the figures in the query sequence, as well as two forms of data-based action synthesis \"do as I do\" and \"do as I say\". 
Results are demonstrated on ballet, tennis as well as football datasets.", "In this paper we report on our experience in the detection and recognition of soccer highlights in videos using hidden Markov models. A first approach relies on camera motion only, whereas a second one also includes information regarding the location of players on the playing field. While the former approach requires less information, the latter has proven to be more precise. Our experimental evaluation yields interesting results.", "In baseball games, different release points of pitchers form several kinds of pitching styles. Different pitching styles possess individual advantages. This paper presents a novel pitching style recognition approach for automatic generation of game information and video annotation. First, an effective object segmentation algorithm is designed to compute the body contour and extract the pitcher's body. Then, star skeleton is used as the representative descriptor of the pitcher posture for pitching style recognition. The proposed approach has been tested on broadcast baseball video and the promising experimental results validate the robustness and practicability.", "Activity recognition has shown impressive progress in recent years. However, the challenges of detecting fine-grained activities and understanding how they are combined into composite activities have been largely overlooked. In this work we approach both tasks and present a dataset which provides detailed annotations to address them. The first challenge is to detect fine-grained activities, which are defined by low inter-class variability and are typically characterized by fine-grained body motions. We explore how human pose and hands can help to approach this challenge by comparing two pose-based and two hand-centric features with state-of-the-art holistic features. To attack the second challenge, recognizing composite activities, we leverage the fact that these activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. We show the benefits of our hand-centric approach for fine-grained activity classification and detection. For composite activity recognition we find that decomposition into attributes allows sharing information across composites and is essential to attack this hard task. Using script data we can recognize novel composites without having training data for them.", "In this paper we present results related to achieving finegrained activity recognition for context-aware computing applications. We examine the advantages and challenges of reasoning with globally unique object instances detected by an RFID glove. We present a sequence of increasingly powerful probabilistic graphical models for activity recognition. We show the advantages of adding additional complexity and conclude with a model that can reason tractably about aggregated object instances and gracefully generalizes from object instances to their classes by using abstraction smoothing. We apply these models to data collected from a morning household routine.", "While activity recognition is a current focus of research the challenging problem of fine-grained activity recognition is largely overlooked. We thus propose a novel database of 65 cooking activities, continuously recorded in a realistic setting. Activities are distinguished by fine-grained body motions that have low inter-class variability and high intra-class variability due to diverse subjects and ingredients. 
We benchmark two approaches on our dataset, one based on articulated pose tracks and the second using holistic video features. While the holistic approach outperforms the pose-based approach, our evaluation suggests that fine-grained activities are more difficult to detect and the body model can help in those cases. Providing high-resolution videos as well as an intermediate pose representation we hope to foster research in fine-grained activity recognition.", "We present a hierarchical recurrent network for understanding team sports activity in image and location sequences. In the hierarchical model, we integrate proposed multiple person-centered features over a temporal sequence based on LSTM's outputs. To achieve this scheme, we introduce the Keeping state in LSTM as one of externally controllable states, and extend the Hierarchical LSTMs to include mechanism for the integration. Experimental results demonstrate effectiveness of the proposed framework involving hierarchical LSTM and person-centered feature. In this study, we demonstrate improvement over the reference model. Specifically, by incorporating the person-centered feature with meta-information (e.g., location data) in our proposed late fusion framework, we also demonstrate increased discriminability of action categories and enhanced robustness against fluctuation in the number of observed players." ] }
1908.07625
2969552498
Action recognition has seen a dramatic performance improvement in the last few years. Most of the current state-of-the-art literature either aims at improving performance through changes to the backbone CNN network, or explores different trade-offs between computational efficiency and performance, again through altering the backbone network. However, almost all of these works maintain the same last layers of the network, which simply consist of a global average pooling followed by a fully connected layer. In this work we focus on how to improve the representation capacity of the network, but rather than altering the backbone, we focus on improving the last layers of the network, where changes have low impact in terms of computational cost. In particular, we show that current architectures have poor sensitivity to finer details and we exploit recent advances in the fine-grained recognition literature to improve our model in this aspect. With the proposed approach, we obtain state-of-the-art performance on Kinetics-400 and Something-Something-V1, the two major large-scale action recognition benchmarks.
Different from common object categories such as those in ImageNet @cite_21 , this field cares about objects that look visually very similar and that can only be differentiated by learning their finer details. Examples include distinguishing bird @cite_5 and plant @cite_9 species and recognizing different car models @cite_35 @cite_23 . We refer to Zhao et al. @cite_10 for an interesting and complete survey on this topic. Here we simply emphasize the importance of learning the visual details of these fine-grained classes. Some works achieve this with various pooling techniques @cite_36 @cite_19 ; some use part-based approaches @cite_3 @cite_11 ; and others use attention mechanisms @cite_20 @cite_41 . More recently, Wang et al. @cite_33 proposed to use a set of @math convolution layers as a discriminative filter bank and a spatial max pooling to find the location of the fine-grained information. Our method takes inspiration from this approach and extends it to the task of video action recognition. We emphasize the importance of finding the fine-grained details in the spatio-temporal domain with high-resolution features.
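To make the filter-bank idea concrete, here is a minimal PyTorch sketch of ours (not code from the cited work): a bank of 1x1 convolutions scores every spatial location of a backbone feature map, and spatial max pooling keeps each filter's strongest response. The backbone dimensionality, filter count, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DiscriminativeFilterBank(nn.Module):
    """Sketch of a 1x1-conv discriminative filter bank: each filter scores
    every spatial location, and a spatial max pool keeps its peak response."""
    def __init__(self, in_channels: int, num_filters: int, num_classes: int):
        super().__init__()
        self.filter_bank = nn.Conv2d(in_channels, num_filters, kernel_size=1)
        self.classifier = nn.Linear(num_filters, num_classes)

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        # feature_map: (batch, channels, H, W) from some CNN backbone
        responses = self.filter_bank(feature_map)    # (B, F, H, W)
        pooled, _ = responses.flatten(2).max(dim=2)  # spatial max pool: (B, F)
        return self.classifier(pooled)

# Illustrative usage on a dummy 14x14 feature map with 512 channels.
features = torch.randn(2, 512, 14, 14)
head = DiscriminativeFilterBank(in_channels=512, num_filters=256, num_classes=200)
logits = head(features)  # shape (2, 200)
```

For video, the extension sketched in the surrounding text would max pool over a spatio-temporal volume (B, F, T, H, W) instead of a spatial map.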
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_36", "@cite_41", "@cite_9", "@cite_21", "@cite_3", "@cite_19", "@cite_23", "@cite_5", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2138011018", "2798365843", "2729172879", "2773003563", "2736618577", "2108598243", "56385144", "2551829638", "1958236864", "", "2579318141", "2460852148", "2462457117" ], "abstract": [ "While 3D object representations are being revived in the context of multi-view object class detection and scene understanding, they have not yet attained wide-spread use in fine-grained categorization. State-of-the-art approaches achieve remarkable performance when training data is plentiful, but they are typically tied to flat, 2D representations that model objects as a collection of unconnected views, limiting their ability to generalize across viewpoints. In this paper, we therefore lift two state-of-the-art 2D object representations to 3D, on the level of both local feature appearance and location. In extensive experiments on existing and newly proposed datasets, we show our 3D object representations outperform their state-of-the-art 2D counterparts for fine-grained categorization and demonstrate their efficacy for estimating 3D geometry from images via ultra-wide baseline matching and 3D reconstruction.", "Compared to earlier multistage frameworks using CNN features, recent end-to-end deep approaches for fine-grained recognition essentially enhance the mid-level learning capability of CNNs. Previous approaches achieve this by introducing an auxiliary network to infuse localization information into the main classification network, or a sophisticated feature encoding method to capture higher order feature statistics. We show that mid-level representation learning can be enhanced within the CNN framework, by learning a bank of convolutional filters that capture class-specific discriminative patches without extra part or bounding box annotations. Such a filter bank is well structured, properly initialized and discriminatively learned through a novel asymmetric multi-stream architecture with convolutional filter supervision and a non-random layer initialization. Experimental results show that our approach achieves state-of-the-art on three publicly available fine-grained recognition datasets (CUB-200-2011, Stanford Cars and FGVC-Aircraft). Ablation studies and visualizations are provided to understand our approach.", "We present a simple and effective architecture for fine-grained visual recognition called Bilinear Convolutional Neural Networks (B-CNNs). These networks represent an image as a pooled outer product of features derived from two CNNs and capture localized feature interactions in a translationally invariant manner. B-CNNs belong to the class of orderless texture representations but unlike prior work they can be trained in an end-to-end manner. Our most accurate model obtains 84.1 , 79.4 , 86.9 and 91.3 per-image accuracy on the Caltech-UCSD birds [67], NABirds [64], FGVC aircraft [42], and Stanford cars [33] dataset respectively and runs at 30 frames-per-second on a NVIDIA Titan X GPU. We then present a systematic analysis of these networks and show that (1) the bilinear features are highly redundant and can be reduced by an order of magnitude in size without significant loss in accuracy, (2) are also effective for other image classification tasks such as texture and scene recognition, and (3) can be trained from scratch on the ImageNet dataset offering consistent improvements over the baseline architecture. 
Finally, we present visualizations of these models on various datasets using top activations of neural units and gradient-based inversion techniques. The source code for the complete system is available at this http URL.", "Recognizing fine-grained categories (e.g., bird species) highly relies on discriminative part localization and part-based fine-grained feature learning. Existing approaches predominantly solve these challenges independently, while neglecting the fact that part localization (e.g., head of a bird) and fine-grained feature learning (e.g., head shape) are mutually correlated. In this paper, we propose a novel part learning approach by a multi-attention convolutional neural network (MA-CNN), where part generation and feature learning can reinforce each other. MA-CNN consists of convolution, channel grouping and part classification sub-networks. The channel grouping network takes as input feature channels from convolutional layers, and generates multiple parts by clustering, weighting and pooling from spatially-correlated channels. The part classification network further classifies an image by each individual part, through which more discriminative fine-grained features can be learned. Two losses are proposed to guide the multi-task learning of channel grouping and part classification, which encourages MA-CNN to generate more discriminative parts from feature channels and learn better fine-grained features from parts in a mutual reinforced way. MA-CNN does not need bounding box part annotation and can be trained end-to-end. We incorporate the learned parts from MA-CNN with part-CNN for recognition, and show the best performances on three challenging published fine-grained datasets, e.g., CUB-Birds, FGVC-Aircraft and Stanford-Cars.", "Existing image classification datasets used in computer vision tend to have an even number of images for each object category. In contrast, the natural world is heavily imbalanced, as some species are more abundant and easier to photograph than others. To encourage further progress in challenging real world conditions we present the iNaturalist Challenge 2017 dataset - an image classification benchmark consisting of 675,000 images with over 5,000 different species of plants and animals. It features many visually similar species, captured in a wide variety of situations, from all over the world. Images were collected with different camera types, have varying image quality, have been verified by multiple citizen scientists, and feature a large class imbalance. We discuss the collection of the dataset and present baseline results for state-of-the-art computer vision classification models. Results show that current non-ensemble based methods achieve only 64% top-one classification accuracy, illustrating the difficulty of the dataset. Finally, we report results from a competition that was held with the data.", "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. 
This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "Semantic part localization can facilitate fine-grained categorization by explicitly isolating subtle appearance differences associated with specific object parts. Methods for pose-normalized representations have been proposed, but generally presume bounding box annotations at test time due to the difficulty of object detection. We propose a model for fine-grained categorization that overcomes these limitations by leveraging deep convolutional features computed on bottom-up region proposals. Our method learns whole-object and part detectors, enforces learned geometric constraints between them, and predicts a fine-grained category from a pose-normalized representation. Experiments on the Caltech-UCSD bird dataset confirm that our method outperforms state-of-the-art fine-grained categorization methods in an end-to-end evaluation without requiring a bounding box at test time.", "Although Deep Convolutional Neural Networks (CNNs) have liberated their power in various computer vision tasks, the most important components of CNN, convolutional layers and fully connected layers, are still limited to linear transformations. In this paper, we propose a novel Factorized Bilinear (FB) layer to model the pairwise feature interactions by considering the quadratic terms in the transformations. Compared with existing methods that tried to incorporate complex non-linearity structures into CNNs, the factorized parameterization makes our FB layer only require a linear increase of parameters and affordable computational cost. To further reduce the risk of overfitting of the FB layer, a specific remedy called DropFactor is devised during the training process. We also analyze the connection between FB layer and some existing models, and show FB layer is a generalization to them. Finally, we validate the effectiveness of FB layer on several widely adopted datasets including CIFAR-10, CIFAR-100 and ImageNet, and demonstrate superior results compared with various state-of-the-art deep models.", "This paper aims to highlight vision related tasks centered around “car”, which has been largely neglected by vision community in comparison to other objects. We show that there are still many interesting car-related problems and applications, which are not yet well explored and researched. To facilitate future car-related research, in this paper we present our on-going effort in collecting a large-scale dataset, “CompCars”, that covers not only different car views, but also their different internal and external parts, and rich attributes. Importantly, the dataset is constructed with a cross-modality nature, containing a surveillance-nature set and a web-nature set. 
We further demonstrate a few important applications exploiting the dataset, namely car model classification, car model verification, and attribute prediction. We also discuss specific challenges of the car-related problems and other potential applications that are worth further investigation. The latest dataset can be downloaded at http://mmlab.ie.cuhk.edu.hk/datasets/comp_cars/index.html", "", "The deep learning technology has shown impressive performance in various vision tasks such as image classification, object detection and semantic segmentation. In particular, recent advances of deep learning techniques bring encouraging performance to fine-grained image classification which aims to distinguish subordinate-level categories, such as bird species or dog breeds. This task is extremely challenging due to high intra-class and low inter-class variance. In this paper, we review four types of deep learning based fine-grained image classification approaches, including the general convolutional neural networks (CNNs), part detection based, ensemble of networks based and visual attention based fine-grained image classification approaches. Besides, the deep learning based semantic segmentation approaches are also covered in this paper. The region proposal based and fully convolutional networks based approaches for semantic segmentation are introduced respectively.", "Fine-grained object classification attracts increasing attention in multimedia applications. However, it is a quite challenging problem due to the subtle interclass difference and large intraclass variation. Recently, visual attention models have been applied to automatically localize the discriminative regions of an image for better capturing critical difference, which have demonstrated promising performance. Unfortunately, without consideration of the diversity in the attention process, most existing attention models perform poorly in classifying fine-grained objects. In this paper, we propose a diversified visual attention network (DVAN) to address the problem of fine-grained object classification, which substantially relieves the dependency on strongly supervised information for learning to localize discriminative regions compared with attention-less models. More importantly, DVAN explicitly pursues the diversity of attention and is able to gather discriminative information to the maximal extent. Multiple attention canvases are generated to extract convolutional features for attention. An LSTM recurrent unit is employed to learn the attentiveness and discrimination of attention canvases. The proposed DVAN has the ability to attend the object from coarse to fine granularity, and a dynamic internal representation for classification is built up by incrementally combining the information from different locations and scales of the image. Extensive experiments conducted on CUB-2011, Stanford Dogs, and Stanford Cars datasets have demonstrated that the proposed DVAN achieves competitive performance compared to the state-of-the-art approaches, without using any prior knowledge, user interaction, or external resource in training and testing.", "Most convolutional neural networks (CNNs) lack midlevel layers that model semantic parts of objects. This limits CNN-based methods from reaching their full potential in detecting and utilizing small semantic parts in recognition. Introducing such mid-level layers can facilitate the extraction of part-specific features which can be utilized for better recognition performance. 
This is particularly important in the domain of fine-grained recognition. In this paper, we propose a new CNN architecture that integrates semantic part detection and abstraction (SPDA-CNN) for fine-grained classification. The proposed network has two sub-networks: one for detection and one for recognition. The detection sub-network has a novel top-down proposal method to generate small semantic part candidates for detection. The classification sub-network introduces novel part layers that extract features from parts detected by the detection sub-network, and combine them for recognition. As a result, the proposed architecture provides an end-to-end network that performs detection, localization of multiple semantic parts, and whole object recognition within one framework that shares the computation of convolutional filters. Our method outperforms state-of-the-art methods by a large margin for small parts detection (e.g. our precision of 93.40% vs the best previous precision of 74.00% for detecting the head on CUB-2011). It also compares favorably to the existing state-of-the-art on fine-grained classification, e.g. it achieves 85.14% accuracy on CUB-2011." ] }
1908.07630
2969858281
Transfer learning enhances learning across tasks, by leveraging previously learned representations -- if they are properly chosen. We describe an efficient method to accurately estimate the appropriateness of a previously trained model for use in a new learning task. We use this measure, which we call "Predict To Learn" ("P2L"), in the two very different domains of images and semantic relations, where it predicts, from a set of "source" models, the one model most likely to produce effective transfer for training a given "target" model. We validate our approach thoroughly, by assembling a collection of candidate source models, then fine-tuning each candidate to perform each of a collection of target tasks, and finally measuring how well transfer has been enhanced. Across 95 tasks within multiple domains (image classification and semantic relations), the P2L approach was able to select the best transfer learning model on average, while the heuristic of choosing the model trained with the largest data set selected the best model in only 55 cases. These results suggest that P2L captures important information in common between source and target tasks, and that this shared informational structure contributes to successful transfer learning more than simple data size.
The transfer learning literature explores a number of different topics and strategies such as few-shot learning @cite_19 @cite_29 , domain adaptation @cite_6 , weight synthesis @cite_2 , and multi-task learning @cite_33 @cite_24 @cite_8 . Some works propose novel combinations of these approaches, yielding new training architectures and optimization objectives to improve transfer performance under conditions of domain transfer with limited or incomplete annotations @cite_10 .
{ "cite_N": [ "@cite_33", "@cite_8", "@cite_29", "@cite_6", "@cite_24", "@cite_19", "@cite_2", "@cite_10" ], "mid": [ "2152917747", "2763323349", "2950276680", "2052293776", "2517038008", "181016146", "2045416664", "2963049866" ], "abstract": [ "Creating labeled training data for relation extraction is expensive. In this paper, we study relation extraction in a special weakly-supervised setting when we have only a few seed instances of the target relation type we want to extract but we also have a large amount of labeled instances of other relation types. Observing that different relation types can share certain common structures, we propose to use a multi-task learning method coupled with human guidance to address this weakly-supervised relation extraction problem. The proposed framework models the commonality among different relation types through a shared weight vector, enables knowledge learned from the auxiliary relation types to be transferred to the target relation type, and allows easy control of the tradeoff between precision and recall. Empirical evaluation on the ACE 2004 data set shows that the proposed method substantially improves over two baseline methods.", "The machine learning approach provides a useful tool when the amount of data is very large and a model is not available to explain the generation and relation of the data set. The Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques provides a set of practical applications for solving problems and applying various techniques in automatic data extraction and setting. A defining collection of field advancements, this Handbook of Research fills the gap between theory and practice, providing a strong reference for academicians, researchers, and practitioners.", "This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.", "The transfer learning and domain adaptation problems originate from a distribution mismatch between the source and target data distribution. The causes of such mismatch are traditionally considered different. Thus, transfer learn- ing and domain adaptation algorithms are designed to ad- dress different issues, and cannot be used in both settings unless substantially modified. Still, one might argue that these problems are just different declinations of learning to learn, i.e. the ability to leverage over prior knowledge when attempting to solve a new task. We propose a learning to learn framework able to lever- age over source data regardless of the origin of the distri- bution mismatch. We consider prior models as experts, and use their output confidence value as features. 
We use them to build the new target model, combined with the features from the target data through a high-level cue integration scheme. This results in a class of algorithms usable in a plug-and-play fashion over any learning to learn scenario, from binary and multi-class transfer learning to single and multiple source domain adaptation settings. Experiments on several public datasets show that our approach consistently achieves the state of the art.", "", "Extracting the relations that exist between two entities is an important step in numerous Web-related tasks such as information extraction. A supervised relation extraction system that is trained to extract a particular relation type might not accurately extract a new type of a relation for which it has not been trained. However, it is costly to create training data manually for every new relation type that one might want to extract. We propose a method to adapt an existing relation extraction system to extract new relation types with minimum supervision. Our proposed method comprises two stages: learning a lower-dimensional projection between different relations, and learning a relational classifier for the target relation type with instance sampling. We evaluate the proposed method using a dataset that contains 2000 instances for 20 different relation types. Our experimental results show that the proposed method achieves a statistically significant macro-average F-score of 62.77. Moreover, the proposed method outperforms numerous baselines and a previously proposed weakly-supervised relation extraction method.", "Modifying weights within a recurrent network to improve performance on a task has proven to be difficult. Echo-state networks in which modification is restricted to the weights of connections onto network outputs provide an easier alternative, but at the expense of modifying the typically sparse architecture of the network by including feedback from the output back into the network. We derive methods for using the values of the output weights from a trained echo-state network to set recurrent weights within the network. The result of this “transfer of learning” is a recurrent network that performs the task without requiring the output feedback present in the original network. We also discuss a hybrid version in which online learning is applied to both output and recurrent weights. Both approaches provide efficient ways of training recurrent networks to perform complex tasks. Through an analysis of the conditions required to make transfer of learning work, we define the concept of a “self-sensing” network state, and we compare and contrast this with compressed sensing.", "We propose a framework that learns a representation transferable across different domains and tasks in a data efficient manner. Our approach battles domain shift with a domain adversarial loss, and generalizes the embedding to novel task using a metric learning-based approach. Our model is simultaneously optimized on labeled source data and unlabeled or sparsely labeled data in the target domain. Our method shows compelling results on novel classes within a new domain even when only a few labeled examples per class are available, outperforming the prevalent fine-tuning approach. In addition, we demonstrate the effectiveness of our framework on the transfer learning task from image object recognition to video action recognition." ] }
1908.07630
2969858281
Transfer learning enhances learning across tasks, by leveraging previously learned representations -- if they are properly chosen. We describe an efficient method to accurately estimate the appropriateness of a previously trained model for use in a new learning task. We use this measure, which we call "Predict To Learn" ("P2L"), in the two very different domains of images and semantic relations, where it predicts, from a set of "source" models, the one model most likely to produce effective transfer for training a given "target" model. We validate our approach thoroughly, by assembling a collection of candidate source models, then fine-tuning each candidate to perform each of a collection of target tasks, and finally measuring how well transfer has been enhanced. Across 95 tasks within multiple domains (image classification and semantic relations), the P2L approach was able to select the best transfer learning model on average, while the heuristic of choosing the model trained with the largest data set selected the best model in only 55 cases. These results suggest that P2L captures important information in common between source and target tasks, and that this shared informational structure contributes to successful transfer learning more than simple data size.
Several approaches have been tried to transfer robust representations based on large numbers of examples to new tasks. These transfer learning approaches share a common intuition @cite_28 : that networks which have learned compact representations of a "source" task can reuse these representations to achieve higher performance on a related "target" task. Different approaches use different techniques to transfer previous representations: data-based approaches attempt to identify appropriate data used in the source task to supplement target task training, weight-based approaches attempt to leverage source task weight matrices, and architecture-based approaches involve re-using the architecture or hyper-parameters of the source network @cite_32 @cite_18 . These approaches, often supplemented by related small-data techniques such as bootstrapping, can yield improvements in performance @cite_21 @cite_1 @cite_24 @cite_0 @cite_14 . One approach to transfer learning is to leverage existing deep nets trained on a large dataset, for example VGG16 @cite_0 for images, or PCNN @cite_20 for relation prediction. The trained weights in these networks have captured a representation of the input that can be transferred by fine-tuning the weights or retraining the final dense layer of the network on the new task.
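As a minimal sketch of the last option, retraining only the final dense layer, the snippet below freezes a pretrained VGG16 from torchvision and trains a fresh head on the target task. The class count, learning rate, and dummy data are assumptions, and the weights enum requires a reasonably recent torchvision.

```python
import torch
import torch.nn as nn
from torchvision import models

# Source model pretrained on a large dataset (here: ImageNet).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze all transferred weights; only the new head will be trained.
for param in model.parameters():
    param.requires_grad = False

num_target_classes = 10  # assumption: the target task has 10 classes
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features,
                                 num_target_classes)

optimizer = torch.optim.Adam(model.classifier[-1].parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy data.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, num_target_classes, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Fine-tuning the full network instead amounts to skipping the freezing loop and passing all parameters to the optimizer, usually with a smaller learning rate.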
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_28", "@cite_21", "@cite_1", "@cite_32", "@cite_24", "@cite_0", "@cite_20" ], "mid": [ "2165698076", "2149933564", "2616180702", "2203224402", "2964352358", "2122838776", "2517038008", "2962835968", "2251135946" ], "abstract": [ "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.", "Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. 
The objective is to make these higher-level representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution P(x) is structurally related to some task of interest, say predicting P(y|x). This paper focuses on the context of the Unsupervised and Transfer Learning Challenge, on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution.", "Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks. In the common scenario, a ConvNet is trained on a large labeled dataset (source) and the feed-forward units activation of the trained network, at a certain layer of the network, is used as a generic representation of an input image for a task with relatively smaller training set (target). Recent studies have shown this form of representation transfer to be suitable for a wide range of target visual recognition tasks. This paper introduces and investigates several factors affecting the transferability of such representations. It includes parameters for training of the source ConvNet such as its architecture, distribution of the training data, etc. and also the parameters of feature extraction such as layer of the trained ConvNet, dimensionality reduction, etc. Then, by optimizing these factors, we show that significant improvements can be achieved on various (17) visual recognition tasks. We further show that these visual recognition tasks can be categorically ordered based on their similarity to the source task such that a correlation between the performance of tasks and their similarity to the source task w.r.t. the proposed factors is observed.", "Traditional machine learning makes a basic assumption: the training and test data should be under the same distribution. However, in many cases, this identical-distribution assumption does not hold. The assumption might be violated when a task from one new domain comes, while there are only labeled data from a similar old domain. Labeling the new data can be costly and it would also be a waste to throw away all the old data. In this paper, we present a novel transfer learning framework called TrAdaBoost, which extends boosting-based learning algorithms (Freund & Schapire, 1997). TrAdaBoost allows users to utilize a small amount of newly labeled data to leverage the old data to construct a high-quality classification model for the new data. We show that this method can allow us to learn an accurate model using only a tiny amount of new data and a large amount of old data, even when the new data are not sufficient to train a model alone. We show that TrAdaBoost allows knowledge to be effectively transferred from the old data to the new. 
The effectiveness of our algorithm is analyzed theoretically and empirically to show that our iterative algorithm can converge well to an accurate model.", "", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Two problems arise when using distant supervision for relation extraction. First, in this method, an already existing knowledge base is heuristically aligned to texts, and the alignment results are treated as labeled data. However, the heuristic alignment can fail, resulting in the wrong label problem. In addition, in previous approaches, statistical models have typically been applied to ad hoc features. The noise that originates from the feature extraction process can cause poor performance. In this paper, we propose a novel model dubbed the Piecewise Convolutional Neural Networks (PCNNs) with multi-instance learning to address these two problems. To solve the first problem, distant supervised relation extraction is treated as a multi-instance problem in which the uncertainty of instance labels is taken into account. To address the latter problem, we avoid feature engineering and instead adopt a convolutional architecture with piecewise max pooling to automatically learn relevant features. Experiments show that our method is effective and outperforms several competitive baseline methods." ] }
1908.07630
2969858281
Transfer learning enhances learning across tasks, by leveraging previously learned representations -- if they are properly chosen. We describe an efficient method to accurately estimate the appropriateness of a previously trained model for use in a new learning task. We use this measure, which we call "Predict To Learn" ("P2L"), in the two very different domains of images and semantic relations, where it predicts, from a set of "source" models, the one model most likely to produce effective transfer for training a given "target" model. We validate our approach thoroughly, by assembling a collection of candidate source models, then fine-tuning each candidate to perform each of a collection of target tasks, and finally measuring how well transfer has been enhanced. Across 95 tasks within multiple domains (image classification and semantic relations), the P2L approach was able to select the best transfer learning model on average, while the heuristic of choosing the model trained with the largest data set selected the best model in only 55 cases. These results suggest that P2L captures important information in common between source and target tasks, and that this shared informational structure contributes to successful transfer learning more than simple data size.
While all these methods seek to improve performance on the target task by transfer from the source task, most assume there is only one source model, usually trained on ImageNet @cite_15 . Additionally, this approach involves a number of meta-learning decisions, although in general each change from the original source architecture tends to decrease the resulting classification performance @cite_14 . Meta-learning @cite_5 is another approach to representation transfer. While meta-learning typically deals with training a base model on a variety of different learning tasks, transfer learning is about learning from multiple related learning tasks @cite_17 . The efficiency of transfer learning depends on selecting the right source data, whereas meta-learning models could suffer from 'negative transfer' @cite_18 of knowledge if the source and target domains are unrelated. Surprisingly, in image classification, performance gains are commonly observed even in cases where the initialization data appears visually and semantically different from the target dataset (such as ImageNet and medical imaging datasets).
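To make the contrast concrete, here is a minimal first-order sketch in the spirit of MAML @cite_17 (ours, not the reference implementation): an inner loop adapts cloned weights on each task's support set, and the query losses of the adapted weights drive the update of the shared initialization. The model, task construction, and step sizes are illustrative, and torch.func requires PyTorch 2.x.

```python
import torch
import torch.nn as nn

# Illustrative base model; any differentiable nn.Module works the same way.
model = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

def run_with(weights, x):
    # Evaluate the model under task-adapted weights instead of its own.
    return torch.func.functional_call(model, weights, (x,))

def meta_step(tasks, inner_lr=0.01):
    """One first-order MAML-style meta-update over a batch of tasks.
    Each task is ((x_support, y_support), (x_query, y_query))."""
    meta_opt.zero_grad()
    for (x_s, y_s), (x_q, y_q) in tasks:
        # Inner loop: adapt a differentiable copy of the weights.
        fast = {n: p.clone() for n, p in model.named_parameters()}
        grads = torch.autograd.grad(criterion(run_with(fast, x_s), y_s),
                                    list(fast.values()))
        fast = {n: w - inner_lr * g for (n, w), g in zip(fast.items(), grads)}
        # Outer loop: query loss of the adapted weights accumulates
        # gradients into the shared initialization.
        criterion(run_with(fast, x_q), y_q).backward()
    meta_opt.step()

# Illustrative call on two random regression tasks.
task = lambda: ((torch.randn(10, 1), torch.randn(10, 1)),
                (torch.randn(10, 1), torch.randn(10, 1)))
meta_step([task(), task()])
```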
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_5", "@cite_15", "@cite_17" ], "mid": [ "2165698076", "2149933564", "2091118421", "2108598243", "2604763608" ], "abstract": [ "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.", "Met alearning attracted considerable interest in the machine learning community in the last years. Yet, some disagreement remains on what does or what does not constitute a met alearning problem and in which contexts the term is used in. 
This survey aims at giving an all-encompassing overview of the research directions pursued under the umbrella of metalearning, reconciling different definitions given in scientific literature, listing the choices involved when designing a metalearning system and identifying some of the future research challenges in this domain.", "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies." ] }
1908.07630
2969858281
Transfer learning enhances learning across tasks, by leveraging previously learned representations -- if they are properly chosen. We describe an efficient method to accurately estimate the appropriateness of a previously trained model for use in a new learning task. We use this measure, which we call "Predict To Learn" ("P2L"), in the two very different domains of images and semantic relations, where it predicts, from a set of "source" models, the one model most likely to produce effective transfer for training a given "target" model. We validate our approach thoroughly, by assembling a collection of candidate source models, then fine-tuning each candidate to perform each of a collection of target tasks, and finally measuring how well transfer has been enhanced. Across 95 tasks within multiple domains (image classification and semantic relations), the P2L approach was able to select the best transfer learning model on average, while the heuristic of choosing the model trained with the largest data set selected the best model in only 55 cases. These results suggest that P2L captures important information in common between source and target tasks, and that this shared informational structure contributes to successful transfer learning more than simple data size.
Our approach is most similar to that of fine-tuning with co-training @cite_14 . That method begins by using low-level features to identify images within a source dataset having similar textures to a target dataset, and concludes by using a multi-task objective to fine-tune the target task using these images. A related approach has been used to enhance performance and reduce training time in document classification @cite_25 and to identify examples to supplement training data @cite_16 @cite_14 . Our goal is to extend this approach to high-level features, and to domains outside computer vision to construct a more complete map of the feature space of a trained network. In this way our approach has some parallels with "learning to transfer" approaches @cite_34 , which attempt to train a source model optimized for transfer rather than target accuracy.
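A hedged sketch of the selection step described above: rank source images by the distance between a simple low-level texture descriptor and the target set's mean descriptor. The gradient-magnitude histogram and Euclidean distance here are our stand-ins, not the exact filter-bank features of the cited work.

```python
import numpy as np
from scipy.ndimage import sobel

def lowlevel_descriptor(image: np.ndarray, bins: int = 32) -> np.ndarray:
    """Histogram of gradient magnitudes of a 2D grayscale array:
    a crude low-level texture descriptor."""
    gx, gy = sobel(image, axis=0), sobel(image, axis=1)
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins,
                           range=(0.0, float(mag.max()) + 1e-8), density=True)
    return hist

def select_source_images(source_images, target_images, k: int = 100):
    """Indices of the k source images whose descriptors are closest
    to the mean descriptor of the target set."""
    target_mean = np.mean([lowlevel_descriptor(t) for t in target_images], axis=0)
    dists = [np.linalg.norm(lowlevel_descriptor(s) - target_mean)
             for s in source_images]
    return np.argsort(dists)[:k]
```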
{ "cite_N": [ "@cite_34", "@cite_14", "@cite_25", "@cite_16" ], "mid": [ "2745420784", "2149933564", "2963336383", "2591924527" ], "abstract": [ "Transfer learning borrows knowledge from a source domain to facilitate learning in a target domain. Two primary issues to be addressed in transfer learning are what and how to transfer. For a pair of domains, adopting different transfer learning algorithms results in different knowledge transferred between them. To discover the optimal transfer learning algorithm that maximally improves the learning performance in the target domain, researchers have to exhaustively explore all existing transfer learning algorithms, which is computationally intractable. As a trade-off, a sub-optimal algorithm is selected, which requires considerable expertise in an ad-hoc way. Meanwhile, it is widely accepted in educational psychology that human beings improve transfer learning skills of deciding what to transfer through meta-cognitive reflection on inductive transfer learning practices. Motivated by this, we propose a novel transfer learning framework known as Learning to Transfer (L2T) to automatically determine what and how to transfer are the best by leveraging previous transfer learning experiences. We establish the L2T framework in two stages: 1) we first learn a reflection function encrypting transfer learning skills from experiences; and 2) we infer what and how to transfer for a newly arrived pair of domains by optimizing the reflection function. Extensive experiments demonstrate the L2T's superiority over several state-of-the-art transfer learning algorithms and its effectiveness on discovering more transferable knowledge.", "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.", "In this article, a region-based Deep Convolutional Neural Network framework is presented for document structure learning. 
The contribution of this work involves efficient training of region based classifiers and effective ensembling for document image classification. A primary level of ‘inter-domain’ transfer learning is used by exporting weights from a pre-trained VGG16 architecture on the ImageNet dataset to train a document classifier on whole document images. Exploiting the nature of region based influence modelling, a secondary level of ‘intra-domain’ transfer learning is used for rapid training of deep learning models for image segments. Finally, a stacked generalization based ensembling is utilized for combining the predictions of the base deep neural network models. The proposed method achieves state-of-the-art accuracy of 92.21% on the popular RVL-CDIP document image dataset, exceeding the benchmarks set by the existing algorithms.", "Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our deep transfer learning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120). In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2% - 10% using a single model. Codes and models are available at https://github.com/ZYYSzj/Selective-Joint-Fine-tuning." ] }
1908.07195
2969990170
Most of the existing generative adversarial networks (GAN) for text generation suffer from the instability of reinforcement learning training algorithms such as policy gradient, leading to unstable performance. To tackle this problem, we propose a novel framework called Adversarial Reward Augmented Maximum Likelihood (ARAML). During adversarial training, the discriminator assigns rewards to samples which are acquired from a stationary distribution near the data rather than the generator's distribution. The generator is optimized with maximum likelihood estimation augmented by the discriminator's rewards instead of policy gradient. Experiments show that our model can outperform state-of-the-art text GANs with a more stable training process.
As mentioned above, MLE suffers from the exposure bias problem @cite_3 @cite_33 . Thus, reinforcement learning algorithms such as policy gradient @cite_33 and actor-critic @cite_20 have been introduced to text generation tasks. @cite_37 proposed an efficient and stable approach called Reward Augmented Maximum Likelihood (RAML), which connects the log-likelihood and expected rewards to incorporate the MLE training objective into the RL framework.
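For reference, RAML's link between likelihood and reward can be written compactly (our notation, consistent with the @cite_37 abstract below): the target distribution exponentiates the temperature-scaled reward around the ground truth y*, and training maximizes likelihood under it.

```latex
% Exponentiated payoff distribution around the ground truth y^*
q(y \mid y^{*}; \tau) = \frac{\exp\{ r(y, y^{*}) / \tau \}}{\sum_{y'} \exp\{ r(y', y^{*}) / \tau \}}

% RAML objective: maximum likelihood under the reward-smoothed targets
\mathcal{L}_{\mathrm{RAML}}(\theta) = - \, \mathbb{E}_{(x,\, y^{*})} \Big[ \sum_{y} q(y \mid y^{*}; \tau) \log p_{\theta}(y \mid x) \Big]
```

As the temperature tau goes to zero, q collapses onto the ground truth and the objective reduces to ordinary MLE.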
{ "cite_N": [ "@cite_37", "@cite_20", "@cite_33", "@cite_3" ], "mid": [ "2963962369", "2964352247", "2963248296", "648786980" ], "abstract": [ "A key problem in structured output prediction is direct optimization of the task reward function that matters for test evaluation. This paper presents a simple and computationally efficient approach to incorporate task reward into a maximum likelihood framework. By establishing a link between the log-likelihood and expected reward objectives, we show that an optimal regularized expected reward is achieved when the conditional distribution of the outputs given the inputs is proportional to their exponentiated scaled rewards. Accordingly, we present a framework to smooth the predictive probability of the outputs using their corresponding rewards. We optimize the conditional log-probability of augmented outputs that are sampled proportionally to their exponentiated scaled rewards. Experiments on neural sequence to sequence models for speech recognition and machine translation show notable improvements over a maximum likelihood baseline by using reward augmented maximum likelihood (RML), where the rewards are defined as the negative edit distance between the outputs and the ground truth labels.", "We present an approach to training neural networks to generate sequences using actor-critic methods from reinforcement learning (RL). Current log-likelihood training methods are limited by the discrepancy between their training and testing modes, as models must generate tokens conditioned on their previous guesses rather than the ground-truth tokens. We address this problem by introducing a textit critic network that is trained to predict the value of an output token, given the policy of an textit actor network. This results in a training procedure that is much closer to the test phase, and allows us to directly optimize for a task-specific score such as BLEU. Crucially, since we leverage these techniques in the supervised learning setting rather than the traditional RL setting, we condition the critic network on the ground-truth output. We show that our method leads to improved performance on both a synthetic task, and for German-English machine translation. Our analysis paves the way for such methods to be applied in natural language generation tasks, such as machine translation, caption generation, and dialogue modelling.", "Abstract: Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster.", "Recurrent Neural Networks can be trained to produce sequences of tokens given some input, as exemplified by recent results in machine translation and image captioning. The current approach to training them consists of maximizing the likelihood of each token in the sequence given the current (recurrent) state and the previous token. 
At inference, the unknown previous token is then replaced by a token generated by the model itself. This discrepancy between training and inference can yield errors that can accumulate quickly along the generated sequence. We propose a curriculum learning strategy to gently change the training process from a fully guided scheme using the true previous token, towards a less guided scheme which mostly uses the generated token instead. Experiments on several sequence prediction tasks show that this approach yields significant improvements. Moreover, it was used succesfully in our winning entry to the MSCOCO image captioning challenge, 2015." ] }
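To make the RAML objective above concrete, here is a minimal numpy sketch of the exponentiated payoff distribution that weights augmented targets by their scaled rewards; the candidate strings, the negative-edit-distance rewards, and the temperature value are illustrative assumptions, not values taken from @cite_37 's experiments.

```python
import numpy as np

def raml_weights(rewards, tau=0.8):
    """Exponentiated payoff distribution q(y|y*) proportional to
    exp(r(y, y*) / tau), as used by RAML to weight augmented targets."""
    scaled = np.asarray(rewards, dtype=float) / tau
    scaled -= scaled.max()            # subtract max for numerical stability
    w = np.exp(scaled)
    return w / w.sum()

# Hypothetical candidate targets near one ground-truth sentence, scored
# with negative edit distance as the reward (illustrative values).
candidates = ["the cat sat", "the cat sit", "a cat sat"]
rewards = [0.0, -1.0, -1.0]
q = raml_weights(rewards, tau=0.8)
# Training then maximizes sum_i q[i] * log p_theta(candidates[i] | x),
# i.e. reward-weighted maximum likelihood rather than policy gradient.
```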
1908.07195
2969990170
Most of the existing generative adversarial networks (GAN) for text generation suffer from the instability of reinforcement learning training algorithms such as policy gradient, leading to unstable performance. To tackle this problem, we propose a novel framework called Adversarial Reward Augmented Maximum Likelihood (ARAML). During adversarial training, the discriminator assigns rewards to samples which are acquired from a stationary distribution near the data rather than the generator's distribution. The generator is optimized with maximum likelihood estimation augmented by the discriminator's rewards instead of policy gradient. Experiments show that our model can outperform state-of-the-art text GANs with a more stable training process.
The works most similar to our model are RAML @cite_37 and MaliGAN @cite_1 : 1) Compared with RAML, our model adds a discriminator to learn the reward signals instead of choosing existing metrics as rewards. We believe that our model can adapt to various text generation tasks, particularly those without explicit evaluation metrics. 2) Unlike MaliGAN, we acquire samples from a fixed distribution near the real data rather than from the generator's distribution, which is expected to make the training process more stable.
{ "cite_N": [ "@cite_37", "@cite_1" ], "mid": [ "2963962369", "2593383075" ], "abstract": [ "A key problem in structured output prediction is direct optimization of the task reward function that matters for test evaluation. This paper presents a simple and computationally efficient approach to incorporate task reward into a maximum likelihood framework. By establishing a link between the log-likelihood and expected reward objectives, we show that an optimal regularized expected reward is achieved when the conditional distribution of the outputs given the inputs is proportional to their exponentiated scaled rewards. Accordingly, we present a framework to smooth the predictive probability of the outputs using their corresponding rewards. We optimize the conditional log-probability of augmented outputs that are sampled proportionally to their exponentiated scaled rewards. Experiments on neural sequence to sequence models for speech recognition and machine translation show notable improvements over a maximum likelihood baseline by using reward augmented maximum likelihood (RML), where the rewards are defined as the negative edit distance between the outputs and the ground truth labels.", "Despite the successes in capturing continuous distributions, the application of generative adversarial networks (GANs) to discrete settings, like natural language tasks, is rather restricted. The fundamental reason is the difficulty of back-propagation through discrete random variables combined with the inherent instability of the GAN training objective. To address these problems, we propose Maximum-Likelihood Augmented Discrete Generative Adversarial Networks. Instead of directly optimizing the GAN objective, we derive a novel and low-variance objective using the discriminator's output that follows corresponds to the log-likelihood. Compared with the original, the new objective is proved to be consistent in theory and beneficial in practice. The experimental results on various discrete datasets demonstrate the effectiveness of the proposed approach." ] }
1908.07325
2969224942
Recognizing multiple labels of images is a practical and challenging task, and significant progress has been made by searching semantic-aware regions and modeling label dependency. However, current methods cannot locate the semantic regions accurately due to the lack of part-level supervision or semantic guidance. Moreover, they cannot fully explore the mutual interactions among the semantic regions and do not explicitly model the label co-occurrence. To address these issues, we propose a Semantic-Specific Graph Representation Learning (SSGRL) framework that consists of two crucial modules: 1) a semantic decoupling module that incorporates category semantics to guide learning semantic-specific representations and 2) a semantic interaction module that correlates these representations with a graph built on the statistical label co-occurrence and explores their interactions via a graph propagation mechanism. Extensive experiments on public benchmarks show that our SSGRL framework outperforms current state-of-the-art methods by a sizable margin, e.g. with an mAP improvement of 2.5 , 2.6 , 6.7 , and 3.1 on the PASCAL VOC 2007 & 2012, Microsoft-COCO and Visual Genome benchmarks, respectively. Our codes and models are available at this https URL.
Recent progress on multi-label image classification relies on the combination of object localization and deep learning techniques @cite_28 @cite_10 . Generally, these methods introduced object proposals @cite_22 that were assumed to contain all possible foreground objects in the image and aggregated features extracted from all these proposals to incorporate local information. Although they achieved notable performance improvement, the region candidate localization step usually incurred redundant computation and prevented end-to-end training with deep neural networks. @cite_14 further utilized a learning-based region proposal network and integrated it with deep neural networks. Although this method could be jointly optimized, it required additional bounding-box annotations to train the proposal generation component. To avoid this requirement, other works @cite_29 @cite_20 resorted to attention mechanisms to locate the informative regions; these methods could be trained with image-level annotations in an end-to-end manner. For example, @cite_20 introduced a spatial transformer to adaptively search semantic-aware regions and then aggregated features from these regions to identify multiple labels. However, lacking supervision and guidance, these methods could merely locate the regions roughly.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_28", "@cite_29", "@cite_10", "@cite_20" ], "mid": [ "2560096627", "7746136", "1567302070", "2592628593", "2410641892", "2963300078" ], "abstract": [ "Deep convolution neural networks (CNNs) have demonstrated advanced performance on single-label image classification, and various progress also has been made to apply CNN methods on multilabel image classification, which requires annotating objects, attributes, scene categories, etc., in a single shot. Recent state-of-the-art approaches to the multilabel image classification exploit the label dependencies in an image, at the global level, largely improving the labeling capacity. However, predicting small objects and visual concepts is still challenging due to the limited discrimination of the global visual features. In this paper, we propose a regional latent semantic dependencies model (RLSD) to address this problem. The utilized model includes a fully convolutional localization architecture to localize the regions that may contain multiple highly dependent labels. The localized regions are further sent to the recurrent neural networks to characterize the latent semantic dependencies at the regional level. Experimental results on several benchmark datasets show that our proposed model achieves the best performance compared to the state-of-the-art models, especially for predicting small objects occurring in the images. Also, we set up an upper bound model (RLSD+ft-RPN) using bounding-box coordinates during training, and the experimental results also show that our RLSD can approach the upper bound without using the bounding-box annotations, which is more realistic in the real world.", "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.", "Convolutional Neural Network (CNN) has demonstrated promising performance in single-label image classification tasks. However, how CNN best copes with multi-label images still remains an open problem, mainly due to the complex underlying object layouts and insufficient multi-label training images. 
In this work, we propose a flexible deep CNN infrastructure, called Hypotheses-CNN-Pooling (HCP), where an arbitrary number of object segment hypotheses are taken as the inputs, then a shared CNN is connected with each hypothesis, and finally the CNN output results from different hypotheses are aggregated with max pooling to produce the ultimate multi-label predictions. Some unique characteristics of this flexible deep CNN infrastructure include: 1) no ground-truth bounding box information is required for training; 2) the whole HCP infrastructure is robust to possibly noisy and or redundant hypotheses; 3) the shared CNN is flexible and can be well pre-trained with a large-scale single-label image dataset, e.g., ImageNet; and 4) it may naturally output multi-label prediction results. Experimental results on Pascal VOC 2007 and VOC 2012 multi-label image datasets well demonstrate the superiority of the proposed HCP infrastructure over other state-of-the-arts. In particular, the mAP reaches 90.5 by HCP only and 93.2 after the fusion with our complementary result in [12] based on hand-crafted features on the VOC 2012 dataset.", "Multi-label image classification is a fundamental but challenging task in computer vision. Great progress has been achieved by exploiting semantic relations between labels in recent years. However, conventional approaches are unable to model the underlying spatial relations between labels in multi-label images, because spatial annotations of the labels are generally not provided. In this paper, we propose a unified deep neural network that exploits both semantic and spatial relations between labels with only image-level supervisions. Given a multi-label image, our proposed Spatial Regularization Network (SRN) generates attention maps for all labels and captures the underlying relations between them via learnable convolutions. By aggregating the regularized classification results with original results by a ResNet-101 network, the classification performance can be consistently improved. The whole deep neural network is trained end-to-end with only image-level annotations, thus requires no additional efforts on image annotations. Extensive evaluations on 3 public datasets with different types of labels show that our approach significantly outperforms state-of-the-arts and has strong generalization capability. Analysis of the learned SRN model demonstrates that it can effectively capture both semantic and spatial relations of labels for improving classification performance.", "Convolutional neural networks (CNNs) have shown great performance as general feature representations for object recognition applications. However, for multi-label images that contain multiple objects from different categories, scales and locations, global CNN features are not optimal. In this paper, we incorporate local information to enhance the feature discriminative power. In particular, we first extract object proposals from each image. With each image treated as a bag and object proposals extracted from it treated as instances, we transform the multi-label recognition problem into a multi-class multi-instance learning problem. Then, in addition to extracting the typical CNN feature representation from each proposal, we propose to make use of ground-truth bounding box annotations (strong labels) to add another level of local information by using nearest-neighbor relationships of local regions to form a multi-view pipeline. 
The proposed multi-view multiinstance framework utilizes both weak and strong labels effectively, and more importantly it has the generalization ability to even boost the performance of unseen categories by partial strong labels from other categories. Our framework is extensively compared with state-of-the-art handcrafted feature based methods and CNN based methods on two multi-label benchmark datasets. The experimental results validate the discriminative power and the generalization ability of the proposed framework. With strong labels, our framework is able to achieve state-of-the-art results in both datasets.", "This paper proposes a novel deep architecture to address multi-label image recognition, a fundamental and practical task towards general visual understanding. Current solutions for this task usually rely on an extra step of extracting hypothesis regions (i.e., region proposals), resulting in redundant computation and sub-optimal performance. In this work, we achieve the interpretable and contextualized multi-label image classification by developing a recurrent memorized-attention module. This module consists of two alternately performed components: i) a spatial transformer layer to locate attentional regions from the convolutional feature maps in a region-proposal-free way and ii) an LSTM (Long-Short Term Memory) sub-network to sequentially predict semantic labeling scores on the located regions while capturing the global dependencies of these regions. The LSTM also output the parameters for computing the spatial transformer. On large-scale benchmarks of multi-label image classification (e.g., MS-COCO and PASCAL VOC 07), our approach demonstrates superior performances over other existing state-of-the-arts in both accuracy and efficiency." ] }
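To make the proposal-aggregation idea concrete, here is a minimal numpy sketch in the spirit of Hypotheses-CNN-Pooling: a shared classifier scores every region hypothesis and the scores are max-pooled into image-level multi-label predictions. The feature dimension, proposal count, and the linear classifier standing in for the shared CNN head are toy assumptions.

```python
import numpy as np

def hcp_style_predict(proposal_feats, W, b):
    """Score each region proposal with a shared classifier, then max-pool
    the scores across proposals to obtain image-level multi-label outputs.
    proposal_feats: (n_proposals, d); W: (d, n_classes); b: (n_classes,)"""
    scores = proposal_feats @ W + b           # (n_proposals, n_classes)
    pooled = scores.max(axis=0)               # max over hypotheses
    return 1.0 / (1.0 + np.exp(-pooled))      # per-class probabilities

feats = np.random.randn(8, 16)                # 8 proposals, 16-d toy features
W, b = np.random.randn(16, 5), np.zeros(5)
probs = hcp_style_predict(feats, W, b)        # 5 label probabilities
```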
1908.04441
2969162116
RGB-Thermal object tracking attempt to locate target object using complementary visual and thermal infrared data. Existing RGB-T trackers fuse different modalities by robust feature representation learning or adaptive modal weighting. However, how to integrate dual attention mechanism for visual tracking is still a subject that has not been studied yet. In this paper, we propose two visual attention mechanisms for robust RGB-T object tracking. Specifically, the local attention is implemented by exploiting the common visual attention of RGB and thermal data to train deep classifiers. We also introduce the global attention, which is a multi-modal target-driven attention estimation network. It can provide global proposals for the classifier together with local proposals extracted from previous tracking result. Extensive experiments on two RGB-T benchmark datasets validated the effectiveness of our proposed algorithm.
RGB-T tracking has received more and more attention in the computer vision community with the growing popularity of thermal infrared sensors. Wu @cite_15 concatenate the image patches from the RGB and thermal sources and then sparsely represent each sample in the target template space for tracking. In @cite_0 , modal weights are introduced for each source to represent image quality and are combined with the sparse representation in a Bayesian filtering framework to perform object tracking. Zhu @cite_17 propose a quality-aware feature aggregation network to adaptively fuse multi-layer deep features and multimodal information for RGB-T tracking. Although these works achieve good performance on RGB-T benchmarks, they still adopt a local search strategy for target localization, and few of them explore long-term visual attention for their trackers.
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_17" ], "mid": [ "2765667535", "2106944077", "2901716381" ], "abstract": [ "In this paper, we propose a novel graph model, called weighted sparse representation regularized graph, to learn a robust object representation using multispectral (RGB and thermal) data for visual tracking. In particular, the tracked object is represented with a graph with image patches as nodes. This graph is dynamically learned from two aspects. First, the graph affinity (i.e., graph structure and edge weights) that indicates the appearance compatibility of two neighboring nodes is optimized based on the weighted sparse representation, in which the modality weight is introduced to leverage RGB and thermal information adaptively. Second, each node weight that indicates how likely it belongs to the foreground is propagated from others along with graph affinity. The optimized patch weights are then imposed on the extracted RGB and thermal features, and the target object is finally located by adopting the structured SVM algorithm. Moreover, we also contribute a comprehensive dataset for RGB-T tracking purpose. Comparing with existing ones, the new dataset has the following advantages: 1) Its size is sufficiently large for large-scale performance evaluation (total frame number: 210K, maximum frames per video pair: 8K). 2) The alignment between RGB-T video pairs is highly accurate, which does not need pre- and post-processing. 3) The occlusion levels are annotated for analyzing the occlusion-sensitive performance of different methods. Extensive experiments on both public and newly created datasets demonstrate the effectiveness of the proposed tracker against several state-of-the-art tracking methods.", "Information from multiple heterogenous data sources (e.g. visible and infrared) or representations (e.g. intensity and edge) have become increasingly important in many video-based applications. Fusion of information from these sources is critical to improve the robustness of related visual information processing systems. In this paper we propose a data fusion approach via sparse representation with applications to robust visual tracking. Specifically, the image patches from different sources of each target candidate are concatenated into a one-dimensional vector that is then sparsely represented in the target template space. The template space representation, which naturally fuses information from different sources, brings several benefits to visual tracking. First, it inherits robustness to appearance contaminations from the previously proposed sparse trackers. Second, it provides a flexible framework that can easily integrate information from different data sources. Third, it can be used for handling various number of data sources, which is very useful for situations where the data inputs arrive at different frequencies. The sparsity in the representation is achieved by solving an l1-regularized least squares problem. The tracking result is then determined by finding the candidate with the smallest approximation error. To propagate the results over time, the sparse solution is combined with the Bayesian state inference framework using the particle filter algorithm. We conducted experiments on several real videos with heterogeneous information sources. 
The results show that the proposed approach can track the target more robustly than several state-of-the-art tracking algorithms.", "This paper investigates how to perform robust visual tracking in adverse and challenging conditions using complementary visual and thermal infrared data (RGB-T tracking). We propose a novel deep network architecture \"quality-aware Feature Aggregation Network (FANet)\" to achieve quality-aware aggregations of both hierarchical features and multimodal information for robust online RGB-T tracking. Unlike existing works that directly concatenate hierarchical deep features, our FANet learns the layer weights to adaptively aggregate them to handle the challenge of significant appearance changes caused by deformation, abrupt motion, background clutter and occlusion within each modality. Moreover, we employ the operations of max pooling, interpolation upsampling and convolution to transform these hierarchical and multi-resolution features into a uniform space at the same resolution for more effective feature aggregation. In different modalities, we elaborately design a multimodal aggregation sub-network to integrate all modalities collaboratively based on the predicted reliability degrees. Extensive experiments on large-scale benchmark datasets demonstrate that our FANet significantly outperforms other state-of-the-art RGB-T tracking methods." ] }
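A small sketch of the sparse-representation step used by the trackers above: a candidate patch, with RGB and thermal channels concatenated into one vector, is coded in the target template space by solving an l1-regularized least squares problem, here with a plain ISTA loop. The dictionary size, regularization weight, and iteration count are illustrative choices, not the papers' settings.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """Minimize 0.5*||D c - y||^2 + lam*||c||_1 with iterative
    shrinkage-thresholding: sparsely code the candidate patch y in the
    target template dictionary D."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = c - (D.T @ (D @ c - y)) / L      # gradient step
        c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return c

# Toy example: 10 templates of 64-d concatenated RGB-T patch vectors.
D = np.random.randn(64, 10); D /= np.linalg.norm(D, axis=0)
y = D @ np.array([1, 0, 0, 0, 0, 0.5, 0, 0, 0, 0]) + 0.01 * np.random.randn(64)
c = ista(D, y)
residual = np.linalg.norm(D @ c - y)   # small residual -> likely the target
```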
1908.04441
2969162116
RGB-Thermal object tracking attempt to locate target object using complementary visual and thermal infrared data. Existing RGB-T trackers fuse different modalities by robust feature representation learning or adaptive modal weighting. However, how to integrate dual attention mechanism for visual tracking is still a subject that has not been studied yet. In this paper, we propose two visual attention mechanisms for robust RGB-T object tracking. Specifically, the local attention is implemented by exploiting the common visual attention of RGB and thermal data to train deep classifiers. We also introduce the global attention, which is a multi-modal target-driven attention estimation network. It can provide global proposals for the classifier together with local proposals extracted from previous tracking result. Extensive experiments on two RGB-T benchmark datasets validated the effectiveness of our proposed algorithm.
The attention mechanism originates from the study of human cognitive neuroscience @cite_9 . In visual tracking, the cosine window map @cite_8 and Gaussian window map @cite_3 are widely used in DCF trackers to suppress the boundary effect, which can be interpreted as a type of visual spatial attention. For short-term tracking, DAVT @cite_5 used discriminative spatial attention, and ACFN @cite_6 developed an attentional mechanism that chooses a subset of the associated correlation filters for visual tracking. Recently, RASNet @cite_13 integrated spatial attention, channel attention, and residual attention to achieve state-of-the-art tracking accuracy. Different from these works, we propose novel local and global attention for robust RGB-T tracking.
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_6", "@cite_3", "@cite_5", "@cite_13" ], "mid": [ "1964846093", "2115441154", "2610871254", "1955741794", "1482209366", "2797812763" ], "abstract": [ "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.", "We present a biologically plausible model of an attentional mechanism for forming position- and scale-invariant representations of objects in the visual world. The model relies on a set of control neurons to dynamically modify the synaptic strengths of intracortical connections so that information from a windowed region of primary visual cortex (V1) is selectively routed to higher cortical areas. Local spatial relationships (i.e., topography) within the attentional window are preserved as information is routed through the cortex. This enables attended objects to be represented in higher cortical areas within an object-centered reference frame that is position and scale invariant. We hypothesize that the pulvinar may provide the control signals for routing information through the cortex. The dynamics of the control neurons are governed by simple differential equations that could be realized by neurobiologically plausible circuits. In preattentive mode, the control neurons receive their input from a low-level “saliency map” representing potentially interesting regions of a scene. During the pattern recognition phase, control neurons are driven by the interaction between top-down (memory) and bottom-up (retinal input) sources. The model respects key neurophysiological, neuroanatomical, and psychophysical data relating to attention, and it makes a variety of experimentally testable predictions.", "We propose a new tracking framework with an attentional mechanism that chooses a subset of the associated correlation filters for increased robustness and computational efficiency. The subset of filters is adaptively selected by a deep attentional network according to the dynamic properties of the tracking target. Our contributions are manifold, and are summarised as follows: (i) Introducing the Attentional Correlation Filter Network which allows adaptive tracking of dynamic targets. (ii) Utilising an attentional network which shifts the attention to the best candidate modules, as well as predicting the estimated accuracy of currently inactive modules. (iii) Enlarging the variety of correlation filters which cover target drift, blurriness, occlusion, scale changes, and flexible aspect ratio. 
(iv) Validating the robustness and efficiency of the attentional mechanism for visual tracking through a number of experiments. Our method achieves similar performance to non real-time trackers, and state-of-the-art performance amongst real-time trackers.", "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0 and 8.2 respectively, in mean overlap precision, compared to the best existing trackers.", "A major reason leading to tracking failure is the spatial distractions that exhibit similar visual appearances as the target, because they also generate good matches to the target and thus distract the tracker. It is in general very difficult to handle this situation. In a selective attention tracking paradigm, this paper advocates a new approach of discriminative spatial attention that identifies some special regions on the target, called attentional regions (ARs). The ARs show strong discriminative power in their discriminative domains where they do not observe similar things. This paper presents an efficient two-stage method that divides the discriminative domain into a local and a semi-local one. In the local domain, the visual appearance of an attentional region is locally linearized and its discriminative power is closely related to the property of the associated linear manifold, so that a gradient-based search is designed to locate the set of local ARs. Based on that, the set of semi-local ARs are identified through an efficient branch-and-bound procedure that guarantees the optimality. Extensive experiments show that such discriminative spatial attention leads to superior performances in many challenging target tracking tasks.", "Offline training for object tracking has recently shown great potentials in balancing tracking accuracy and speed. However, it is still difficult to adapt an offline trained model to a target tracked online. This work presents a Residual Attentional Siamese Network (RASNet) for high performance object tracking. The RASNet model reformulates the correlation filter within a Siamese tracking framework, and introduces different kinds of the attention mechanisms to adapt the model without updating the model online. 
In particular, by exploiting the offline trained general attention, the target adapted residual attention, and the channel favored feature attention, the RASNet not only mitigates the over-fitting problem in deep network training, but also enhances its discriminative capacity and adaptability due to the separation of representation learning and discriminator learning. The proposed deep architecture is trained from end to end and takes full advantage of the rich spatial temporal information to achieve robust visual tracking. Experimental results on two latest benchmarks, OTB-2015 and VOT2017, show that the RASNet tracker has the state-of-the-art tracking accuracy while runs at more than 80 frames per second." ] }
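The fixed spatial attention mentioned above is easy to make explicit: a 2-D cosine (Hanning) window multiplied onto a correlation response map down-weights responses near the search-region border. The response map below is random toy data standing in for a real DCF output.

```python
import numpy as np

def cosine_window(h, w):
    """2-D cosine (Hanning) window used by DCF trackers to suppress the
    boundary effect, i.e. a fixed spatial attention over the search region."""
    return np.outer(np.hanning(h), np.hanning(w))

response = np.random.rand(31, 31)             # toy correlation response map
attended = response * cosine_window(31, 31)   # center-weighted responses
cy, cx = np.unravel_index(attended.argmax(), attended.shape)  # target peak
```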
1908.04351
2969115771
Convolutional Neural Networks (CNN) have become state-of-the-art in the field of image classification. However, not everything is understood about their inner representations. This paper tackles the interpretability and explainability of the predictions of CNNs for multi-class classification problems. Specifically, we propose a novel visualization method of pixel-wise input attribution called Softmax-Gradient Layer-wise Relevance Propagation (SGLRP). The proposed model is a class discriminate extension to Deep Taylor Decomposition (DTD) using the gradient of softmax to back propagate the relevance of the output probability to the input image. Through qualitative and quantitative analysis, we demonstrate that SGLRP can successfully localize and attribute the regions on input images which contribute to a target object's classification. We show that the proposed method excels at discriminating the target objects class from the other possible objects in the images. We confirm that SGLRP performs better than existing Layer-wise Relevance Propagation (LRP) based methods and can help in the understanding of the decision process of CNNs.
Input modification methods observe changes in the outputs in response to changes in the inputs. This can be done using perturbed variants @cite_0 @cite_40 , masks @cite_36 , or noise @cite_1 . For instance, Zeiler and Fergus @cite_36 create heatmaps based on the drop in prediction probability when input images are masked with patches at different locations. The problem with these methods is that they require exhaustive input modifications and can be computationally costly.
{ "cite_N": [ "@cite_0", "@cite_40", "@cite_1", "@cite_36" ], "mid": [ "1019830208", "2198606573", "2963996492", "1849277567" ], "abstract": [ "The binding specificities of RNA- and DNA-binding proteins are determined from experimental data using a ‘deep learning’ approach.", "DeepSEA, a deep-learning algorithm trained on large-scale chromatin-profiling data, predicts chromatin effects from sequence alone, has single-nucleotide sensitivity and can predict effects of noncoding variants.", "Abstract: With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets." ] }
1908.04351
2969115771
Convolutional Neural Networks (CNN) have become state-of-the-art in the field of image classification. However, not everything is understood about their inner representations. This paper tackles the interpretability and explainability of the predictions of CNNs for multi-class classification problems. Specifically, we propose a novel visualization method of pixel-wise input attribution called Softmax-Gradient Layer-wise Relevance Propagation (SGLRP). The proposed model is a class discriminate extension to Deep Taylor Decomposition (DTD) using the gradient of softmax to back propagate the relevance of the output probability to the input image. Through qualitative and quantitative analysis, we demonstrate that SGLRP can successfully localize and attribute the regions on input images which contribute to a target object's classification. We show that the proposed method excels at discriminating the target objects class from the other possible objects in the images. We confirm that SGLRP performs better than existing Layer-wise Relevance Propagation (LRP) based methods and can help in the understanding of the decision process of CNNs.
Back propagation-based methods trace the contribution of the output backwards through the network to the input. In the classic example, DeconvNet @cite_36 @cite_35 back propagates the output through the network; however, instead of using the Rectified Linear Unit (ReLU) mask from the forward pass, DeconvNet applies ReLU to the back propagated signal itself. Guided Back Propagation @cite_15 is similar to DeconvNet except that it blocks the negative values of both the forward and the backward signals.
{ "cite_N": [ "@cite_36", "@cite_35", "@cite_15" ], "mid": [ "1849277567", "2962851944", "2963382180" ], "abstract": [ "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].", "" ] }
1908.04351
2969115771
Convolutional Neural Networks (CNN) have become state-of-the-art in the field of image classification. However, not everything is understood about their inner representations. This paper tackles the interpretability and explainability of the predictions of CNNs for multi-class classification problems. Specifically, we propose a novel visualization method of pixel-wise input attribution called Softmax-Gradient Layer-wise Relevance Propagation (SGLRP). The proposed model is a class discriminate extension to Deep Taylor Decomposition (DTD) using the gradient of softmax to back propagate the relevance of the output probability to the input image. Through qualitative and quantitative analysis, we demonstrate that SGLRP can successfully localize and attribute the regions on input images which contribute to a target object's classification. We show that the proposed method excels at discriminating the target objects class from the other possible objects in the images. We confirm that SGLRP performs better than existing Layer-wise Relevance Propagation (LRP) based methods and can help in the understanding of the decision process of CNNs.
Using the gradients of activation functions to back propagate relevance can sometimes lead to misleading contribution attributions due to discontinuous or vanishing gradients. To overcome this, instead of using the gradients to back propagate the relevance, Deep Learning Important FeaTures (DeepLIFT) @cite_21 uses the difference between the activations of reference inputs. SHapley Additive exPlanation (SHAP) @cite_25 extends DeepLIFT to include Shapley approximations. Another way to address this problem is the use of Integrated Gradients @cite_26 @cite_27 .
{ "cite_N": [ "@cite_27", "@cite_21", "@cite_25", "@cite_26" ], "mid": [ "2964612343", "2605409611", "2962862931", "2594633041" ], "abstract": [ "", "The purported \"black box\" nature of neural networks is a barrier to adoption in applications where interpretability is essential. Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. By optionally giving separate consideration to positive and negative contributions, DeepLIFT can also reveal dependencies which are missed by other approaches. Scores can be computed efficiently in a single backward pass. We apply DeepLIFT to models trained on MNIST and simulated genomic data, and show significant advantages over gradient-based methods. Video tutorial: http: goo.gl qKb7pL, code: http: goo.gl RM8jvH.", "Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and or better consistency with human intuition than previous approaches.", "We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms— Sensitivity and Implementation Invariance that attribution methods ought to satisfy. We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better." ] }
1908.04351
2969115771
Convolutional Neural Networks (CNN) have become state-of-the-art in the field of image classification. However, not everything is understood about their inner representations. This paper tackles the interpretability and explainability of the predictions of CNNs for multi-class classification problems. Specifically, we propose a novel visualization method of pixel-wise input attribution called Softmax-Gradient Layer-wise Relevance Propagation (SGLRP). The proposed model is a class discriminate extension to Deep Taylor Decomposition (DTD) using the gradient of softmax to back propagate the relevance of the output probability to the input image. Through qualitative and quantitative analysis, we demonstrate that SGLRP can successfully localize and attribute the regions on input images which contribute to a target object's classification. We show that the proposed method excels at discriminating the target objects class from the other possible objects in the images. We confirm that SGLRP performs better than existing Layer-wise Relevance Propagation (LRP) based methods and can help in the understanding of the decision process of CNNs.
Gradient-Weighted CAM (Grad-CAM) @cite_24 is a generalization of CAM that can target any layer and incorporates gradient information into CAM. The problem with CAM-based methods is that they specifically target high-level layers. Therefore, hybrid methods such as Guided Grad-CAM @cite_24 combine the qualities of CAMs with pixel-wise methods such as guided back propagation.
{ "cite_N": [ "@cite_24" ], "mid": [ "2962858109" ], "abstract": [ "We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad- CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are more faithful to the underlying model, and (d) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https: github.com ramprs grad-cam along with a demo on CloudCV [2] and video at youtu.be COjUB9Izk6E." ] }
1908.04351
2969115771
Convolutional Neural Networks (CNN) have become state-of-the-art in the field of image classification. However, not everything is understood about their inner representations. This paper tackles the interpretability and explainability of the predictions of CNNs for multi-class classification problems. Specifically, we propose a novel visualization method of pixel-wise input attribution called Softmax-Gradient Layer-wise Relevance Propagation (SGLRP). The proposed model is a class discriminate extension to Deep Taylor Decomposition (DTD) using the gradient of softmax to back propagate the relevance of the output probability to the input image. Through qualitative and quantitative analysis, we demonstrate that SGLRP can successfully localize and attribute the regions on input images which contribute to a target object's classification. We show that the proposed method excels at discriminating the target objects class from the other possible objects in the images. We confirm that SGLRP performs better than existing Layer-wise Relevance Propagation (LRP) based methods and can help in the understanding of the decision process of CNNs.
There are also many other visualization and explanation methods for neural networks. For example, by observing the maximal activations of layers, it is possible to visualize the contributing regions of feature maps @cite_36 @cite_7 , or attention can be visualized directly @cite_30 @cite_34 . In addition, the hidden layers of networks can be analyzed using lower-dimensional representations, such as dimensionality reduction @cite_8 , Relative Neighbor Graphs (RNG) @cite_28 , matrix factorization @cite_9 , and modular representation by community detection @cite_44 .
{ "cite_N": [ "@cite_30", "@cite_7", "@cite_8", "@cite_36", "@cite_28", "@cite_9", "@cite_44", "@cite_34" ], "mid": [ "2963811535", "1825675169", "2512304460", "1849277567", "2787080997", "2792641098", "2592004655", "2913070379" ], "abstract": [ "", "Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Progress in the field will be further accelerated by the development of better tools for visualizing and interpreting neural nets. We introduce two such tools here. The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream). We have found that looking at live activations that change in response to user input helps build valuable intuitions about how convnets work. The second tool enables visualizing features at each layer of a DNN via regularized optimization in image space. Because previous versions of this idea produced less recognizable images, here we introduce several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations. Both tools are open source and work on a pre-trained convnet with minimal setup.", "In machine learning, pattern classification assigns high-dimensional vectors (observations) to classes based on generalization from examples. Artificial neural networks currently achieve state-of-the-art results in this task. Although such networks are typically used as black-boxes, they are also widely believed to learn (high-dimensional) higher-level representations of the original observations. In this paper, we propose using dimensionality reduction for two tasks: visualizing the relationships between learned representations of observations, and visualizing the relationships between artificial neurons. Through experiments conducted in three traditional image classification benchmark datasets, we show how visualization can provide highly valuable feedback for network designers. For instance, our discoveries in one of these datasets (SVHN) include the presence of interpretable clusters of learned representations, and the partitioning of artificial neurons into groups with apparently related discriminative roles.", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "In past OCR research, different OCR engines are used for different printing types, i.e., machine-printed characters, handwritten characters, and decorated fonts. 
A recent research, however, reveals that convolutional neural networks (CNN) can realize a universal OCR, which can deal with any printing types without pre-classification into individual types. In this paper, we analyze how CNN for universal OCR manage the different printing types. More specifically, we try to find where a handwritten character of a class and a machine-printed character of the same class are \"fused\" in CNN. For analysis, we use two different approaches. The first approach is statistical analysis for detecting the CNN units which are sensitive (or insensitive) to type difference. The second approach is network-based visualization of pattern distribution in each layer. Both analyses suggest the same trend that types are not fully fused in convolutional layers but the distributions of the same class from different types become closer in upper layers.", "", "Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood.In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1)Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network.", "Deep neural network models have recently draw lots of attention, as it consistently produce impressive results in many computer vision tasks such as image classification, object detection, etc. However, interpreting such model and show the reason why it performs quite well becomes a challenging question. In this paper, we propose a novel method to interpret the neural network models with attention mechanism. Inspired by the heatmap visualization, we analyze the relation between classification accuracy with the attention based heatmap. An improved attention based method is also included and illustrate that a better classifier can be interpreted by the attention based heatmap." ] }
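As one concrete instance of the lower-dimensional analyses cited above, the following sketch projects hidden-layer activations onto their top principal components for plotting. This is plain PCA via SVD, standing in for the specific dimensionality-reduction choices of @cite_8 ; the activation matrix is toy data.

```python
import numpy as np

def pca_embed(activations, k=2):
    """Project hidden-layer activations (n_samples, d) onto the top k
    principal components, a simple view of the learned representations."""
    X = activations - activations.mean(axis=0)   # center the features
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T                          # (n_samples, k) embedding

emb = pca_embed(np.random.randn(100, 32), k=2)   # toy 32-d activations
```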
1908.06868
2969140094
Predicting the future of Graph-supported Time Series (GTS) is a key challenge in many domains, such as climate monitoring, finance or neuroimaging. Yet it is a highly difficult problem as it requires to account jointly for time and graph (spatial) dependencies. To simplify this process, it is common to use a two-step procedure in which spatial and time dependencies are dealt with separately. In this paper, we are interested in comparing various linear spatial representations, namely structure-based ones and data-driven ones, in terms of how they help predict the future of GTS. To that end, we perform experiments with various datasets including spontaneous brain activity and raw videos.
When it comes to graphs, a natural approach to defining a latent representation is a structure-based one. It consists in defining a notion of frequency on a graph by analogy with the Fourier Transform (FT) @cite_2 . The so-called Graph Fourier Transform (GFT) represents the signal in another basis, built according to the graph structure without looking at the data at all: the eigenvectors of the Laplacian matrix of the graph are used as the new basis, ordered by increasing eigenvalue. Small eigenvalues intuitively correspond to low frequencies and large eigenvalues to high frequencies. Consequently, to reduce the data dimension, only the first few eigenvectors are kept: the data is projected back using only those eigenvectors, yielding a low-dimensional representation of the signal. However, the analogy with the FT is not complete; for instance, the GFT on a 2-dimensional grid does not match the classical 2D FT @cite_6 .
{ "cite_N": [ "@cite_6", "@cite_2" ], "mid": [ "2963660991", "2101491865" ], "abstract": [ "We propose an extension of Convolutional Neural Networks (CNNs) to graph-structured data, including strided convolutions and data augmentation defined from inferred graph translations. Our method matches the accuracy of state-of-the-art CNNs when applied on images, without any prior about their 2D regular structure. On fMRI data, we obtain a significant gain in accuracy compared with existing graph-based alternatives.", "In applications such as social, energy, transportation, sensor, and neuronal networks, high-dimensional data naturally reside on the vertices of weighted graphs. The emerging field of signal processing on graphs merges algebraic and spectral graph theoretic concepts with computational harmonic analysis to process such signals on graphs. In this tutorial overview, we outline the main challenges of the area, discuss different ways to define graph spectral domains, which are the analogs to the classical frequency domain, and highlight the importance of incorporating the irregular structures of graph data domains when processing signals on graphs. We then review methods to generalize fundamental operations such as filtering, translation, modulation, dilation, and downsampling to the graph setting and survey the localized, multiscale transforms that have been proposed to efficiently extract information from high-dimensional data on graphs. We conclude with a brief discussion of open issues and possible extensions." ] }
1908.06868
2969140094
Predicting the future of Graph-supported Time Series (GTS) is a key challenge in many domains, such as climate monitoring, finance or neuroimaging. Yet it is a highly difficult problem, as it requires jointly accounting for time and graph (spatial) dependencies. To simplify this process, it is common to use a two-step procedure in which spatial and time dependencies are dealt with separately. In this paper, we are interested in comparing various linear spatial representations, namely structure-based ones and data-driven ones, in terms of how they help predict the future of GTS. To that end, we perform experiments with various datasets including spontaneous brain activity and raw videos.
Finding the best graph structure for a given problem is a question that has recently sparked a lot of interest in the literature @cite_7 @cite_11 @cite_12 . In many cases, a good solution consists in using the covariance matrix (or its inverse) as the adjacency matrix of the graph. Following this lead, in this paper, we make use of a semi-geometric graph. It combines the geometry of images captured through a regular grid graph with the covariance of the pixels measured on different frames. As such, the support of the graph remains the same as that of the grid graph, but the weights are replaced by the covariance between corresponding pixels. Therefore, this graph takes into account both the data structure and the data distribution.
{ "cite_N": [ "@cite_12", "@cite_7", "@cite_11" ], "mid": [ "2299462150", "", "2964012239" ], "abstract": [ "Stationarity is a cornerstone property that facilitates the analysis and processing of random signals in the time domain. Although time-varying signals are abundant in nature, in many practical scenarios, the information of interest resides in more irregular graph domains. This lack of regularity hampers the generalization of the classical notion of stationarity to graph signals. This paper proposes a definition of weak stationarity for random graph signals that takes into account the structure of the graph where the random process takes place, while inheriting many of the meaningful properties of the classical time domain definition. Provided that the topology of the graph can be described by a normal matrix, stationary graph processes can be modeled as the output of a linear graph filter applied to a white input. This is shown equivalent to requiring the correlation matrix to be diagonalized by the graph Fourier transform; a fact that is leveraged to define a notion of power spectral density (PSD). Properties of the graph PSD are analyzed and a number of methods for its estimation are proposed. This includes generalizations of nonparametric approaches such as periodograms, window-based average periodograms, and filter banks, as well as parametric approaches, using moving-average, autoregressive, and ARMA processes. Graph stationarity and graph PSD estimation are investigated numerically for synthetic and real-world graph signals.", "", "The construction of a meaningful graph plays a crucial role in the success of many graph-based representations and algorithms for handling structured data, especially in the emerging field of graph signal processing. However, a meaningful graph is not always readily available from the data, nor easy to define depending on the application domain. In particular, it is often desirable in graph signal processing applications that a graph is chosen such that the data admit certain regularity or smoothness on the graph. In this paper, we address the problem of learning graph Laplacians, which is equivalent to learning graph topologies, such that the input data form graph signals with smooth variations on the resulting topology. To this end, we adopt a factor analysis model for the graph signals and impose a Gaussian probabilistic prior on the latent variables that control these signals. We show that the Gaussian prior leads to an efficient representation that favors the smoothness property of the graph signals. We then propose an algorithm for learning graphs that enforces such property and is based on minimizing the variations of the signals on the learned graph. Experiments on both synthetic and real world data demonstrate that the proposed graph learning framework can efficiently infer meaningful graph topologies from signal observations under the smoothness prior." ] }
1908.06868
2969140094
Predicting the future of Graph-supported Time Series (GTS) is a key challenge in many domains, such as climate monitoring, finance or neuroimaging. Yet it is a highly difficult problem, as it requires jointly accounting for time and graph (spatial) dependencies. To simplify this process, it is common to use a two-step procedure in which spatial and time dependencies are dealt with separately. In this paper, we are interested in comparing various linear spatial representations, namely structure-based ones and data-driven ones, in terms of how they help predict the future of GTS. To that end, we perform experiments with various datasets including spontaneous brain activity and raw videos.
In this work, much of the interest in the latent representation is motivated by its use for predicting time sequences. There are several recent works in this area, based on diverse methods (dictionary learning @cite_4 , source separation @cite_3 ). Again, we restrict the study to a simple question: what is the best linear representation, structure-based or data-driven, for predicting future frames using an LSTM?
{ "cite_N": [ "@cite_4", "@cite_3" ], "mid": [ "2803751491", "2888432608" ], "abstract": [ "Frequency-specific patterns of neural activity are traditionally interpreted as sustained rhythmic oscillations, and related to cognitive mechanisms such as attention, high level visual processing or motor control. While alpha waves (8-12 Hz) are known to closely resemble short sinusoids, and thus are revealed by Fourier analysis or wavelet transforms, there is an evolving debate that electromagnetic neural signals are composed of more complex waveforms that cannot be analyzed by linear filters and traditional signal representations. In this paper, we propose to learn dedicated representations of such recordings using a multivariate convolutional sparse coding (CSC) algorithm. Applied to electroencephalography (EEG) or magnetoencephalography (MEG) data, this method is able to learn not only prototypical temporal waveforms, but also associated spatial patterns so their origin can be localized in the brain. Our algorithm is based on alternated minimization and a greedy coordinate descent solver that leads to state-of-the-art running time on long time series. To demonstrate the implications of this method, we apply it to MEG data and show that it is able to recover biological artifacts. More remarkably, our approach also reveals the presence of non-sinusoidal mu-shaped patterns, along with their topographic maps related to the somatosensory cortex.", "We introduce a novel recurrent neural network (RNN) approach to account for temporal dynamics and dependencies in brain networks observed via functional magnetic resonance imaging (fMRI). Our approach directly parameterizes temporal dynamics through recurrent connections, which can be used to formulate blind source separation with a conditional (rather than marginal) independence assumption, which we call RNN-ICA. This formulation enables us to visualize the temporal dynamics of both first order (activity) and second order (directed connectivity) information in brain networks that are widely studied in a static sense, but not well-characterized dynamically. RNN-ICA predicts dynamics directly from the recurrent states of the RNN in both task and resting state fMRI. Our results show both task-related and group-differentiating directed connectivity." ] }
1908.06922
2968896088
When people browse online news, small thumbnail images accompanying links to articles attract their attention and help them to decide which articles to read. As an increasing proportion of online news can be construed as data journalism, we have witnessed a corresponding increase in the incorporation of visualization in article thumbnails. However, there is little research to support alternative design choices for visualization thumbnails, which include resizing, cropping, simplifying, and embellishing charts appearing within the body of the associated article. We therefore sought to better understand these design choices and determine what makes a visualization thumbnail inviting and interpretable. This paper presents our findings from a survey of visualization thumbnails collected online and from conversations with data journalists and news graphics designers. Our study reveals that there exists an uncharted design space, one that is in need of further empirical study. Our work can thus be seen as a first step toward providing structured guidance on how to design thumbnails for data stories.
Prior research shows that the presence of thumbnails in search results helps people locate articles of interest online @cite_19 @cite_21 @cite_8 , particularly when paired with informative titles, text snippets, and URLs. Thumbnails are a particularly important signal of relevance when some links to articles have them and others do not. @cite_19 further showed that presenting thumbnails without accompanying text leads to worse search performance than simply presenting text without a thumbnail. Thumbnails also trigger behavior that would not occur in their absence. For instance, @cite_24 observed that when an email contains a link to a video file, more people click on the link when it is accompanied by a thumbnail.
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_21", "@cite_8" ], "mid": [ "2035004345", "2147252560", "1984274118", "2139182464" ], "abstract": [ "In this Note we summarize our research on increasing the information scent of video recordings that are shared via email in a corporate setting. We compare two types of email messages for sharing recordings: the first containing basic information (e.g. title, speaker, abstract) with a link to the video; the second with the same information plus a set of video thumbnails (hyperlinked to the segments they represent), which are automatically created by video summarization technology. We report on the results of two user studies. The first one compares the quality of the set of thumbnails selected by the technology to sets selected by 31 humans. The second study examines the clickthrough rates for both email formats (with and without hyperlinked thumbnails) as well as gathering subjective feedback via survey. Results indicate that the email messages with the thumbnails drove significantly higher clickthrough rates than the messages without, even though people clicked on the main video link more frequently than the thumbnails. Survey responses show that users found the email with the thumbnail set significantly more appealing and novel.", "We investigated the efficacy of visual and textual web page previews in predicting the helpfulness of web pages related to a specific topic. We ran two studies in the usability lab and collected data through an online survey. Participants (total of 245) were asked to rate the expected helpfulness of a web page based on a preview (four different thumbnail variations: a textual web page summary, a thumbnail title URL combination, a title URL combination). In the lab studies, the same participants also rated the helpfulness of the actual web pages themselves. In the online study, the web page ratings were collected from a separate group of participants. Our results show that thumbnails add information about the relevance of web pages that is not available in the textual summaries of web pages (title, snippet & URL). However, showing only thumbnails, with no textual information, results in poorer performance than showing only textual summaries. The prediction inaccuracy caused by textual vs. visual previews was different: textual previews tended to make users overestimate the helpfulness of web pages, whereas thumbnails made users underestimate the helpfulness of web pages in most cases. In our study, the best performance was obtained by combining sufficiently large thumbnails (at least 200x200 pixels) with page titles and URLs - and it was better to make users focus primarily on the thumbnail by placing the title and URL below the thumbnail. Our studies highlighted four key aspects that affect the performance of previews: the visual textual mode of the previews, the zoom level and size of the thumbnail, as well as the positioning of key information elements.", "We describe an empirical evaluation of the utility of thumbnail previews in web search results. Results pages were constructed to show text-only summaries, thumbnail previews only, or the combination of text summaries and thumbnail previews. We found that in the combination case, users were able to make more accurate decisions about the potential relevance of results than in either of the other versions, with hardly any increase in speed of processing the page as a whole.", "We introduce a technique for creating novel, textually-enhanced thumbnails of Web pages. 
These thumbnails combine the advantages of image thumbnails and text summaries to provide consistent performance on a variety of tasks. We conducted a study in which participants used three different types of summaries (enhanced thumbnails, plain thumbnails, and text summaries) to search Web pages to find several different types of information. Participants took an average of 67, 86, and 95 seconds to find the answer with enhanced thumbnails, plain thumbnails, and text summaries, respectively. We found a strong effect of question category. For some questions, text outperformed plain thumbnails, while for other questions, plain thumbnails outperformed text. Enhanced thumbnails (which combine the features of text summaries and plain thumbnails) were more consistent than either text summaries or plain thumbnails, having for all categories the best performance or performance that was statistically indistinguishable from the best." ] }
1908.06922
2968896088
When people browse online news, small thumbnail images accompanying links to articles attract their attention and help them to decide which articles to read. As an increasing proportion of online news can be construed as data journalism, we have witnessed a corresponding increase in the incorporation of visualization in article thumbnails. However, there is little research to support alternative design choices for visualization thumbnails, which include resizing, cropping, simplifying, and embellishing charts appearing within the body of the associated article. We therefore sought to better understand these design choices and determine what makes a visualization thumbnail inviting and interpretable. This paper presents our findings from a survey of visualization thumbnails collected online and from conversations with data journalists and news graphics designers. Our study reveals that there exists an uncharted design space, one that is in need of further empirical study. Our work can thus be seen as a first step toward providing structured guidance on how to design thumbnails for data stories.
Beyond web search and email reading, thumbnails also appear in the context of navigating other forms of media, from file systems @cite_9 to documents @cite_13 and videos @cite_0 . Prior work in the visualization and visual analytics community has also incorporated thumbnails into the sensemaking process as a means of leveraging the unique advantages of spatial memory @cite_4 @cite_6 @cite_12 .
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_6", "@cite_0", "@cite_13", "@cite_12" ], "mid": [ "2537595510", "2069324208", "", "2161337144", "2164620969", "" ], "abstract": [ "Sensemaking is described as the process in which people collect, organize and create representations of information, all centered around some problem they need to understand. People often get lost when solving complicated tasks using big datasets over long periods of exploration and analysis. They may forget what they have done, are unaware of where they are in the context of the overall task, and are unsure where to continue. In this paper, we introduce a tool, SenseMap, to address these issues in the context of browser-based online sensemaking. We conducted a semi-structured interview with nine participants to explore their behaviors in online sensemaking with existing browser functionality. A simplified sensemaking model based on Pirolli and Card's model is derived to better represent the behaviors we found: users iteratively collect information sources relevant to the task, curate them in a way that makes sense, and finally communicate their findings to others. SenseMap automatically captures provenance of user sensemaking actions and provides multi-linked views to visualize the collected information and enable users to curate and communicate their findings. To explore how SenseMap is used, we conducted a user study in a naturalistic work setting with five participants completing the same sensemaking task related to their daily work activities. All participants found the visual representation and interaction of the tool intuitive to use. Three of them engaged with the tool and produced successful outcomes. It helped them to organize information sources, to quickly find and navigate to the sources they wanted, and to effectively communicate their findings.", "Effective management of documents on computers has been a central user interface problem for many years. One common approach involves using 2D spatial layouts of icons representing the documents, particularly for information workspace tasks. This approach takes advantage of human 2D spatial cognition. More recently, several 3D spatial layouts have engaged 3D spatial cognition capabilities. Some have attempted to use spatial memory in 3D virtual environments. However, there has been no proof to date that spatial memory works the same way in 3D virtual environments as it does in the real world. We describe a new technique for document management called the Data Mountain, which allows users to place documents at arbitrary positions on an inclined plane in a 3D desktop virtual environment using a simple 2D interaction technique. We discuss how the design evolved in response to user feedback. We also describe a user study that shows that the Data Mountain does take advantage of spatial memory. Our study shows that the Data Mountain has statistically reliable advantages over the Microsoft Internet Explorer Favorites mechanism for managing documents of interest in an information workspace.", "", "Online streaming video systems have become extremely popular, yet navigating to target scenes of interest can be a challenge. While recent techniques have been introduced to enable real-time seeking, they break down for large videos, where scrubbing the timeline causes video frames to skip and flash too quickly to be comprehendible. We present Swifter, a new video scrubbing technique that displays a grid of pre-cached thumbnails during scrubbing actions. 
In a series of studies, we first investigate possible design variations of the Swifter technique, and the impact of those variations on its performance. Guided by these results we compare an implementation of Swifter to the previously published Swift technique, in addition to the approaches utilized by YouTube and Netfilx. Our results show that Swifter significantly outperforms each of these techniques in a scene locating task, by a factor of up to 48 .", "Scrolling is the standard way to navigate through many types of digital documents. However, moving more than a few pages can be slow because all scrolling techniques constrain visual search to only a small document region. To improve document navigation, we developed Space-Filling Thumbnails (SFT), an overview display that eliminates most scrolling. SFT provides two views: a standard page view for reading, and a thumbnail view that shows all pages. We tested SFT in three experiments that involved finding pages in documents. The first study (n=13) compared seven current scrolling techniques, and showed that SFT is significantly faster than the other methods. The second and third studies (n=32 and n=14) were detailed comparisons of SFT with thumbnail-enhanced scrollbars (TES), which performed well in the first experiment. SFT was faster than TES across all document types and lengths, particularly when tasks involved revisitation. In addition, SFT was strongly preferred by participants.", "" ] }
1908.06922
2968896088
When people browse online news, small thumbnail images accompanying links to articles attract their attention and help them to decide which articles to read. As an increasing proportion of online news can be construed as data journalism, we have witnessed a corresponding increase in the incorporation of visualization in article thumbnails. However, there is little research to support alternative design choices for visualization thumbnails, which include resizing, cropping, simplifying, and embellishing charts appearing within the body of the associated article. We therefore sought to better understand these design choices and determine what makes a visualization thumbnail inviting and interpretable. This paper presents our findings from a survey of visualization thumbnails collected online and from conversations with data journalists and news graphics designers. Our study reveals that there exists an uncharted design space, one that is in need of further empirical study. Our work can thus be seen as a first step toward providing structured guidance on how to design thumbnails for data stories.
When browsing unfamiliar content, such as in the case of online news reading, thumbnails must compete for the reader's attention with one another and with other content. We therefore turn to other prior research examining specific aspects of thumbnail design. Several factors appear to impact how thumbnails draw attention and their ultimate utility, such as thumbnail size and the inclusion of text within the thumbnail. @cite_7 studied thumbnail size, concluding that the optimal thumbnail size depends on the tasks thumbnails are intended to support. For instance, they posit that a thumbnail should be larger than 96x96 pixels in order to trigger recognition among those revisiting a page containing multiple thumbnails.
{ "cite_N": [ "@cite_7" ], "mid": [ "1563293148" ], "abstract": [ "The selectable lists of pages offered by Web browsers’ history and bookmark facilities ostensibly make it easier for people to return to previously visited pages. These lists show the pages as abstractions, typically as truncated titles and URLs, and more rarely as small thumbnail images. Yet we have little knowledge of how recognisable these representations really are. Consequently, we carried out a study that compared the recognisability of thumbnails between various image sizes, and of titles and URLs between various string sizes. Our results quantify the trade-off between the size of these representations and their recognisability. These findings directly contribute to how history and bookmark lists should be designed." ] }
1908.06922
2968896088
When people browse online news, small thumbnail images accompanying links to articles attract their attention and help them to decide which articles to read. As an increasing proportion of online news can be construed as data journalism, we have witnessed a corresponding increase in the incorporation of visualization in article thumbnails. However, there is little research to support alternative design choices for visualization thumbnails, which include resizing, cropping, simplifying, and embellishing charts appearing within the body of the associated article. We therefore sought to better understand these design choices and determine what makes a visualization thumbnail inviting and interpretable. This paper presents our findings from a survey of visualization thumbnails collected online and from conversations with data journalists and news graphics designers. Our study reveals that there exists an uncharted design space, one that is in need of further empirical study. Our work can thus be seen as a first step toward providing structured guidance on how to design thumbnails for data stories.
Other prior research has considered how to automatically generate thumbnails using photos and images appearing in the article, such as by cropping, resizing, or selecting the most salient excerpts from them @cite_5 @cite_30 . More recently, @cite_37 introduced an algorithm to select highly salient and evocative thumbnails for videos. As this research was conducted with generic images, we see our work as a step toward the automatic generation of visualization thumbnails. This first requires a better understanding of current practices in thumbnail design.
{ "cite_N": [ "@cite_30", "@cite_5", "@cite_37" ], "mid": [ "2060502770", "2115498043", "2512435841" ], "abstract": [ "Thumbnail images provide users of image retrieval and browsing systems with a method for quickly scanning large numbers of images. Recognizing the objects in an image is important in many retrieval tasks, but thumbnails generated by shrinking the original image often render objects illegible. We study the ability of computer vision systems to detect key components of images so that automated cropping, prior to shrinking, can render objects more recognizable. We evaluate automatic cropping techniques 1) based on a general method that detects salient portions of images, and 2) based on automatic face detection. Our user study shows that these methods result in small thumbnails that are substantially more recognizable and easier to find in the context of visual search.", "Current web search engines return result pages containing mostly text summary even though the matched web pages may contain informative pictures. A text excerpt (i.e. snippet) is generated by selecting keywords around the matched query terms for each returned page to provide context for user's relevance judgment. However, in many scenarios, we found that the pictures in web pages, if selected properly, could be added into search result pages and provide richer contextual description because a picture is worth a thousand words. Such new summary is named as image excerpts. By well designed user study, we demonstrate image excerpts can help users make much quicker relevance judgment of search results for a wide range of query types. To implement this idea, we propose a practicable approach to automatically generate image excerpts in the result pages by considering the dominance of each picture in each web page and the relevance of the picture to the query. We also outline an efficient way to incorporate image excerpts in web search engines. Web search engines can adopt our approach by slightly modifying their index and inserting a few low cost operations in their workflow. Our experiments on a large web dataset indicate the performance of the proposed approach is very promising.", "Thumbnails play such an important role in online videos. As the most representative snapshot, they capture the essence of a video and provide the first impression to the viewers; ultimately, a great thumbnail makes a video more attractive to click and watch. We present an automatic thumbnail selection system that exploits two important characteristics commonly associated with meaningful and attractive thumbnails: high relevance to video content and superior visual aesthetic quality. Our system selects attractive thumbnails by analyzing various visual quality and aesthetic metrics of video frames, and performs a clustering analysis to determine the relevance to video content, thus making the resulting thumbnails more representative of the video. On the task of predicting thumbnails chosen by professional video editors, we demonstrate the effectiveness of our system against six baseline methods, using a real-world dataset of 1,118 videos collected from Yahoo Screen. In addition, we study what makes a frame a good thumbnail by analyzing the statistical relationship between thumbnail frames and non-thumbnail frames in terms of various image quality features. 
Our study suggests that the selection of a good thumbnail is highly correlated with objective visual quality metrics, such as the frame texture and sharpness, implying the possibility of building an automatic thumbnail selection system based on visual aesthetics." ] }
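As a toy illustration of the saliency-driven cropping idea surveyed above, the following sketch scores pixels by local gradient magnitude and crops the fixed-size window with the largest total saliency. The cited systems use far stronger saliency and face-detection models; this is only a minimal stand-in, and the function name is hypothetical.

```python
import numpy as np

def crop_most_salient(img, th, tw):
    """img: 2D grayscale array; (th, tw): thumbnail height and width.
    Returns the th x tw crop with the largest total saliency, where
    saliency is approximated by local gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    sal = np.hypot(gx, gy)
    # Integral image so each window sum is O(1).
    I = np.pad(sal, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = sal.shape
    best, best_ij = -1.0, (0, 0)
    for i in range(h - th + 1):
        for j in range(w - tw + 1):
            s = I[i + th, j + tw] - I[i, j + tw] - I[i + th, j] + I[i, j]
            if s > best:
                best, best_ij = s, (i, j)
    i, j = best_ij
    return img[i:i + th, j:j + tw]

# Toy usage: crop a 32x48 thumbnail out of a random 'image'.
thumb = crop_most_salient(np.random.rand(240, 320), 32, 48)
```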
1908.06922
2968896088
When people browse online news, small thumbnail images accompanying links to articles attract their attention and help them to decide which articles to read. As an increasing proportion of online news can be construed as data journalism, we have witnessed a corresponding increase in the incorporation of visualization in article thumbnails. However, there is little research to support alternative design choices for visualization thumbnails, which include resizing, cropping, simplifying, and embellishing charts appearing within the body of the associated article. We therefore sought to better understand these design choices and determine what makes a visualization thumbnail inviting and interpretable. This paper presents our findings from a survey of visualization thumbnails collected online and from conversations with data journalists and news graphics designers. Our study reveals that there exists an uncharted design space, one that is in need of further empirical study. Our work can thus be seen as a first step toward providing structured guidance on how to design thumbnails for data stories.
Visualization is increasingly prevalent in news media @cite_36 . In this context, the communicative intent of visualization often leads to different design choices than those used in the context of data analysis @cite_33 . As a result, we encounter substantial use of graphical and text-based annotation @cite_26 . We also see the embellishment of charts @cite_38 @cite_3 with human-recognizable objects @cite_10 and the inclusion of graphics not related to the data @cite_20 . These noticeable aesthetic design choices @cite_39 and additional layers of annotation and embellishment may contribute to positive first impressions with an information graphic @cite_17 , and there is evidence to suggest that they increase reader comprehension @cite_28 and memorability @cite_34 @cite_25 @cite_2 . Hullman and Adar @cite_31 argue that while some embellishments make information graphics more difficult to interpret, their judicious application may help readers comprehend and recall content in some cases.
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_33", "@cite_36", "@cite_28", "@cite_3", "@cite_39", "@cite_2", "@cite_31", "@cite_34", "@cite_10", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2120326355", "2596412070", "", "", "2068562326", "1532276603", "1581627361", "1952173784", "", "2168629730", "2156349886", "2054901814", "1898893341", "" ], "abstract": [ "Guidelines for designing information charts (such as bar charts) often state that the presentation should reduce or remove 'chart junk' - visual embellishments that are not essential to understanding the data. In contrast, some popular chart designers wrap the presented data in detailed and elaborate imagery, raising the questions of whether this imagery is really as detrimental to understanding as has been proposed, and whether the visual embellishment may have other benefits. To investigate these issues, we conducted an experiment that compared embellished charts with plain ones, and measured both interpretation accuracy and long-term recall. We found that people's accuracy in describing the embellished charts was no worse than for plain charts, and that their recall after a two-to-three-week gap was significantly better. Although we are cautious about recommending that all charts be produced in this style, our results question some of the premises of the minimalist approach to chart design.", "Annotation plays an important role in conveying key points in visual data-driven storytelling; it helps presenters explain and emphasize core messages and specific data. However, the visualization research community has a limited understanding of annotation and its role in data-driven storytelling, and existing charting software provides limited support for creating annotations. In this paper, we characterize a design space of chart annotations, one informed by a survey of 106 annotated charts published by six prominent news graphics desks. Using this design space, we designed and developed ChartAccent, a tool that allows people to quickly and easily augment charts via a palette of annotation interactions that generate manual and data-driven annotations. We also report on a study in which participants reproduced a series of annotated charts using ChartAccent, beginning with unadorned versions of the same charts. Finally, we discuss the lessons learned during the process of designing and evaluating ChartAccent, and suggest directions for future research.", "", "", "Reading a visualization can involve a number of tasks such as extracting, comparing or aggregating numerical values. Yet, most of the charts that are published in newspapers, reports, books, and on the Web only support a subset of these tasks. In this paper we introduce graphical overlays-visual elements that are layered onto charts to facilitate a larger set of chart reading tasks. These overlays directly support the lower-level perceptual and cognitive processes that viewers must perform to read a chart. We identify five main types of overlays that support these processes; the overlays can provide (1) reference structures such as gridlines, (2) highlights such as outlines around important marks, (3) redundant encodings such as numerical data labels, (4) summary statistics such as the mean or max and (5) annotations such as descriptive text for context. We then present an automated system that applies user-chosen graphical overlays to existing chart bitmaps. 
Our approach is based on the insight that generating most of these graphical overlays only requires knowing the properties of the visual marks and axes that encode the data, but does not require access to the underlying data values. Thus, our system analyzes the chart bitmap to extract only the properties necessary to generate the desired overlay. We also discuss techniques for generating interactive overlays that provide additional controls to viewers. We demonstrate several examples of each overlay type for bar, pie and line charts.", "As data visualization becomes further intertwined with the field of graphic design and information graphics, small graphical alterations are made to many common chart formats. Despite the growing prevalence of these embellishments, their effects on communication of the charts' data is unknown. From an overview of the design space, we have outlined some of the common embellishments that are made to bar charts. We have studied the effects of these chart embellishments on the communication of the charts' data through a series of user studies on Amazon's Mechanical Turk platform. The results of these studies lead to a better understanding of how each chart type is perceived, and help provide guiding principles for the graphic design of charts.", "This paper reports on a between-subject, comparative online study of three information visualization demonstrators that each displayed the same dataset by way of an identical scatterplot technique, yet were different in style in terms of visual and interactive embellishment. We validated stylistic adherence and integrity through a separate experiment in which a small cohort of participants assigned our three demonstrators to predefined groups of stylistic examples, after which they described the styles with their own words. From the online study, we discovered significant differences in how participants execute specific interaction operations, and the types of insights that followed from them. However, in spite of significant differences in apparent usability, enjoyability and usefulness between the style demonstrators, no variation was found on the self-reported depth, expert-rated depth, confidence or difficulty of the resulting insights. Three different methods of insight analysis have been applied, revealing how style impacts the creation of insights, ranging from higher-level pattern seeking to a more reflective and interpretative engagement with content, which is what underlies the patterns. As this study only forms the first step in determining how the impact of style in information visualization could be best evaluated, we propose several guidelines and tips on how to gather, compare and categorize insights through an online evaluation study, particularly in terms of analyzing the concise, yet wide variety of insights and observations in a trustworthy and reproducable manner.", "In this paper we move beyond memorability and investigate how visualizations are recognized and recalled. For this study we labeled a dataset of 393 visualizations and analyzed the eye movements of 33 participants as well as thousands of participant-generated text descriptions of the visualizations. This allowed us to determine what components of a visualization attract people's attention, and what information is encoded into memory. 
Our findings quantitatively support many conventional qualitative design guidelines, including that (1) titles and supporting text should convey the message of a visualization, (2) if used appropriately, pictograms do not interfere with understanding and can improve recognition, and (3) redundancy helps effectively communicate the message. Importantly, we show that visualizations memorable “at-a-glance” are also capable of effectively conveying the message of the visualization. Thus, a memorable visualization is often also an effective one.", "", "In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the benefits of using rhetorical illustrations or embellishments in visualization have so far been inconclusive. In this work, we report an empirical study to evaluate hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces “divided attention”, and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization.", "Although the infographic and design communities have used simple pictographic representations for decades, it is still unclear whether they can make visualizations more effective. Using simple charts, we tested how pictographic representations impact (1) memory for information just viewed, as well as under the load of additional information, (2) speed of finding information, and (3) engagement and preference in seeking out these visualizations. We find that superfluous images can distract. But we find no user costs -- and some intriguing benefits -- when pictographs are used to represent the data.", "An ongoing debate in the Visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: 'What makes a visualization memorable?' We ran the largest scale visualization study to date using 2,070 single-panel visualizations, categorized with visualization type (e.g., bar chart, line graph, etc.), collected from news media sites, government reports, scientific journals, and infographic sources. Each visualization was annotated with additional attributes, including ratings for data-ink ratios and visual densities. Using Amazon's Mechanical Turk, we collected memorability scores for hundreds of these visualizations, and discovered that observers are consistent in which visualizations they find memorable and forgettable. 
We find intuitive results (e.g., attributes like color and the inclusion of a human recognizable object enhance memorability) and less intuitive results (e.g., common graphs are less memorable than unique visualization types). Altogether our findings suggest that quantifying memorability is a general metric of the utility of information, an essential step towards determining how to design effective visualizations.", "While information visualization frameworks and heuristics have traditionally been reluctant to include acquired codes of meaning, designers are making use of them in a wide variety of ways. Acquired codes leverage a user's experience to understand the meaning of a visualization. They range from figurative visualizations which rely on the reader's recognition of shapes, to conventional arrangements of graphic elements which represent particular subjects. In this study, we used content analysis to codify acquired meaning in visualization. We applied the content analysis to a set of infographics and data visualizations which are exemplars of innovative and effective design. 88 of the infographics and 71 of data visualizations in the sample contain at least one use of figurative visualization. Conventions on the arrangement of graphics are also widespread in the sample. In particular, a comparison of representations of time and other quantitative data showed that conventions can be specific to a subject. These results suggest that there is a need for information visualization research to expand its scope beyond perceptual channels, to include social and culturally constructed meaning. Our paper demonstrates a viable method for identifying figurative techniques and graphic conventions and integrating them into heuristics for visualization design.", "" ] }
1908.07106
2969508838
A generalized '@math puzzle' consists of an @math numbered grid, with one missing number. A move in the game switches the position of the empty square with the position of one of its neighbors. We solve Diaconis' '15 puzzle problem' by proving that the asymptotic total variation mixing time of the board is at least order @math when the board is given periodic boundary conditions and when random moves are made. We demonstrate that for any @math with @math , the number of fixed points after @math moves converges to a Poisson distribution of parameter 1. The order of total variation mixing time for this convergence is @math without cut-off. We also prove an upper bound of order @math for the total variation mixing time.
Much of the prior work regarding @math puzzles has focused on sorting strategies for a puzzle in general position, and on the hardness of finding shortest sorting algorithms. For instance, it is known that any @math puzzle may be returned to sorted order in order @math steps (and fewer are not always possible) @cite_14 , and that finding the shortest solution is NP-hard @cite_6 , @cite_13 .
{ "cite_N": [ "@cite_14", "@cite_13", "@cite_6" ], "mid": [ "", "196900508", "2149951504" ], "abstract": [ "", "Following Wilson (J. Comb. Th. (B), 1975), Johnson (J. of Alg., 1983), and Kornhauser, Miller and Spirakis (25th FOCS, 1984), we consider a game that consists of moving distinct pebbles along the edges of an undirected graph. At most one pebble may reside in each vertex at any time, and it is only allowed to move one pebble at a time (which means that the pebble must be moved to a previously empty vertex). We show that the problem of finding the shortest sequence of moves between two given \"pebble configuations\" is NP-Hard.", "The 8-puzzle and the 15-puzzle have been used for many years as a domain for testing heuristic search techniques. From experience it is known that these puzzles are \"difficult\" and therefore useful for testing search techniques. In this paper we give strong evidence that these puzzles are indeed good test problems. We extend the 8-puzzle and the 15-puzzle to a nxn board and show that finding a shortest solution for the extended puzzle is NP-hard and thus computationally infeasible. We also present an approximation algorithm for transforming boards that is guaranteed to use no more than c L (SP) moves, where L (SP) is the length of the shortest solution and c is a constant which is independent of the given boards and their size n." ] }
1908.07106
2969508838
A generalized '@math puzzle' consists of an @math numbered grid, with one missing number. A move in the game switches the position of the empty square with the position of one of its neighbors. We solve Diaconis' '15 puzzle problem' by proving that the asymptotic total variation mixing time of the board is at least order @math when the board is given periodic boundary conditions and when random moves are made. We demonstrate that for any @math with @math , the number of fixed points after @math moves converges to a Poisson distribution of parameter 1. The order of total variation mixing time for this convergence is @math without cut-off. We also prove an upper bound of order @math for the total variation mixing time.
A host of random walks on permutation groups have been studied via a variety of techniques, including representation theory @cite_8 , @cite_15 , @cite_10 , and couplings @cite_11 . Our method for the upper bound, which uses a three-transitive group action, is based on @cite_9 . So far as we are aware, the method of using renewal theory together with a local limit theorem to prove the lower bound is new. The comparison techniques used in the proof of Theorem are partly based on @cite_17 .
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_17", "@cite_15", "@cite_10", "@cite_11" ], "mid": [ "2056389572", "2963915337", "1976418263", "2963409322", "", "1922672875" ], "abstract": [ "", "Let g, h be a random pair of generators of G=Sym(n) or G=Alt(n). We show that, with probability tending to 1 as n→∞, (a) the diameter of G with respect to S= g,h,g−1,h−1 is at most O(n2(log⁡n)c), and (b) the mixing time of G with respect to S is at most O(n3(log⁡n)c). (Both c and the implied constants are absolute.) These bounds are far lower than the strongest worst-case bounds known (in Helfgott–Seress, 2013); they roughly match the worst known examples. We also give an improved, though still non-constant, bound on the spectral gap. Our results rest on a combination of the algorithm in (Babai–Beals–Seress, 2004) and the fact that the action of a pair of random permutations is almost certain to act as an expander on l-tuples, where l is an arbitrary constant (, 1998).", "By symmetry, P has eigenvalues 1 = I03 > I381 > ?> I 31xI- 1 2 -1. This paper develops methods for getting upper and lower bounds on 8i3 by comparison with a second reversible chain on the same state space. This extends the ideas introduced in Diaconis and Saloff-Coste (1993), where random walks on finite groups were considered. The bounds involve geometric properties such as the diameter and covering number of an associated graph along the lines of Diaconis and Stroock (1991). The main application gives a sharp upper bound on the second eigenvalue of the symmetric exclusion process. Thus, let S0 be a connected undirected graph with n vertices. For simplicity, we assume in this introduction that SW is regular. To start, r unlabelled particles are placed in an initial configuration, 1 < r < n. At each step, a particle is chosen at random; then one of the neighboring sites of this particle is chosen at random. If the neighboring site is unoccupied, the chosen particle is moved there; if the neighboring site is occupied, the system stays as it was. This is a reversible Markov chain on the r-sets of 1, 2, . . ., n with uniform stationary distribution. Liggett (1985) gives background and motivation (he focuses on infinite systems). Fill (1991) gives bounds on the second eigenvalue of the labeled exclusion process on the finite circle ZZn 1 We study this chain by comparison with a second Markov chain on r-sets that proceeds by picking a particle at random, picking an unoccupied site at random (not necessarily a neighboring site) and moving the particle to the unoccupied site. This is a well studied chain (the Bernoulli-Laplace model for diffusion). Its eigenvalues are known. We show that the comparison techniques apply to give upper bounds on the eigenvalues of the exclusion", "We study the random walk on the symmetric group (S_n ) generated by the conjugacy class of cycles of length k. We show that the convergence to uniform measure of this walk has a cut-off in total variation distance after ( n k n ) steps, uniformly in (k = o(n) ) as (n ). The analysis follows from a new asymptotic estimation of the characters of the symmetric group evaluated at cycles.", "", "We prove a conjecture raised by the work of Diaconis and Shahshahani (1981) about the mixing time of random walks on the permutation group induced by a given conjugacy class. To do this we exploit a connection with coalescence and fragmentation processes and control the Kantorovitch distance by using a variant of a coupling due to Oded Schramm. 
Recasting our proof in the language of Ricci curvature, our proof establishes the occurrence of a phase transition, which takes the following form in the case of random transpositions: at time @math , the curvature is asymptotically zero for @math and is strictly positive for @math ." ] }
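The random walk studied in this record is easy to simulate, which makes the fixed-point statistic concrete. The sketch below implements the move dynamics from the abstract (the empty square swaps with a uniformly random neighbour on an n x n torus) and counts fixed points; the step count is a heuristic choice, since the mixing-time exponents are elided as @math in this dump, and the snippet is an illustration rather than anything from the paper's proofs.

```python
import random

def random_puzzle_walk(n, steps, seed=0):
    """n x n board, periodic boundary; each step swaps the empty cell
    with a uniformly random (torus) neighbour."""
    rng = random.Random(seed)
    board = list(range(n * n))      # board[cell] = tile id; last tile = hole
    pos = n * n - 1                 # hole starts in the last cell
    for _ in range(steps):
        i, j = divmod(pos, n)
        di, dj = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        npos = ((i + di) % n) * n + (j + dj) % n
        board[pos], board[npos] = board[npos], board[pos]
        pos = npos
    return board

def fixed_points(board):
    """Non-hole tiles that sit in their starting cell."""
    hole = len(board) - 1
    return sum(1 for cell, tile in enumerate(board) if cell == tile != hole)

# Heuristic step count; the paper's mixing-time exponents are elided above.
n = 8
print(fixed_points(random_puzzle_walk(n, steps=100 * n ** 4)))
```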
1908.07085
2971040543
We present a learning-based method to estimate the object bounding box from its 2D bird's-eye view (BEV) LiDAR points. Our method, entitled BoxNet, exploits a simple deep neural network that can efficiently handle unordered points. The method takes as input the 2D coordinates of all the points and the output is a vector consisting of both the box pose (position and orientation in LiDAR coordinate system) and its size (width and length). In order to deal with the angle discontinuity problem, we propose to estimate the double-angle sinusoidal values rather than the angle itself. We also predict the center relative to the point cloud mean to boost the performance of estimating the location of the box. The proposed method does not rely on the ordering of points as in many existing approaches, and can accurately predict the actual size of the bounding box based on the prior information that is obtained from the training data. BoxNet is validated using the KITTI 3D object dataset, with significant improvement compared with the state-of-the-art non-learning based methods.
We omit the extensive literature review of 3D object detection methods, in which bounding box regression is considered an essential component. Our proposed method only focuses on the 2D case and does not target an end-to-end solution. Therefore, it is in fact an intermediate step in the pipeline of 3D object detection. The goal is to provide accurate object bounding boxes for subsequent navigation and planning tasks. From this perspective, the problem defined in this paper is closely related to the L-shape fitting of 2D laser scanner data (or BEV LiDAR point clouds) in modeling a vehicle @cite_14 @cite_9 .
{ "cite_N": [ "@cite_9", "@cite_14" ], "mid": [ "2296035016", "2157664516" ], "abstract": [ "Situational awareness is crucial for autonomous driving in urban environments. This paper describes moving vehicle tracking module that we developed for our autonomous driving robot Junior. The robot won second place in the Urban Grand Challenge, an autonomous driving race organized by the U.S. Government in 2007. The tracking module provides reliable tracking of moving vehicles from a high-speed moving platform using laser range finders. Our approach models both dynamic and geometric properties of the tracked vehicles and estimates them using a single Bayes filter per vehicle. We also show how to build efficient 2D representations out of 3D range data and how to detect poorly visible black vehicles. Experimental validation includes the most challenging conditions presented at the UGC as well as other urban settings.", "The capability to use a moving sensor to detect moving objects and predict their future path enables both collision warning systems and autonomous navigation. This paper describes a system that combines linear feature extraction, tracking and a motion evaluator to accurately estimate motion of vehicles and pedestrians with a low rate of false motion reports. The tracker was used in a prototype collision warning system that was tested on two transit buses during 7000 km of regular passenger service" ] }
1908.06550
2964221281
Abstract In two earlier papers we derived congruence formats with regard to transition system specifications for weak semantics on the basis of a decomposition method for modal formulas. The idea is that a congruence format for a semantics must ensure that the formulas in the modal characterisation of this semantics are always decomposed into formulas that are again in this modal characterisation. The stability and divergence requirements that are imposed on many of the known weak semantics have so far been outside the realm of this method. Stability refers to the absence of a τ-transition. We show, using the decomposition method, how congruence formats can be relaxed for weak semantics that are stability-respecting. This relaxation for instance brings the priority operator within the range of the stability-respecting branching bisimulation format. Divergence, which refers to the presence of an infinite sequence of τ-transitions, escapes the inductive decomposition method. We circumvent this problem by proving that a congruence format for a stability-respecting weak semantics is also a congruence format for its divergence-preserving counterpart.
In @cite_30 @cite_0 he employs ordered SOS (OSOS) TSSs @cite_4 . An OSOS TSS allows no negative premises, but includes priorities between rules: @math means that @math can only be applied if @math cannot. An OSOS specification can be seen as, or translated into, a GSOS specification with negative premises. Each rule @math with exactly one higher-priority rule @math is replaced by a number of rules, one for each (positive) premise of @math ; in the copy of @math , this premise is negated. For a rule @math with multiple higher-priority rules @math , this replacement is carried out for each such @math .
{ "cite_N": [ "@cite_30", "@cite_0", "@cite_4" ], "mid": [ "82266486", "1540576960", "2152688610" ], "abstract": [ "", "We present a general class of process languages for rooted eager bisimulation preorder and prove its congruence result. Also, we present classes of process languages for the rooted versions of several other weak preorders. The process languages we propose are defined by the Ordered SOS method which combines the traditional SOS approach, based on transition rules, with the ordering on transition rules. This new feature specifies the order of application of rules when deriving transitions of process terms. We support and illustrate our results with examples of process operators and process languages which benefit from the OSOS formulation.", "Structured Operational Semantics (SOS) is a popular method for defining semantics by means of transition rules. An important feature of SOS rules is negative premises, which are crucial in the definitions of such phenomena as priority mechanisms and time-outs. However, the inclusion of negative premises in SOS rules also introduces doubts as to the preferred meaning of SOS specifications. Orderings on SOS rules were proposed by Phillips and Ulidowski as an alternative to negative premises. Apart from the definition of the semantics of positive GSOS rules with orderings, the meaning of more general types of SOS rules with orderings has not been studied hitherto. This paper presents several candidates for the meaning of general SOS rules with orderings and discusses their conformance to our intuition for such rules. We take two general frameworks (rule formats) for SOS with negative premises and SOS with orderings, and present semantics-preserving translations between them with respect to our preferred notion of semantics. Thanks to our semantics-preserving translation, we take existing congruence meta-results for strong bisimilarity from the setting of SOS with negative premises into the setting of SOS with orderings. We further compare the expressiveness of rule formats for SOS with orderings and SOS with negative premises. The paper contains also many examples that illustrate the benefits of SOS with orderings and the properties of the presented definitions of meaning." ] }
1908.06550
2964221281
In two earlier papers we derived congruence formats with regard to transition system specifications for weak semantics on the basis of a decomposition method for modal formulas. The idea is that a congruence format for a semantics must ensure that the formulas in the modal characterisation of this semantics are always decomposed into formulas that are again in this modal characterisation. The stability and divergence requirements that are imposed on many of the known weak semantics have so far been outside the realm of this method. Stability refers to the absence of a τ-transition. We show, using the decomposition method, how congruence formats can be relaxed for weak semantics that are stability-respecting. This relaxation for instance brings the priority operator within the range of the stability-respecting branching bisimulation format. Divergence, which refers to the presence of an infinite sequence of τ-transitions, escapes the inductive decomposition method. We circumvent this problem by proving that a congruence format for a stability-respecting weak semantics is also a congruence format for its divergence-preserving counterpart.
If patience rules are not allowed to have a lower priority than other rules, then the (r)bbo format, upon translation from OSOS to GSOS, can be seen as a subformat of our (rooted) stability-respecting branching bisimulation format. The basic idea is that in the rbbo format all arguments of so-called @math -preserving function symbols @cite_0 , which are the only ones allowed to occur in targets, are declared @math -liquid; in the bbo format, all arguments of function symbols are declared @math -liquid. Moreover, all arguments of function symbols that occur as the left-hand side of a positive premise are declared @math -liquid. In the (r)bbo format, however, patience rules are under strict conditions allowed to be dominated by other rules, which in our setting gives rise to patience rules with negative premises; this is outside the realm of our rooted stability-respecting branching bisimulation format. On the other hand, the TSSs of the process algebra BPA @math , the binary Kleene star, and deadlock testing (see @cite_19 @cite_5 ), for which rooted convergent branching bisimulation equivalence is a congruence, are outside the rbbo format but within the rooted stability-respecting branching bisimulation format.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_5" ], "mid": [ "1540576960", "2343655549", "2343519619" ], "abstract": [ "We present a general class of process languages for rooted eager bisimulation preorder and prove its congruence result. Also, we present classes of process languages for the rooted versions of several other weak preorders. The process languages we propose are defined by the Ordered SOS method which combines the traditional SOS approach, based on transition rules, with the ordering on transition rules. This new feature specifies the order of application of rules when deriving transitions of process terms. We support and illustrate our results with examples of process operators and process languages which benefit from the OSOS formulation.", "Earlier we presented a method to decompose modal formulas for processes with the internal action τ; congruence formats for branching and η-bisimilarity were derived on the basis of this decomposition method. The idea is that a congruence format for a semantics must ensure that formulas in the modal characterisation of this semantics are always decomposed into formulas in this modal characterisation. Here the decomposition method is enhanced to deal with modal characterisations that contain a modality 〈e〉〈a〉p, to derive congruence formats for delay and weak bisimilarity.", "Earlier we presented a method to decompose modal formulas for processes with the internal action τ , and congruence formats for branching and η -bisimilarity were derived on the basis of this decomposition method. The idea is that a congruence format for a semantics must ensure that the formulas in the modal characterisation of this semantics are always decomposed into formulas that are again in this modal characterisation. In this follow-up paper the decomposition method is enhanced to deal with modal characterisations that contain a modality 〈ϵ〉〈a〉φ 〈 ϵ 〉 〈 a 〉 φ , to derive congruence formats for delay and weak bisimilarity." ] }
1908.06560
2967118052
Until now, researchers have proposed several novel heterogeneous defect prediction (HDP) methods with promising performance. To the best of our knowledge, whether HDP methods can perform significantly better than unsupervised methods has not yet been thoroughly investigated. In this article, we perform a replication study to take a holistic look at this issue. In particular, we compare five state-of-the-art HDP methods with five unsupervised methods. Final results surprisingly show that these HDP methods do not perform significantly better than some of the unsupervised methods (especially the simple unsupervised methods proposed by ) in terms of two non-effort-aware performance measures and four effort-aware performance measures. Then, we perform a diversity analysis on defective modules via McNemar's test and find the prediction diversity is more obvious when the comparison is performed between the HDP methods and the unsupervised methods than in comparisons only between the HDP methods or only between the unsupervised methods. This shows the HDP methods and the unsupervised methods are complementary to each other in identifying defective modules to some extent. Finally, we investigate the feasibility of the five HDP methods by considering two satisfactory criteria recommended by previous CPDP studies and find the satisfactory ratio of these HDP methods is still pessimistic. The above empirical results imply that there is still a long way for heterogeneous defect prediction to go. More effective HDP methods need to be designed and the unsupervised methods should be considered as baselines.
Researchers have conducted a series of empirical studies investigating the feasibility of CPDP on real-world software projects. @cite_75 analyzed 12 real-world projects from open-source communities and Microsoft Corporation; after running 622 cross-project predictions, they found that only 3.4% of them worked. @cite_65 analyzed another 10 open-source projects; after running 160,586 cross-project predictions, they found that only 0.32% were successful. @cite_1 @cite_42 likewise found unsatisfactory CPDP performance in the context of just-in-time (change-level) software defect prediction @cite_57 . @cite_48 investigated the feasibility of CPDP in terms of effort-aware performance measures (i.e., taking the limited availability of SQA resources into consideration); they surprisingly found that the performance of CPDP is no worse than WPDP and significantly better than a random method. Turhan @cite_9 explained the poor performance of CPDP via the dataset shift concept, classifying the different forms of dataset shift into 6 categories, such as covariate shift and prior probability shift. His analysis forms the theoretical support for follow-up studies on CPDP.
{ "cite_N": [ "@cite_48", "@cite_9", "@cite_42", "@cite_65", "@cite_1", "@cite_57", "@cite_75" ], "mid": [ "2074218040", "1964544799", "2276400542", "2008596407", "1994248747", "2147386665", "2046830558" ], "abstract": [ "There has been a great deal of interest in defect prediction: using prediction models trained on historical data to help focus quality-control resources in ongoing development. Since most new projects don't have historical data, there is interest in cross-project prediction: using data from one project to predict defects in another. Sadly, results in this area have largely been disheartening. Most experiments in cross-project defect prediction report poor performance, using the standard measures of precision, recall and F-score. We argue that these IR-based measures, while broadly applicable, are not as well suited for the quality-control settings in which defect prediction models are used. Specifically, these measures are taken at specific threshold settings (typically thresholds of the predicted probability of defectiveness returned by a logistic regression model). However, in practice, software quality control processes choose from a range of time-and-cost vs quality tradeoffs: how many files shall we test? how many shall we inspect? Thus, we argue that measures based on a variety of tradeoffs, viz., 5 , 10 or 20 of files tested inspected would be more suitable. We study cross-project defect prediction from this perspective. We find that cross-project prediction performance is no worse than within-project performance, and substantially better than random prediction!", "A core assumption of any prediction model is that test data distribution does not differ from training data distribution. Prediction models used in software engineering are no exception. In reality, this assumption can be violated in many ways resulting in inconsistent and non-transferrable observations across different cases. The goal of this paper is to explain the phenomena of conclusion instability through the dataset shift concept from software effort and fault prediction perspective. Different types of dataset shift are explained with examples from software engineering, and techniques for addressing associated problems are discussed. While dataset shifts in the form of sample selection bias and imbalanced data are well-known in software engineering research, understanding other types is relevant for possible interpretations of the non-transferable results across different sites and studies. Software engineering community should be aware of and account for the dataset shift related issues when evaluating the validity of research outcomes.", "Unlike traditional defect prediction models that identify defect-prone modules, Just-In-Time (JIT) defect prediction models identify defect-inducing changes. As such, JIT defect models can provide earlier feedback for developers, while design decisions are still fresh in their minds. Unfortunately, similar to traditional defect models, JIT models require a large amount of training data, which is not available when projects are in initial development phases. To address this limitation in traditional defect prediction, prior work has proposed cross-project models, i.e., models learned from other projects with sufficient history. However, cross-project models have not yet been explored in the context of JIT prediction. Therefore, in this study, we empirically evaluate the performance of JIT models in a cross-project context. 
Through an empirical study on 11 open source projects, we find that while JIT models rarely perform well in a cross-project context, their performance tends to improve when using approaches that: (1) select models trained using other projects that are similar to the testing project, (2) combine the data of several other projects to produce a larger pool of training data, and (3) combine the models of several other projects to produce an ensemble model. Our findings empirically confirm that JIT models learned using other projects are a viable solution for projects with limited historical data. However, JIT models tend to perform best in a cross-project context when the data used to learn them are carefully selected.", "Software defect prediction helps to optimize testing resources allocation by identifying defect-prone modules prior to testing. Most existing models build their prediction capability based on a set of historical data, presumably from the same or similar project settings as those under prediction. However, such historical data is not always available in practice. One potential way of predicting defects in projects without historical data is to learn predictors from data of other projects. This paper investigates defect predictions in the cross-project context focusing on the selection of training data. We conduct three large-scale experiments on 34 data sets obtained from 10 open source projects. Major conclusions from our experiments include: (1) in the best cases, training data from other projects can provide better prediction results than training data from the same project; (2) the prediction results obtained using training data from other projects meet our criteria for acceptance on the average level, defects in 18 out of 34 cases were predicted at a Recall greater than 70 and a Precision greater than 50 ; (3) results of cross-project defect predictions are related with the distributional characteristics of data sets which are valuable for training data selection. We further propose an approach to automatically select suitable training data for projects without historical data. Prediction results provided by the training data selected by using our approach are comparable with those provided by training data from the same project.", "Prior research suggests that predicting defect-inducing changes, i.e., Just-In-Time (JIT) defect prediction is a more practical alternative to traditional defect prediction techniques, providing immediate feedback while design decisions are still fresh in the minds of developers. Unfortunately, similar to traditional defect prediction models, JIT models require a large amount of training data, which is not available when projects are in initial development phases. To address this flaw in traditional defect prediction, prior work has proposed cross-project models, i.e., models learned from older projects with sufficient history. However, cross-project models have not yet been explored in the context of JIT prediction. Therefore, in this study, we empirically evaluate the performance of JIT cross-project models. Through a case study on 11 open source projects, we find that in a JIT cross-project context: (1) high performance within-project models rarely perform well; (2) models trained on projects that have similar correlations between predictor and dependent variables often perform well; and (3) ensemble learning techniques that leverage historical data from several other projects (e.g., voting experts) often perform well. 
Our findings empirically confirm that JIT cross-project models learned using other projects are a viable solution for projects with little historical data. However, JIT cross-project models perform best when the data used to learn them is carefully selected.", "Defect prediction models are a well-known technique for identifying defect-prone files or packages such that practitioners can allocate their quality assurance efforts (e.g., testing and code reviews). However, once the critical files or packages have been identified, developers still need to spend considerable time drilling down to the functions or even code snippets that should be reviewed or tested. This makes the approach too time consuming and impractical for large software systems. Instead, we consider defect prediction models that focus on identifying defect-prone (“risky”) software changes instead of files or packages. We refer to this type of quality assurance activity as “Just-In-Time Quality Assurance,” because developers can review and test these risky changes while they are still fresh in their minds (i.e., at check-in time). To build a change risk model, we use a wide range of factors based on the characteristics of a software change, such as the number of added lines, and developer experience. A large-scale study of six open source and five commercial projects from multiple domains shows that our models can predict whether or not a change will lead to a defect with an average accuracy of 68 percent and an average recall of 64 percent. Furthermore, when considering the effort needed to review changes, we find that using only 20 percent of the effort it would take to inspect all changes, we can identify 35 percent of all defect-inducing changes. Our findings indicate that “Just-In-Time Quality Assurance” may provide an effort-reducing way to focus on the most risky changes and thus reduce the costs of developing high-quality software.", "Prediction of software defects works well within projects as long as there is a sufficient amount of data available to train any models. However, this is rarely the case for new software projects and for many companies. So far, only a few have studies focused on transferring prediction models from one project to another. In this paper, we study cross-project defect prediction models on a large scale. For 12 real-world applications, we ran 622 cross-project predictions. Our results indicate that cross-project prediction is a serious challenge, i.e., simply using models from projects in the same domain or with the same process does not lead to accurate predictions. To help software engineers choose models wisely, we identified factors that do influence the success of cross-project predictions. We also derived decision trees that can provide early estimates for precision, recall, and accuracy before a prediction is attempted." ] }
1908.06709
2968767118
In automatic speech recognition, often little training data is available for specific challenging tasks, but training of state-of-the-art automatic speech recognition systems requires large amounts of annotated speech. To address this issue, we propose a two-stage approach to acoustic modeling that combines noise and reverberation data augmentation with transfer learning to robustly address challenges such as difficult acoustic recording conditions, spontaneous speech, and speech of elderly people. We evaluate our approach using the example of German oral history interviews, where a relative average reduction of the word error rate by 19.3% is achieved.
Applying data augmentation is a common approach to increase the amount of training data and improve the robustness of a model. In ASR it can be used, e.g., for multi-condition training when no real data in the desired condition is available. Data augmentation is, however, limited to acoustic effects that can be created in a sufficiently realistic manner, such as additive noise and reverberation. Data augmentation with reverberant speech for state-of-the-art LF-MMI models has been studied in @cite_15 . Several speed perturbation techniques for increasing the training data variance have been investigated in @cite_20 ; the method proposed there increases the data three-fold by creating two additional versions of each signal using the constant speed factors @math and @math , a method that is used by default in many recent Kaldi training routines (a minimal sketch follows the reference data below).
{ "cite_N": [ "@cite_15", "@cite_20" ], "mid": [ "2696967604", "2407080277" ], "abstract": [ "The environmental robustness of DNN-based acoustic models can be significantly improved by using multi-condition training data. However, as data collection is a costly proposition, simulation of the desired conditions is a frequently adopted strategy. In this paper we detail a data augmentation approach for far-field ASR. We examine the impact of using simulated room impulse responses (RIRs), as real RIRs can be difficult to acquire, and also the effect of adding point-source noises. We find that the performance gap between using simulated and real RIRs can be eliminated when point-source noises are added. Further we show that the trained acoustic models not only perform well in the distant-talking scenario but also provide better results in the close-talking scenario. We evaluate our approach on several LVCSR tasks which can adequately represent both scenarios.", "Data augmentation is a common strategy adopted to increase the quantity of training data, avoid overfitting and improve robustness of the models. In this paper, we investigate audio-level speech augmentation methods which directly process the raw signal. The method we particularly recommend is to change the speed of the audio signal, producing 3 versions of the original signal with speed factors of 0.9, 1.0 and 1.1. The proposed technique has a low implementation cost, making it easy to adopt. We present results on 4 different LVCSR tasks with training data ranging from 100 hours to 1000 hours, to examine the effectiveness of audio augmentation in a variety of data scenarios. An average relative improvement of 4.3 was observed across the 4 tasks." ] }
1908.06709
2968767118
In automatic speech recognition, often little training data is available for specific challenging tasks, but training of state-of-the-art automatic speech recognition systems requires large amounts of annotated speech. To address this issue, we propose a two-stage approach to acoustic modeling that combines noise and reverberation data augmentation with transfer learning to robustly address challenges such as difficult acoustic recording conditions, spontaneous speech, and speech of elderly people. We evaluate our approach using the example of German oral history interviews, where a relative average reduction of the word error rate by 19.3% is achieved.
Transfer learning is an approach for transferring the knowledge of a model trained in one scenario to the training of a model in another, related scenario, in order to improve generalization and performance @cite_14 . It is particularly useful when only a small amount of training data is available for the main task but a large amount of annotated speech is available for a similar or related task (a minimal sketch follows the reference data below). A detailed overview of transfer learning in speech and language processing is given by @cite_10 . Transfer learning for ASR systems using LF-MMI models has been studied in @cite_3 for many common English speech recognition tasks.
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_3" ], "mid": [ "391985582", "2963522845", "2786459654" ], "abstract": [ "", "Transfer learning is a vital technique that generalizes models trained for one setting or task to other settings or tasks. For example in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language, with little or no re-training data. Transfer learning is closely related to multi-task learning (cross-lingual vs. multilingual), and is traditionally studied in the name of ‘model adaptation’. Recent advance in deep learning shows that transfer learning becomes much easier and more effective with high-level abstract features learned by deep models, and the ‘transfer’ can be conducted not only between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models). This review paper summarizes some recent prominent research towards this direction, particularly for speech and language processing. We also report some results from our group and highlight the potential of this very interesting research field1.", "It is common in applications of ASR to have a large amount of data out-of-domain to the test data and a smaller amount of in-domain data similar to the test data. In this paper, we investigate different ways to utilize this out-of-domain data to improve ASR models based on Lattice-free MMI (LF-MMI). In particular, we experiment with multi-task training using a network with shared hidden layers; and we try various ways of adapting previously trained models to a new domain. Both types of methods are effective in reducing the WER versus in-domain models, with the jointly trained models generally giving more improvement." ] }
1908.06570
2966882610
First, a new perspective on placement delivery array (PDA) design based on binary matrices is introduced, by which the PDA design problem can be simplified. From this new perspective, and based on some families of combinatorial designs, new schemes with low subpacketization for the centralized coded caching problem are constructed. We also give a technique for constructing new coded caching schemes from known schemes based on the direct product of PDAs.
Although optimal in rate, the caching scheme in @cite_22 has a limitation in practical implementations: it requires each file to be divided into @math packets (the number @math is also referred to as the file size or subpacketization in some literature), which grows exponentially with @math @cite_9 . For practical applications, it is important to construct coded caching schemes with a smaller packet number (a worked example of this growth follows the reference data below).
{ "cite_N": [ "@cite_9", "@cite_22" ], "mid": [ "2517585112", "2106248279" ], "abstract": [ "We study a noiseless broadcast link serving @math users whose requests arise from a library of @math files. Every user is equipped with a cache of size @math files each. It has been shown that by splitting all the files into packets and placing individual packets in a random independent manner across all the caches prior to any transmission, at most @math file transmissions are required for any set of demands from the library. The achievable delivery scheme involves linearly combining packets of different files following a greedy clique cover solution to the underlying index coding problem. This remarkable multiplicative gain of random placement and coded delivery has been established in the asymptotic regime when the number of packets per file @math scales to infinity. The asymptotic coding gain obtained is roughly @math . In this paper, we initiate the finite-length analysis of random caching schemes when the number of packets @math is a function of the system parameters @math , and @math . Specifically, we show that the existing random placement and clique cover delivery schemes that achieve optimality in the asymptotic regime can have at most a multiplicative gain of 2 even if the number of packets is exponential in the asymptotic gain @math . Furthermore, for any clique cover-based coded delivery and a large class of random placement schemes that include the existing ones, we show that the number of packets required to get a multiplicative gain of @math is at least @math . We design a new random placement and an efficient clique cover-based delivery scheme that achieves this lower bound approximately. We also provide tight concentration results that show that the average (over the random placement involved) number of transmissions concentrates very well requiring only a polynomial number of packets in the rest of the system parameters.", "Caching is a technique to reduce peak traffic rates by prefetching popular content into memories at the end users. Conventionally, these memories are used to deliver requested content in part from a locally cached copy rather than through the network. The gain offered by this approach, which we term local caching gain, depends on the local cache size (i.e., the memory available at each individual user). In this paper, we introduce and exploit a second, global, caching gain not utilized by conventional caching schemes. This gain depends on the aggregate global cache size (i.e., the cumulative memory available at all users), even though there is no cooperation among the users. To evaluate and isolate these two gains, we introduce an information-theoretic formulation of the caching problem focusing on its basic structure. For this setting, we propose a novel coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared with previously known schemes. In particular, the improvement can be on the order of the number of users in the network. In addition, we argue that the performance of the proposed scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters." ] }
1908.06570
2966882610
First, a new perspective on placement delivery array (PDA) design based on binary matrices is introduced, by which the PDA design problem can be simplified. From this new perspective, and based on some families of combinatorial designs, new schemes with low subpacketization for the centralized coded caching problem are constructed. We also give a technique for constructing new coded caching schemes from known schemes based on the direct product of PDAs.
So far, several coded caching schemes with reduced file size have been constructed, all at the cost of an increased rate. In @cite_2 , a class of coded caching schemes with linear file size @math (i.e., @math ) was constructed from Ruzsa-Szemerédi graphs. A very interesting framework for constructing centralized coded caching schemes, named placement delivery array design (or PDA design for simplicity; its defining conditions are sketched in code after the reference data below), was introduced in @cite_14 , and some new classes of coded caching schemes were constructed by PDA design in references @cite_14 and @cite_6 . In @cite_5 , two classes of constant-rate caching schemes with sub-exponential subpacketization were obtained from @math -free @math -partite hypergraphs. A more general class of coded caching schemes was constructed in @cite_7 from strong edge colored bipartite graphs. References @cite_1 and @cite_0 constructed coded caching schemes using projective geometries over finite fields, and reference @cite_18 constructed schemes based on resolvable combinatorial designs arising from certain linear block codes. Some classes of coded caching schemes from other block designs, including balanced incomplete block designs (BIBDs), @math -designs, and transversal designs (TDs), are obtained in @cite_17 . Summaries of known centralized coded caching schemes can be found in @cite_6 @cite_5 and @cite_1 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_7", "@cite_1", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_17" ], "mid": [ "2793858273", "2975151417", "2539811847", "", "2915582513", "2946479852", "", "2963720294", "" ], "abstract": [ "Coded caching is a technique that generalizes conventional caching and promises significant reductions in traffic over caching networks. However, the basic coded caching scheme requires that each file hosted in the server be partitioned into a large number (i.e., the subpacketization level) of non-overlapping subfiles. From a practical perspective, this is problematic as it means that prior schemes are only applicable when the size of the files is extremely large. In this paper, we propose coded caching schemes based on combinatorial structures called resolvable designs. These structures can be obtained in a natural manner from linear block codes whose generator matrices possess certain rank properties. We obtain several schemes with subpacketization levels substantially lower than the basic scheme at the cost of an increased rate. Depending on the system parameters, our approach allows us to operate at various points on the subpacketization level vs. rate tradeoff.", "Caching is a promising solution to satisfy the ever-increasing demands for the multi-media traffics. In caching networks, coded caching is a recently proposed technique that achieves significant performance gains over the uncoded caching schemes. However, to implement the coded caching schemes, each file has to be split into @math packets, which usually increases exponentially with the number of users @math . Thus, designing caching schemes that decrease the order of @math is meaningful for practical implementations. In this paper, by reviewing the Ali-Niesen caching scheme, the placement delivery array (PDA) design problem is first formulated to characterize the placement issue and the delivery issue with a single array. Moreover, we show that, through designing appropriate PDA, new centralized coded caching schemes can be discovered. Second, it is shown that the Ali-Niesen scheme corresponds to a special class of PDA, which realizes the best coding gain with the least @math . Third, we present a new construction of PDA for the centralized coded caching system, wherein the cache size @math at each user (identical cache size is assumed at all users) and the number of files @math satisfies @math or @math ( @math is an integer, such that @math ). The new construction can decrease the required @math from the order @math of Ali-Niesen scheme to @math or @math , respectively, while the coding gain loss is only 1.", "The technique of coded caching proposed by Madddah-Ali and Niesen is a promising approach to alleviate the load of networks during peak-traffic times. Recently, placement delivery array (PDA) was presented to characterize both the placement and delivery phase in a single array for the centralized coded caching algorithm. In this letter, we reinterpret PDA from a new perspective, i.e., the strong edge coloring of bipartite graph (bigraph). We prove that, a PDA is equivalent to a strong edge colored bigraph. Thus, we can construct a class of PDAs from existing structures in bigraphs. The class subsumes the scheme proposed by Maddah- and a more general class of PDAs proposed by as special cases.", "", "Coded caching scheme recently has become quite popular in the wireless network, since the maximum transmission amount @math reduces effectively during the peak-traffic times. 
To realize a coded caching scheme, each file must be divided into @math packets, which usually increases the computation complexity of a coded caching scheme. So we prefer to design a scheme with @math and @math as small as possible in practice. However, there exists a tradeoff between @math and @math . In this paper, we generalize the schemes constructed by (IEEE Transactions on Information Theory, 64, 5755–5766, 2018) and (IEEE Transactions on Information Theory 63, 5821–5833, 2017), respectively. These two classes of schemes have a wider range of application due to the more flexible memory size than the original ones. By comparing with the previous known deterministic schemes, our new schemes have advantages on @math or @math .", "Coded caching scheme, which is an effective technique to increase the transmission efficiency during peak traffic times, has recently become quite popular among the coding community. Generally rate can be measured to the transmission in the peak traffic times, i.e., this efficiency increases with the decreasing of rate. In order to implement a coded caching scheme, each file in the library must be split in a certain number of packets. And this number directly reflects the complexity of a coded caching scheme, i.e., the complexity increases with the increasing of the packet number. However there exists a tradeoff between the rate and packet number. So it is meaningful to characterize this tradeoff and design the related Pareto-optimal coded caching schemes with respect to both parameters. Recently, a new concept called placement delivery array (PDA) was proposed to characterize the coded caching scheme. However as far as we know no one has yet proved that one of the previously known PDAs is Pareto-optimal. In this paper, we first derive two lower bounds on the rate under the framework of PDA. Consequently, the PDA proposed by Maddah-Ali and Niesen is Pareto-optimal, and a tradeoff between rate and packet number is obtained for some parameters. Then, from the above observations and the view point of combinatorial design, two new classes of Pareto-optimal PDAs are obtained. Based on these PDAs, the schemes with low rate and packet number are obtained. Finally the performance of some previously known PDAs are estimated by comparing with these two classes of schemes.", "", "The centralized coded caching scheme is a technique proposed by Maddah-Ali and Niesen as a method to reduce the network burden in peak times in a wireless network system. reformulate the problem as designing a corresponding placement delivery array and propose two new schemes from this perspective. These schemes significantly reduce the rate compared with the uncoded caching schemes. However, to implement these schemes, each file should be cut into @math pieces, where @math grows exponentially with the number of users @math . Such a constraint is obviously infeasible in the practical setting, especially when @math is large. Thus, it is desirable to design caching schemes with constant rate @math (independent of @math ) as well as smaller @math . In this paper, we view the centralized coded caching problem in a hypergraph perspective and show that designing a feasible placement delivery array is equivalent to constructing a linear and (6,3)-free 3-uniform 3-partite hypergraph. Several new results and constructions arise from our novel point of view. 
First, by using the famous (6,3)-theorem in extremal graph theory, we show that constant rate placement delivery arrays with @math growing linearly with @math do not exist. Second, we present two infinite classes of placement delivery arrays to show that constant rate caching schemes with @math growing sub-exponentially with @math do exist.", "" ] }
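As a concrete illustration of the PDA framework mentioned above, the following checker tests the usual defining conditions of a (K, F, Z, S) placement delivery array as they appear in the PDA literature; details such as the symbol encoding and function names are assumptions of this sketch.

```python
STAR = "*"

def is_pda(P, Z):
    """Check the defining conditions of a (K, F, Z, S) placement delivery
    array, following the usual definition in the PDA literature:
      C1: '*' appears exactly Z times in every column;
      C2: if two distinct entries hold the same integer s, they lie in
          distinct rows and distinct columns, and the two "crossing"
          entries are both '*'.
    P is a list of F rows of length K over {'*'} plus integers.
    """
    F, K = len(P), len(P[0])
    for k in range(K):                      # C1: star count per column
        if sum(P[j][k] == STAR for j in range(F)) != Z:
            return False
    cells = [(j, k) for j in range(F) for k in range(K) if P[j][k] != STAR]
    for i, (j1, k1) in enumerate(cells):    # C2: pairwise check
        for (j2, k2) in cells[i + 1:]:
            if P[j1][k1] == P[j2][k2]:
                if j1 == j2 or k1 == k2:
                    return False
                if P[j1][k2] != STAR or P[j2][k1] != STAR:
                    return False
    return True

# Example: the scheme of @cite_22 for K=3 users and t=1 corresponds to a
# (3, 3, 1, 3) PDA (integers label the multicast messages):
P = [["*", 1, 2],
     [1, "*", 3],
     [2, 3, "*"]]
assert is_pda(P, Z=1)
```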
1908.06209
2968708664
Energy minimization methods are a classical tool in a multitude of computer vision applications. While they are interpretable and well-studied, their regularity assumptions are difficult to design by hand. Deep learning techniques, on the other hand, are purely data-driven and often provide excellent results, but are very difficult to constrain to predefined physical or safety-critical models. A possible combination of the two approaches is to design a parametric energy and train the free parameters in such a way that minimizers of the energy correspond to the desired solutions on a set of training examples. Unfortunately, such formulations typically lead to bi-level optimization problems, for which common optimization algorithms are difficult to scale to modern requirements in data processing and efficiency. In this work, we present a new strategy to optimize these bi-level problems. We investigate surrogate single-level problems that majorize the target problems and can be implemented with existing tools, leading to efficient algorithms without collapse of the energy function. This framework of strategies enables new avenues for the training of parameterized energy minimization models from large data.
The straightforward way of optimizing bi-level problems is gradient descent on the reduced higher-level objective @cite_49 @cite_2 @cite_57 . These methods directly differentiate the higher-level loss function with respect to the minimizing argument and descend in the direction of this gradient. An incomplete list of examples in image processing is @cite_29 @cite_23 @cite_7 @cite_94 @cite_100 @cite_76 @cite_64 @cite_86 @cite_58 . This strategy requires both the higher- and lower-level problems to be smooth and the minimizing map to be invertible, which is usually facilitated by implicit differentiation, as discussed in @cite_69 @cite_60 @cite_94 @cite_23 (a sketch of the implicit gradient follows the reference data below). More generally, the problem of directly minimizing @math without assuming smoothness in @math leads to mathematical programs with equilibrium constraints (MPECs); see @cite_80 for a discussion in terms of machine learning, or @cite_85 @cite_41 @cite_40 and @cite_57 . This approach also applies to the optimization layers of @cite_79 , which lend themselves well to a reformulation as a bi-level optimization problem.
{ "cite_N": [ "@cite_64", "@cite_41", "@cite_29", "@cite_85", "@cite_2", "@cite_58", "@cite_69", "@cite_60", "@cite_23", "@cite_49", "@cite_80", "@cite_7", "@cite_57", "@cite_79", "@cite_40", "@cite_86", "@cite_76", "@cite_94", "@cite_100" ], "mid": [ "2505728881", "", "", "", "2001759562", "2527769453", "2130248603", "1970163214", "2049893860", "2069900289", "2156417353", "1569761842", "2124659975", "2592457170", "", "2732710875", "", "2962932385", "" ], "abstract": [ "Some recent works in machine learning and computer vision involve the solution of a bi-level optimization problem. Here the solution of a parameterized lower-level problem binds variables that appear in the objective of an upper-level problem. The lower-level problem typically appears as an argmin or argmax optimization problem. Many techniques have been proposed to solve bi-level optimization problems, including gradient descent, which is popular with current end-to-end learning approaches. In this technical report we collect some results on differentiating argmin and argmax optimization problems with and without constraints and provide some insightful motivating examples.", "", "", "", "In this paper, we give necessary optimality conditions for the nonlinear bilevel programming problem. Furthermore, at each feasible point, we show that the steepest descent direction is obtained by solving a quadratic bilevel programming problem. We give indication that this direction can be used to develop a descent algorithm for the nonlinear bilevel problem.", "Based on the weighted total variation model and its analysis pursued in Hintermuller and Rautenberg 2016, in this paper a continuous, i.e., infinite dimensional, projected gradient algorithm and its convergence analysis are presented. The method computes a stationary point of a regularized bilevel optimization problem for simultaneously recovering the image as well as determining a spatially distributed regularization weight. Further, its numerical realization is discussed and results obtained for image denoising and deblurring as well as Fourier and wavelet inpainting are reported on.", "We present a new approach for the discriminative training of continuous-valued Markov Random Field (MRF) model parameters. In our approach we train the MRF model by optimizing the parameters so that the minimum energy solution of the model is as similar as possible to the ground-truth. This leads to parameters which are directly optimized to increase the quality of the MAP estimates during inference. Our proposed technique allows us to develop a framework that is flexible and intuitively easy to understand and implement, which makes it an attractive alternative to learn the parameters of a continuous-valued MRF model. We demonstrate the effectiveness of our technique by applying it to the problems of image denoising and in-painting using the Field of Experts model. In our experiments, the performance of our system compares favourably to the Field of Experts model trained using contrastive divergence when applied to the denoising and in-painting tasks.", "In this work we consider the problem of parameter learning for variational image denoising models. The learning problem is formulated as a bilevel optimization problem, where the lower-level problem is given by the variational model and the higher-level problem is expressed by means of a loss function that penalizes errors between the solution of the lower-level problem and the ground truth data. 
We consider a class of image denoising models incorporating @math -norm--based analysis priors using a fixed set of linear operators. We devise semismooth Newton methods for solving the resulting nonsmooth bilevel optimization problems and show that the optimized image denoising models can achieve state-of-the-art performance.", "This paper addresses a new learning algorithm for the recently introduced co-sparse analysis model. First, we give new insights into the co-sparse analysis model by establishing connections to filter-based MRF models, such as the field of experts model of Roth and Black. For training, we introduce a technique called bi-level optimization to learn the analysis operators. Compared with existing analysis operator learning approaches, our training procedure has the advantage that it is unconstrained with respect to the analysis operator. We investigate the effect of different aspects of the co-sparse analysis model and show that the sparsity promoting function (also called penalty function) is the most important factor in the model. In order to demonstrate the effectiveness of our training approach, we apply our trained models to various classical image restoration problems. Numerical experiments show that our trained models clearly outperform existing analysis operator learning approaches and are on par with state-of-the-art image denoising algorithms. Our approach develops a framework that is intuitive to understand and easy to implement.", "A bilevel program is a mathematical program involving functions defined implicitly as solutions to another mathematical program. We discuss a method for extracting derivative information on the implicit function, which is especially efficient when the lower-level problem has simple bounds on the variables and or many inactive constraints. Computational experience on problems with up to 230 variables and 30 constraints is presented.", "We examine the interplay of optimization and machine learning. Great progress has been made in machine learning by cleverly reducing machine learning problems to convex optimization problems with one or more hyper-parameters. The availability of powerful convex-programming theory and algorithms has enabled a flood of new research in machine learning models and methods. But many of the steps necessary for successful machine learning models fall outside of the convex machine learning paradigm. Thus we now propose framing machine learning problems as Stackelberg games. The resulting bilevel optimization problem allows for efficient systematic search of large numbers of hyper-parameters. We discuss recent progress in solving these bilevel problems and the many interesting optimization challenges that remain. Finally, we investigate the intriguing possibility of novel machine learning models enabled by bilevel programming.", "We consider the analysis operator and synthesis dictionary learning problems based on the the @math regularized sparse representation model. We reveal the internal relations between the @math -based analysis model and synthesis model. We then introduce an approach to learn both analysis operator and synthesis dictionary simultaneously by using a unified framework of bi-level optimization. Our aim is to learn a meaningful operator (dictionary) such that the minimum energy solution of the analysis (synthesis)-prior based model is as close as possible to the ground-truth. We solve the bi-level optimization problem using the implicit differentiation technique. 
Moreover, we demonstrate the effectiveness of our leaning approach by applying the learned analysis operator (dictionary) to the image denoising task and comparing its performance with state-of-the-art methods. Under this unified framework, we can compare the performance of the two types of priors.", "This paper is devoted to bilevel optimization, a branch of mathematical programming of both practical and theoretical interest. Starting with a simple example, we proceed towards a general formulation. We then present fields of application, focus on solution approaches, and make the connection with MPECs (Mathematical Programs with Equilibrium Constraints).", "This paper presents OptNet, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end trainable deep networks. These layers encode constraints and complex dependencies between the hidden states that traditional convolutional and fully-connected layers often cannot capture. In this paper, we explore the foundations for such an architecture: we show how techniques from sensitivity analysis, bilevel optimization, and implicit differentiation can be used to exactly differentiate through these layers and with respect to layer parameters; we develop a highly efficient solver for these layers that exploits fast GPU-based batch solves within a primal-dual interior point method, and which provides backpropagation gradients with virtually no additional cost on top of the solve; and we highlight the application of these approaches in several problems. In one notable example, we show that the method is capable of learning to play mini-Sudoku (4x4) given just input and output games, with no a priori information about the rules of the game; this highlights the ability of our architecture to learn hard constraints better than other neural architectures.", "", "A weighted total variation model with a spatially varying regularization weight is considered. Existence of a solution is shown, and the associated Fenchel predual problem is derived. For automatically selecting the regularization function, a bilevel optimization framework is proposed. In this context, the lower-level problem, which is parameterized by the regularization weight, is the Fenchel predual of the weighted total variation model and the upper-level objective penalizes violations of a variance corridor. The latter object relies on a localization of the image residual as well as on lower and upper bounds inspired by the statistics of the extremes.", "", "Inpainting based image compression ap- proaches, especially linear and non-linear diffusion models, are an active research topic for lossy image compression. The major challenge in these compression models is to find a small set of descriptive supporting points, which allow for an accurate reconstruction of the original image. It turns out in practice that this is a challenging problem even for the simplest Laplacian interpolation model. In this paper, we revisit the Laplacian interpolation compression model and introduce two fast algorithms, namely successive preconditioning primal dual algorithm and the recently proposed iPiano algorithm, to solve this problem efficiently. Furthermore, we extend the Laplacian interpolation based compression model to a more general form, which is based on principles from bi-level optimization. 
We investigate two different variants of the Laplacian model, namely biharmonic interpolation and smoothed Total Variation regularization. Our numerical results show that significant improvements can be obtained from the biharmonic inter- polation model, and it can recover an image with very high quality from only 5 pixels.", "" ] }
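The implicit differentiation mentioned above rests on the lower-level optimality condition: if x*(theta) satisfies grad_x E(x*, theta) = 0 and the Hessian is invertible, the implicit function theorem gives dx*/dtheta = -(H)^{-1} * d(grad_x E)/dtheta with H the Hessian in x, so the gradient of an upper-level loss L(x*(theta)) is available without differentiating through a solver. Below is a minimal sketch for a toy smooth lower-level energy; the energy, the loss, and all names are illustrative assumptions, not any of the cited methods.

```python
import numpy as np

def hypergradient(A, b, x_gt, theta):
    """dL/dtheta of L(x*(theta)) = 0.5*||x*(theta) - x_gt||^2, where
    x*(theta) = argmin_x 0.5*||Ax - b||^2 + (theta/2)*||x||^2.

    Implicit function theorem on the optimality condition grad_x E = 0:
        dx*/dtheta = -H^{-1} * d(grad_x E)/dtheta,
    with H = A^T A + theta*I and d(grad_x E)/dtheta = x*.
    """
    n = A.shape[1]
    H = A.T @ A + theta * np.eye(n)
    x_star = np.linalg.solve(H, A.T @ b)  # closed-form lower-level minimizer
    v = np.linalg.solve(H, x_star)        # H^{-1} x*  (H is symmetric)
    return -(x_star - x_gt) @ v

# Sanity check against a central finite difference (hypothetical toy data):
rng = np.random.default_rng(1)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
x_gt, th, eps = rng.normal(size=10), 0.5, 1e-6

def loss(t):
    x = np.linalg.solve(A.T @ A + t * np.eye(10), A.T @ b)
    return 0.5 * np.sum((x - x_gt) ** 2)

print(hypergradient(A, b, x_gt, th),
      (loss(th + eps) - loss(th - eps)) / (2 * eps))
```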
1908.06209
2968708664
Energy minimization methods are a classical tool in a multitude of computer vision applications. While they are interpretable and well-studied, their regularity assumptions are difficult to design by hand. Deep learning techniques, on the other hand, are purely data-driven and often provide excellent results, but are very difficult to constrain to predefined physical or safety-critical models. A possible combination of the two approaches is to design a parametric energy and train the free parameters in such a way that minimizers of the energy correspond to the desired solutions on a set of training examples. Unfortunately, such formulations typically lead to bi-level optimization problems, for which common optimization algorithms are difficult to scale to modern requirements in data processing and efficiency. In this work, we present a new strategy to optimize these bi-level problems. We investigate surrogate single-level problems that majorize the target problems and can be implemented with existing tools, leading to efficient algorithms without collapse of the energy function. This framework of strategies enables new avenues for the training of parameterized energy minimization models from large data.
Unrolling the lower-level solver is a prominent strategy in applied bi-level optimization across fields, e.g., in the MRF literature @cite_59 @cite_10 , in deep learning @cite_11 @cite_28 @cite_14 @cite_87 , and in variational settings @cite_71 @cite_97 @cite_54 @cite_26 @cite_75 @cite_39 . The problem is transformed into a single-level problem by choosing an optimization algorithm @math that produces an approximate solution to the lower-level problem after a fixed number of iterations; @math is then replaced by @math . Automatic differentiation @cite_61 allows for an efficient evaluation of the gradient of the upper-level loss w.r.t. this reduced objective (a minimal sketch is given at the end of this section). In general these strategies are very successful in practice: they combine the model and its optimization method into a single feed-forward process, where the model is again only implicitly present. Later works @cite_73 @cite_89 @cite_102 @cite_75 allow the lower-level parameters to change between the fixed number of iterations, leading to structures that model differential equations and stray further from the underlying model. As pointed out in @cite_17 , these strategies are more aptly considered as a set of nested quadratic lower-level problems.
{ "cite_N": [ "@cite_61", "@cite_14", "@cite_26", "@cite_75", "@cite_87", "@cite_28", "@cite_97", "@cite_54", "@cite_102", "@cite_17", "@cite_39", "@cite_89", "@cite_59", "@cite_71", "@cite_73", "@cite_10", "@cite_11" ], "mid": [ "1585773866", "2333621733", "2604388535", "", "", "2412782625", "2899901372", "2783844464", "", "2437754452", "", "1906770428", "2147535523", "2198630576", "2953319141", "", "" ], "abstract": [ "Algorithmic, or automatic, differentiation (AD) is a growing area of theoretical research and software development concerned with the accurate and efficient evaluation of derivatives for function evaluations given as computer programs. The resulting derivative values are useful for all scientific computations that are based on linear, quadratic, or higher order approximations to nonlinear scalar or vector functions. AD has been applied in particular to optimization, parameter identification, nonlinear equation solving, the numerical integration of differential equations, and combinations of these. Apart from quantifying sensitivities numerically, AD also yields structural dependence information, such as the sparsity pattern and generic rank of Jacobian matrices. The field opens up an exciting opportunity to develop new algorithms that reflect the true cost of accurate derivatives and to use them for improvements in speed and reliability. This second edition has been updated and expanded to cover recent developments in applications and theory, including an elegant NP completeness argument by Uwe Naumann and a brief introduction to scarcity, a generalization of sparsity. There is also added material on checkpointing and iterative differentiation. To improve readability the more detailed analysis of memory and complexity bounds has been relegated to separate, optional chapters.The book consists of three parts: a stand-alone introduction to the fundamentals of AD and its software; a thorough treatment of methods for sparse problems; and final chapters on program-reversal schedules, higher derivatives, nonsmooth problems and iterative processes. Each of the 15 chapters concludes with examples and exercises. Audience: This volume will be valuable to designers of algorithms and software for nonlinear computational problems. Current numerical software users should gain the insight necessary to choose and deploy existing AD software tools to the best advantage. 
Contents: Rules; Preface; Prologue; Mathematical Symbols; Chapter 1: Introduction; Chapter 2: A Framework for Evaluating Functions; Chapter 3: Fundamentals of Forward and Reverse; Chapter 4: Memory Issues and Complexity Bounds; Chapter 5: Repeating and Extending Reverse; Chapter 6: Implementation and Software; Chapter 7: Sparse Forward and Reverse; Chapter 8: Exploiting Sparsity by Compression; Chapter 9: Going beyond Forward and Reverse; Chapter 10: Jacobian and Hessian Accumulation; Chapter 11: Observations on Efficiency; Chapter 12: Reversal Schedules and Checkpointing; Chapter 13: Taylor and Tensor Coefficients; Chapter 14: Differentiation without Differentiability; Chapter 15: Implicit and Iterative Differentiation; Epilogue; List of Figures; List of Tables; Assumptions and Definitions; Propositions, Corollaries, and Lemmas; Bibliography; Index", "In this work we propose a structured prediction technique that combines the virtues of Gaussian Conditional Random Fields (G-CRF) with Deep Learning: (a) our structured prediction task has a unique global optimum that is obtained exactly from the solution of a linear system (b) the gradients of our model parameters are analytically computed using closed form expressions, in contrast to the memory-demanding contemporary deep structured prediction approaches [1, 2] that rely on back-propagation-through-time, (c) our pairwise terms do not have to be simple hand-crafted expressions, as in the line of works building on the DenseCRF [1, 3], but can rather be ‘discovered’ from data through deep architectures, and (d) our system can be trained in an end-to-end manner. Building on standard tools from numerical analysis we develop very efficient algorithms for inference and learning, as well as a customized technique adapted to the semantic segmentation task. This efficiency allows us to explore more sophisticated architectures for structured prediction in deep learning: we introduce multi-resolution architectures to couple information across scales in a joint optimization framework, yielding systematic improvements. We demonstrate the utility of our approach on the challenging VOC PASCAL 2012 image segmentation benchmark, showing substantial improvements over strong baselines. We make all of our code and experiments available at https://github.com/siddharthachandra/gcrf.", "PURPOSE: To allow fast and high-quality reconstruction of clinical accelerated multi-coil MR data by learning a variational network that combines the mathematical structure of variational models with deep learning. THEORY AND METHODS: Generalized compressed sensing reconstruction formulated as a variational model is embedded in an unrolled gradient descent scheme. All parameters of this formulation, including the prior model defined by filter kernels and activation functions as well as the data term weights, are learned during an offline training procedure. The learned model can then be applied online to previously unseen data. RESULTS: The variational network approach is evaluated on a clinical knee imaging protocol for different acceleration factors and sampling patterns using retrospectively and prospectively undersampled data. The variational network reconstructions outperform standard reconstruction algorithms, verified by quantitative error measures and a clinical reader study for regular sampling and acceleration factor 4. 
CONCLUSION: Variational network reconstructions preserve the natural appearance of MR images as well as pathologies that were not included in the training data set. Due to its high computational performance, that is, reconstruction time of 193 ms on a single graphics card, and the omission of parameter tuning once the network is trained, this new approach to image reconstruction can easily be integrated into clinical workflow. Magn Reson Med 79:3055-3071, 2018. © 2017 International Society for Magnetic Resonance in Medicine.", "", "", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "Semantic segmentation and other pixel-level labeling tasks have made significant progress recently due to the deep learning paradigm. Many state-of-the-art structured prediction methods also include a random field model with a hand-crafted Gaussian potential to model spatial priors and label consistencies and feature-based image conditioning. These random field models with image conditioning typically require computationally demanding filtering techniques during inference. In this paper, we present a new inference and learning framework which can learn arbitrary pairwise conditional random field (CRF) potentials. Both standard spatial and high-dimensional bilateral kernels are considered. In addition, we introduce a new type of potential function which is image-dependent like the bilateral kernel, but an order of magnitude faster to compute since only spatial convolutions are employed. It is empirically demonstrated that such learned potentials can improve segmentation accuracy and that certain label-class interactions are indeed better modeled by a non-Gaussian potential. 
Our framework is evaluated on several public benchmarks for semantic segmentation with improved performance compared to previous state-of-the-art CNN+CRF models.", "Are we using the right potential functions in the Conditional Random Field models that are popular in the Vision community? Semantic segmentation and other pixel-level labelling tasks have made significant progress recently due to the deep learning paradigm. However, most state-of-the-art structured prediction methods also include a random field model with a hand-crafted Gaussian potential to model spatial priors, label consistencies and feature-based image conditioning. In this paper, we challenge this view by developing a new inference and learning framework which can learn pairwise CRF potentials restricted only by their dependence on the image pixel values and the size of the support. Both standard spatial and high-dimensional bilateral kernels are considered. Our framework is based on the observation that CRF inference can be achieved via projected gradient descent and consequently, can easily be integrated in deep neural networks to allow for end-to-end training. It is empirically demonstrated that such learned potentials can improve segmentation accuracy and that certain label class interactions are indeed better modelled by a non-Gaussian potential. In addition, we compare our inference method to the commonly used mean-field algorithm. Our framework is evaluated on several public benchmarks for semantic segmentation with improved performance compared to previous state-of-the-art CNN+CRF models.", "", "Demosaicing is an important first step for color image acquisition. For practical reasons, demosaicing algorithms have to be both efficient and yield high quality results in the presence of noise. The demosaicing problem poses several challenges, e.g. zippering and false color artifacts as well as edge blur. In this work, we introduce a novel learning based method that can overcome these challenges. We formulate demosaicing as an image restoration problem and propose to learn efficient regularization inspired by a variational energy minimization framework that can be trained for different sensor layouts. Our algorithm performs joint demosaicing and denoising in close relation to the real physical mosaicing process on a camera sensor. This is achieved by learning a sequence of energy minimization problems composed of a set of RGB filters and corresponding activation functions. We evaluate our algorithm on the Microsoft Demosaicing data set in terms of peak signal to noise ratio (PSNR) and structured similarity index (SSIM). Our algorithm is highly efficient both in image quality and run time. We achieve an improvement of up to 2.6 dB over recent state-of-the-art algorithms.", "", "Image restoration is a long-standing problem in low-level computer vision with many interesting applications. We describe a flexible learning framework based on the concept of nonlinear reaction diffusion models for various image restoration problems. By embodying recent improvements in nonlinear diffusion models, we propose a dynamic nonlinear reaction diffusion model with time-dependent parameters ( i.e. , linear filters and influence functions). In contrast to previous nonlinear diffusion models, all the parameters, including the filters and the influence functions, are simultaneously learned from training data through a loss based approach. We call this approach TNRD— Trainable Nonlinear Reaction Diffusion . 
The TNRD approach is applicable for a variety of image restoration tasks by incorporating appropriate reaction force. We demonstrate its capabilities with three representative applications, Gaussian image denoising, single image super resolution and JPEG deblocking. Experiments show that our trained nonlinear diffusion models largely benefit from the training of the parameters and finally lead to the best reported performance on common test datasets for the tested applications. Our trained models preserve the structural simplicity of diffusion models and take only a small number of diffusion steps, thus are highly efficient. Moreover, they are also well-suited for parallel computation on GPUs, which makes the inference procedure extremely fast.", "Many computer vision problems can be formulated in a Bayesian framework with Markov Random Field (MRF) or Conditional Random Field (CRF) priors. Usually, the model assumes that a full Maximum A Posteriori (MAP) estimation will be performed for inference, which can be really slow in practice. In this paper, we argue that through appropriate training, a MRF CRF model can be trained to perform very well on a suboptimal inference algorithm. The model is trained together with a fast inference algorithm through an optimization of a loss function on a training set containing pairs of input images and desired outputs. A validation set can be used in this approach to estimate the generalization performance of the trained system. We apply the proposed method to an image denoising application, training a Fields of Experts MRF together with a 1-4 iteration gradient descent inference algorithm. Experimental validation on unseen data shows that the proposed training approach obtains an improved benchmark performance as well as a 1000-3000 times speedup compared to the Fields of Experts MRF trained with contrastive divergence. Using the new approach, image denoising can be performed in real-time, at 8 fps on a single CPU for a 256 × 256 image sequence, with close to state-of-the-art accuracy.", "We consider a bilevel optimization approach for parameter learning in nonsmooth variational models. Existing approaches solve this problem by applying implicit differentiation to a sufficiently smooth approximation of the nondifferentiable lower level problem. We propose an alternative method based on differentiating the iterations of a nonlinear primal–dual algorithm. Our method computes exact (sub)gradients and can be applied also in the nonsmooth setting. We show preliminary results for the case of multi-label image segmentation.", "For several decades, image restoration remains an active research topic in low-level computer vision and hence new approaches are constantly emerging. However, many recently proposed algorithms achieve state-of-the-art performance only at the expense of very high computation time, which clearly limits their practical relevance. In this work, we propose a simple but effective approach with both high computational efficiency and high restoration quality. We extend conventional nonlinear reaction diffusion models by several parametrized linear filters as well as several parametrized influence functions. We propose to train the parameters of the filters and the influence functions through a loss based approach. Experiments show that our trained nonlinear reaction diffusion models largely benefit from the training of the parameters and finally lead to the best reported performance on common test datasets for image restoration. 
Due to their structural simplicity, our trained models are highly efficient and are also well-suited for parallel computation on GPUs.", "", "" ] }
1908.06459
2969194430
Let @math be a discrete time Markov chain on a general state space. It is well-known that if @math is aperiodic and satisfies a drift and minorization condition, then it converges to its stationary distribution @math at an exponential rate. We consider the problem of computing upper bounds for the distance from stationarity in terms of the drift and minorization data. Baxendale showed that these bounds improve significantly if one assumes that @math is reversible with nonnegative eigenvalues (i.e. its transition kernel is a self-adjoint operator on @math with spectrum contained in @math ). We identify this phenomenon as a special case of a general principle: for a reversible chain with nonnegative eigenvalues, any strong random time gives direct control over the convergence rate. We formulate this principle precisely and deduce from it a stronger version of Baxendale's result. Our approach is fully quantitative and allows us to convert drift and minorization data into explicit convergence bounds. We show that these bounds are tighter than those of Rosenthal and Baxendale when applied to a well-studied example.
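For reference, the drift and minorization conditions referred to above are usually stated as follows. This is the textbook form with generic symbols; the record masks the paper's own notation as @math.

```latex
% Geometric drift: there exist V >= 1, lambda < 1, b < infinity,
% and a small set C such that
\[
  PV(x) \;\le\; \lambda\, V(x) + b\,\mathbf{1}_C(x) \qquad \text{for all } x .
\]
% Minorization: there exist epsilon > 0 and a probability measure phi with
\[
  P(x,\cdot) \;\ge\; \varepsilon\,\varphi(\cdot) \qquad \text{for all } x \in C .
\]
```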
In the case @math , the decay rate @math of the law of @math was identified by Roberts and Tweedie @cite_18 (who use the notation @math ). Theorem 4.1(i) of @cite_18 is equivalent to a bound of the form \[ \mathbb{P}(T > t) \le (\mathrm{const}) \cdot \mathbb{E}(V)^{r} \, t \, \beta^{t}. \] Theorem slightly improves this result by removing the factor of @math and generalizing to the case @math .
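A minimal Monte Carlo sketch (not from either paper) illustrating the geometric tail of the return time to a small set for a chain satisfying a geometric drift condition; the AR(1) dynamics and the set C below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def hitting_time(x0, a=0.5, c=1.0, max_t=500):
    # AR(1) chain X_{t+1} = a*X_t + N(0,1); for a < 1 it satisfies a
    # geometric drift condition, and C = {|x| <= c} acts as a small set.
    x, t = x0, 0
    while abs(x) > c and t < max_t:
        x = a * x + rng.normal()
        t += 1
    return t

samples = np.array([hitting_time(5.0) for _ in range(20000)])
for t in range(1, 13):
    # The log of the tail should decrease roughly linearly in t,
    # i.e. P(tau_C > t) decays like beta**t for some beta < 1.
    print(t, (samples > t).mean())
```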
{ "cite_N": [ "@cite_18" ], "mid": [ "1547388036" ], "abstract": [ "In many applications of Markov chains, and especially in Markov chain Monte Carlo algorithms, the rate of convergence of the chain is of critical importance. Most techniques to establish such rates require bounds on the distribution of the random regeneration time T that can be constructed, via splitting techniques, at times of return to a \"small set\" C satisfying a minorisation condition P(x,·)[greater-or-equal, slanted][var epsilon][phi](·), x[set membership, variant]C. Typically, however, it is much easier to get bounds on the time [tau]C of return to the small set itself, usually based on a geometric drift function , where . We develop a new relationship between T and [tau]C, and this gives a bound on the tail of T, based on [var epsilon],[lambda] and b, which is a strict improvement on existing results. When evaluating rates of convergence we see that our bound usually gives considerable numerical improvement on previous expressions." ] }
1908.06459
2969194430
Let @math be a discrete time Markov chain on a general state space. It is well-known that if @math is aperiodic and satisfies a drift and minorization condition, then it converges to its stationary distribution @math at an exponential rate. We consider the problem of computing upper bounds for the distance from stationarity in terms of the drift and minorization data. Baxendale showed that these bounds improve significantly if one assumes that @math is reversible with nonnegative eigenvalues (i.e. its transition kernel is a self-adjoint operator on @math with spectrum contained in @math ). We identify this phenomenon as a special case of a general principle: for a reversible chain with nonnegative eigenvalues, any strong random time gives direct control over the convergence rate. We formulate this principle precisely and deduce from it a stronger version of Baxendale's result. Our approach is fully quantitative and allows us to convert drift and minorization data into explicit convergence bounds. We show that these bounds are tighter than those of Rosenthal and Baxendale when applied to a well-studied example.
The most important feature of Theorem and its @math -norm version, Theorem , is that the exponential rate @math is the same as the decay rate in Theorem . As we will see in Section , this conclusion can only be drawn for reversible Markov chains with nonnegative eigenvalues and does not hold in general. Baxendale @cite_11 was the first to observe this consequence of reversibility. For a key argument, he credits a comment of Meyn on a previous draft of @cite_11 . In the case @math , Theorem is very similar to Theorem 1.3 of @cite_11 : both theorems have the same hypotheses, and both prove @math -norm convergence of the chain with the same exponential rate @math .
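A standard counterexample (not taken from the cited works) makes the role of the nonnegative-spectrum assumption concrete: the two-state flip-flop chain is reversible with an eigenvalue of -1, its return times are deterministic, and yet it never converges.

```latex
% Two-state flip-flop: reversible w.r.t. pi = (1/2, 1/2), spectrum {1, -1}.
\[
  P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
  \qquad \operatorname{spec}(P) = \{1,\, -1\}.
\]
% The return time to the starting state is deterministically 2, yet
% \| P^t(x,\cdot) - \pi \|_{TV} = 1/2 for every t: perfectly controlled
% random times give no convergence rate once negative spectrum is allowed.
```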
{ "cite_N": [ "@cite_11" ], "mid": [ "2122337269" ], "abstract": [ "We give computable bounds on the rate of convergence of the transition probabilities to the stationary distribution for a certain class of geometrically ergodic Markov chains. Our results are dierent from earlier estimates of Meyn and Tweedie, and from estimates using coupling, although we start from essentially the same assumptions of a drift condition towards a “small set”. The estimates show a noticeable improvement on existing results if the Markov chain is reversible with respect to its stationary distribution, and especially so if the chain is also positive. The method of proof uses the first-entrance last-exit decomposition, together with new quantitative versions of a result of Kendall from discrete renewal theory." ] }
1908.06459
2969194430
Let @math be a discrete time Markov chain on a general state space. It is well-known that if @math is aperiodic and satisfies a drift and minorization condition, then it converges to its stationary distribution @math at an exponential rate. We consider the problem of computing upper bounds for the distance from stationarity in terms of the drift and minorization data. Baxendale showed that these bounds improve significantly if one assumes that @math is reversible with nonnegative eigenvalues (i.e. its transition kernel is a self-adjoint operator on @math with spectrum contained in @math ). We identify this phenomenon as a special case of a general principle: for a reversible chain with nonnegative eigenvalues, any strong random time gives direct control over the convergence rate. We formulate this principle precisely and deduce from it a stronger version of Baxendale's result. Our approach is fully quantitative and allows us to convert drift and minorization data into explicit convergence bounds. We show that these bounds are tighter than those of Rosenthal and Baxendale when applied to a well-studied example.
The method of proof in @cite_11 uses analytic properties of generating functions for renewal sequences. In principle the argument could be extended to the case @math , but the resulting bound on the exponential convergence rate of the chain would be worse than the rate @math from Theorem . Intuitively, this is because the law of @math might introduce artificial periodicity. Our approach using Theorem is probabilistic and puts all cases @math on the same footing.
{ "cite_N": [ "@cite_11" ], "mid": [ "2122337269" ], "abstract": [ "We give computable bounds on the rate of convergence of the transition probabilities to the stationary distribution for a certain class of geometrically ergodic Markov chains. Our results are dierent from earlier estimates of Meyn and Tweedie, and from estimates using coupling, although we start from essentially the same assumptions of a drift condition towards a “small set”. The estimates show a noticeable improvement on existing results if the Markov chain is reversible with respect to its stationary distribution, and especially so if the chain is also positive. The method of proof uses the first-entrance last-exit decomposition, together with new quantitative versions of a result of Kendall from discrete renewal theory." ] }
1908.06381
2967107032
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators and the ability of vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with a duration of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights respectively during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To our best knowledge this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
Several rotorcraft UAS that are capable of autonomous, long-duration mission execution in benign indoor (VICON) environments have previously appeared in the literature. Focusing on recharging solutions to extend individual platform flight time and on multi-agent schemes for constant operation, impressive operation times have been demonstrated ( @cite_28 : 24 h single-vehicle experiment; @cite_5 : 9.5 h with multiple vehicles). Recharging methods vary from wireless charging @cite_56 to contact-based charging pads @cite_5 to battery swap systems @cite_27 @cite_38 . While wireless charging offers the most flexibility since no physical contact has to be made, its charging currents are low, resulting in excessively long charging times; it is hence not an option for quick redeployment. Nevertheless, interesting results have been shown in @cite_56 , which demonstrates wireless powering of a 35 g, 10 W micro drone in hover flight. On the other end of the spectrum, battery swap systems offer immediate redeployment of a UAS, but they require sophisticated mechanisms to hot-swap a battery as well as a pool of charged batteries that are readily available. This makes such systems less attractive for maintenance-free and cost-effective long-term operation.
{ "cite_N": [ "@cite_38", "@cite_28", "@cite_56", "@cite_27", "@cite_5" ], "mid": [ "2126649543", "2104184073", "2621390870", "2089051263", "2089829108" ], "abstract": [ "Future Unmanned Aircraft Systems (UASs) are expected to be nearly autonomous and composed of heterogeneous Unmanned Aerial Vehicles (UAVs). While most of the current research focuses on UAV avionics and control algorithms, ground task automation has come to the attention of researchers during the past few years. Ground task automation not only relieves human operators, but may also expand the UAS operation area, improve system coverage and enable operation in risky environments without posing a threat to humans. We propose a model to evaluate the coverage of a given UAS. We also compare different solutions for various modules of an automated battery replacement system for UAVs. In addition, we propose a ground station capable of swapping a UAV's batteries, followed by a discussion of prototype components and tests of some of the prototype modules. The proposed platform is well-suited for high-coverage requirements and is capable of handling a heterogeneous UAV fleet.", "This paper presents the development and implementation of techniques used to manage autonomous unmanned aerial vehicles (UAVs) performing 24 7 persistent surveillance operations. Using an indoor flight testbed, flight test results are provided to demonstrate the complex issues encountered by operators and mission managers when executing an extended persistent surveillance operation in realtime. This paper presents mission health monitors aimed at identifying and improving mission system performance to avoid down time, increase mission system eciency and reduce operator loading. This paper discusses the infrastructure needed to execute an autonomous persistent surveillance operation and presents flight test results from one of our recent automated UAV recharging experiments. Using the RAVEN at MIT, we present flight test results from a 24 hr, fully-autonomous air vehicle flight-recharge test and an autonomous, multi-vehicle extended mission test using small, electric-powered air vehicles.", "Recent developments in high frequency inductive wireless power transfer (WPT) mean that the technology has reached a point where powering small autonomous drones has become feasible. Fundamentally, drones can only carry very limited payloads and thus require light-weight WPT receiver solutions. The key to achieving light weight is operating the WPT system at high frequency: this allows both the coils and the electronics to achieve very high gravametric power densities. When operated in the MHz region, the WPT coils can be manufactured without the need for ferrite, because the low coupling factor can be offset by very high coil Q factors. To make efficient MHz power conversion circuits, wide band-gap semiconductors, including Silicon Carbide (SiC) and Gallium Nitride (GaN) have provided a step change. For powering a drone, these devices are integrated into soft-switching resonant inverter and rectifier topologies and are able to operate efficiently at tens of MHz and hundreds of watts. In addition, recently discovered designs that make these inverters tolerant to load variations and have inherent voltage or current regulation features enable an WPT system to operate effectively as the separation distance and alignment between transmitter and receiver changes, which is critical when charging a flying drone. 
It will be shown that combining all these individual developments has enabled the charging of a 10 W micro-drone whilst hovering in the vicinity of a charging pad.", "", "The past decade has seen an increased interest towards research involving Autonomous Micro Aerial Vehicles (MAVs). The predominant reason for this is their agility and ability to perform tasks too difficult or dangerous for their human counterparts and to navigate into places where ground robots cannot reach. Among MAVs, rotary wing aircraft such as quadrotors have the ability to operate in confined spaces, hover at a given point in space and perch or land on a flat surface. This makes the quadrotor a very attractive aerial platform giving rise to a myriad of research opportunities. The potential of these aerial platforms is severely limited by the constraints on the flight time due to limited battery capacity. This in turn arises from limits on the payload of these rotorcraft. By automating the battery recharging process, creating autonomous MAVs that can recharge their on-board batteries without any human intervention and by employing a team of such agents, the overall mission time can be greatly increased. This paper describes the development, testing, and implementation of a system of autonomous charging stations for a team of Micro Aerial Vehicles. This system was used to perform fully autonomous long-term multi-agent aerial surveillance experiments with persistent station keeping. The scalability of the algorithm used in the experiments described in this paper was also tested by simulating a persistence surveillance scenario for 10 MAVs and charging stations. Finally, this system was successfully implemented to perform a 9½ hour multi-agent persistent flight test. Preliminary implementation of this charging system in experiments involving construction of cubic structures with quadrotors showed a three-fold increase in effective mission time." ] }
1908.06381
2967107032
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators and the ability of vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with a duration of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights respectively during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To our best knowledge this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
Our work uses a downward-facing monocular camera to estimate the pose of the landing pad in the world frame using AprilTag visual fiducial markers @cite_60 . We believe that a monocular camera is the cheapest, most lightweight, and most power-efficient sensor choice. By contrast, GPS and RTK-GPS systems suffer from precision degradation and signal loss in occluded urban or canyon environments @cite_42 . Laser range finder systems are heavy and consume considerable amounts of energy. Last but not least, stereo camera setups have limited range as a function of their baseline relative to the vehicle-to-pad distance.
{ "cite_N": [ "@cite_42", "@cite_60" ], "mid": [ "2509603109", "2565233142" ], "abstract": [ "Nowadays, quadrotor is playing a growing number of critical roles in a great many of aspects. A crucial ability is the autonomous landing ability onto a target such as a fixed platform, moving platform, or ship deck platform, employing on-board vision. Researching autonomous landing technologies employing vision for quadrotor have grown up to be the second active area of research over the past years. This article primarily gives the foremost research groups involved in the development of on-board vision autonomous landing technologies for quadrotor, then it discusses the specifics of two factors are taken into account for a smooth landing: sensing and control. The aim of this article is to examine the state-of-the art autonomous landing technologies employing on-board vision that capture entire milestones and seminal works. Last but not least, the article stands out three challenging parts in this research field.", "AprilTags and other passive fiducial markers require specialized algorithms to detect markers among other features in a natural scene. The vision processing steps generally dominate the computation time of a tag detection pipeline, so even small improvements in marker detection can translate to a faster tag detection system. We incorporated lessons learned from implementing and supporting the AprilTag system into this improved system. This work describes AprilTag 2, a completely redesigned tag detector that improves robustness and efficiency compared to the original AprilTag system. The tag coding scheme is unchanged, retaining the same robustness to false positives inherent to the coding system. The new detector improves performance with higher detection rates, fewer false positives, and lower computational time. Improved performance on small images allows the use of decimated input images, resulting in dramatic gains in detection speed." ] }
1908.06381
2967107032
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators and the ability of vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with a duration of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights respectively during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To our best knowledge this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
Several landing approaches exist in the literature for labeled and unlabeled landing sites. @cite_34 present a monocular visual landing method based on estimating the 6 DOF pose of a circled H marker. The same authors extend this work to enable autonomous landing site search by using a scale-corrected PTAM algorithm @cite_58 , and relax the landing site structure to an arbitrary but feature-rich image that is matched using the ORB algorithm @cite_8 . @cite_9 @cite_7 use SVO to estimate motion from a downward-facing monocular camera, build a probabilistic two-dimensional elevation map of unstructured terrain, and detect safe landing spots based on a score function. @cite_22 @cite_59 develop a fully self-contained visual landing navigation pipeline using a single camera. With application to landing in urban environments, the planar character of a rooftop is leveraged to perform landing site detection via a planar homography decomposition, using RANSAC to distinguish the ground plane from the elevated landing surface plane.
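A hedged sketch of this homography-based landing-surface detection, using standard OpenCV calls; the feature pipeline, thresholds, and function name are illustrative choices, not the cited authors' implementation.

```python
import cv2
import numpy as np

def detect_planar_landing_surface(img_prev, img_cur, K):
    # Match ORB features between two frames from the downward camera.
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_cur, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC fits the dominant plane; on a rooftop scene, the
    # inlier/outlier split separates the elevated surface from the ground.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # Decompose H into candidate (R, t, n) hypotheses for the plane,
    # given the camera intrinsics K.
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return H, inlier_mask, rotations, translations, normals
```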
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_8", "@cite_9", "@cite_58", "@cite_59", "@cite_34" ], "mid": [ "2157723537", "1602540331", "2117228865", "1970504153", "2151290401", "2086123859", "1997522477" ], "abstract": [ "Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.", "In this paper, we propose a resource-efficient system for real-time 3D terrain reconstruction and landing-spot detection for micro aerial vehicles. The system runs on an on-board smartphone processor and requires only the input of a single downlooking camera and an inertial measurement unit. We generate a two-dimensional elevation map that is probabilistic, of fixed size, and robot-centric, thus, always covering the area immediately underneath the robot. The elevation map is continuously updated at a rate of 1 Hz with depth maps that are triangulated from multiple views using recursive Bayesian estimation. To highlight the usefulness of the proposed mapping framework for autonomous navigation of micro aerial vehicles, we successfully demonstrate fully autonomous landing including landing-spot detection in real-world experiments.", "Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments how ORB is at two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smart phone.", "We propose a semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods. 
The semi-direct approach eliminates the need of costly feature extraction and robust matching techniques for motion estimation. Our algorithm operates directly on pixel intensities, which results in subpixel precision at high frame-rates. A probabilistic mapping method that explicitly models outlier measurements is used to estimate 3D points, which results in fewer outliers and more reliable points. Precise and high frame-rate motion estimation brings increased robustness in scenes of little, repetitive, and high-frequency texture. The algorithm is applied to micro-aerial-vehicle state-estimation in GPS-denied environments and runs at 55 frames per second on the onboard embedded computer and at more than 300 frames per second on a consumer laptop. We call our approach SVO (Semi-direct Visual Odometry) and release our implementation as open-source software.", "This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems.", "Direct-lift micro air vehicles have important applications in reconnaissance. In order to conduct persistent surveillance in urban environments, it is essential that these systems can perform autonomous landing maneuvers on elevated surfaces that provide high vantage points without the help of any external sensor and with a fully contained on-board software solution. In this paper, we present a micro air vehicle that uses vision feedback from a single down looking camera to navigate autonomously and detect an elevated landing platform as a surrogate for a roof top. Our method requires no special preparation (labels or markers) of the landing location. Rather, leveraging the planar character of urban structure, the landing platform detection system uses a planar homography decomposition to detect landing targets and produce approach waypoints for autonomous landing. The vehicle control algorithm uses a Kalman filter based approach for pose estimation to fuse visual SLAM (PTAM) position estimates with IMU data to correct for high latency SLAM inputs and to increase the position estimate update rate in order to improve control stability. Scale recovery is achieved using inputs from a sonar altimeter. In experimental runs, we demonstrate a real-time implementation running on-board a micro aerial vehicle that is fully self-contained and independent from any external sensor information. With this method, the vehicle is able to search autonomously for a landing location and perform precision landing maneuvers on the detected targets.", "In this paper, we present an onboard monocular vision system for autonomous takeoff, hovering and landing of a Micro Aerial Vehicle (MAV). 
Since pose information with metric scale is critical for autonomous flight of a MAV, we present a novel solution to six degrees of freedom (DOF) pose estimation. It is based on a single image of a typical landing pad which consists of the letter \"H\" surrounded by a circle. A vision algorithm for robust and real-time landing pad recognition is implemented. Then the 5 DOF pose is estimated from the elliptic projection of the circle by using projective geometry. The remaining geometric ambiguity is resolved by incorporating the gravity vector estimated by the inertial measurement unit (IMU). The last degree of freedom pose, yaw angle of the MAV, is estimated from the ellipse fitted from the letter \"H\". The efficiency of the presented vision system is demonstrated comprehensively by comparing it to ground truth data provided by a tracking system and by using its pose estimates as control inputs to autonomous flights of a quadrotor." ] }
1908.06381
2967107032
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators and the ability of vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with a duration of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights respectively during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To our best knowledge this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
Currently, one of the most popular visual fiducial detectors and patterns is the AprilTag algorithm @cite_15 , which is renowned for its speed, robustness, and extremely low false-positive detection rates. The algorithm was updated by @cite_60 to further improve computational efficiency and to enable the detection of smaller tags. AprilTag has been applied multiple times to MAV landing @cite_31 @cite_0 @cite_46 @cite_48 @cite_47 . Similar visual fiducials are seen around the landing zones of both Amazon and Google delivery drones @cite_45 @cite_29 , as well as of some surveying drones @cite_57 . Our approach is the same as that of @cite_31 . We improve over @cite_0 @cite_46 by using a bundle of several tags for improved landing pad pose measurement accuracy. @cite_48 @cite_47 appear to also use a tag bundle; however, they deal with a moving landing platform and feed individual tag pose detections into a Kalman filter. Our approach is to use a perspective- @math -point solution to obtain a single pose measurement using all tags at once.
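A sketch of estimating a single bundle pose from all detected tags at once via a perspective-n-point solve; the bundle layout, function names, and dictionary format below are hypothetical placeholders, not the paper's configuration.

```python
import cv2
import numpy as np

# Hypothetical bundle layout: known 3D tag-corner positions (meters) in
# the landing-pad frame, one (4, 3) array per tag id.
BUNDLE_LAYOUT = {
    0: np.array([[-0.10, -0.10, 0.0], [ 0.10, -0.10, 0.0],
                 [ 0.10,  0.10, 0.0], [-0.10,  0.10, 0.0]]),
    # ... one entry per tag in the bundle
}

def bundle_pose(detections, K, dist_coeffs):
    """detections: {tag_id: (4, 2) array of detected corner pixels}."""
    obj_pts, img_pts = [], []
    for tag_id, corners in detections.items():
        if tag_id in BUNDLE_LAYOUT:
            obj_pts.append(BUNDLE_LAYOUT[tag_id])
            img_pts.append(corners)
    obj_pts = np.concatenate(obj_pts).astype(np.float64)
    img_pts = np.concatenate(img_pts).astype(np.float64)

    # One PnP solve over all visible corners yields a single pad pose;
    # every additional detected tag adds constraints and improves accuracy.
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return ok, rvec, tvec
```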
{ "cite_N": [ "@cite_31", "@cite_47", "@cite_60", "@cite_46", "@cite_48", "@cite_29", "@cite_0", "@cite_57", "@cite_45", "@cite_15" ], "mid": [ "2962722232", "", "2565233142", "2546693991", "2949088561", "", "2286867507", "", "", "2125188427" ], "abstract": [ "Many unmanned aerial vehicle surveillance and monitoring applications require observations at precise locations over long periods of time, ideally days or weeks at a time (e.g. ecosystem monitoring), which has been impractical due to limited endurance and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous small rotorcraft UAS that is capable of performing repeated sorties for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy including emergency response to enable mission execution independently from human operators, and the ability of vision-based precision landing on a recharging station for automated energy replenishment. Experimental results of up to 11 hours of fully autonomous operation in indoor and outdoor environments illustrate the capability of our system.", "", "AprilTags and other passive fiducial markers require specialized algorithms to detect markers among other features in a natural scene. The vision processing steps generally dominate the computation time of a tag detection pipeline, so even small improvements in marker detection can translate to a faster tag detection system. We incorporated lessons learned from implementing and supporting the AprilTag system into this improved system. This work describes AprilTag 2, a completely redesigned tag detector that improves robustness and efficiency compared to the original AprilTag system. The tag coding scheme is unchanged, retaining the same robustness to false positives inherent to the coding system. The new detector improves performance with higher detection rates, fewer false positives, and lower computational time. Improved performance on small images allows the use of decimated input images, resulting in dramatic gains in detection speed.", "Nowadays, various unmanned aerial vehicle (UAV) applications become increasingly demanding since they require real-time, autonomous and intelligent functions. Towards this end, in the present study, a fully autonomous UAV scenario is implemented, including the tasks of area scanning, target recognition, geo-location, monitoring, following and finally landing on a high speed moving platform. The underlying methodology includes AprilTag target identification through Graphics Processing Unit (GPU) parallelized processing, image processing and several optimized locations and approach algorithms employing gimbal movement, Global Navigation Satellite System (GNSS) readings and UAV navigation. For the experimentation, a commercial and a custom made quad-copter prototype were used, portraying a high and a low-computational embedded platform alternative. Among the successful targeting and follow procedures, it is shown that the landing approach can be successfully performed even under high platform speeds.", "While autonomous multirotor micro aerial vehicles (MAVs) are uniquely well suited for certain types of missions benefiting from stationary flight capabilities, their more widespread usage still faces many hurdles, due in particular to their limited range and the difficulty of fully automating their deployment and retrieval. 
In this paper we address these issues by solving the problem of the automated landing of a quadcopter on a ground vehicle moving at relatively high speed. We present our system architecture, including the structure of our Kalman filter for the estimation of the relative position and velocity between the quadcopter and the landing pad, as well as our controller design for the full rendezvous and landing maneuvers. The system is experimentally validated by successfully landing in multiple trials a commercial quadcopter on the roof of a car moving at speeds of up to 50 km/h.", "", "", "", "", "While the use of naturally-occurring features is a central focus of machine perception, artificial features (fiducials) play an important role in creating controllable experiments, ground truthing, and in simplifying the development of systems where perception is not the central objective. We describe a new visual fiducial system that uses a 2D bar code style “tag”, allowing full 6 DOF localization of features from a single image. Our system improves upon previous systems, incorporating a fast and robust line detection system, a stronger digital coding system, and greater robustness to occlusion, warping, and lens distortion. While similar in concept to the ARTag system, our method is fully open and the algorithms are documented in detail." ] }
1908.06381
2967107032
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators and the ability of vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with a duration of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights respectively during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To our best knowledge this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
After obtaining raw tag pose estimates from the AprilTag detector, our approach uses a recursive least squares (RLS) filter to obtain a common tag bundle pose estimate. A relevant previous work on this topic is @cite_55 , in which a particle filter is applied to raw tag detections and RLS is compared to RANSAC for bundle pose estimation. Because the demonstrated accuracy improvement is marginal (about 1 cm in mean position and negligible in mean attitude) and RANSAC has disadvantages such as ad hoc threshold settings and a non-deterministic runtime, we prefer RLS for its simplicity. Nevertheless, RANSAC and its more advanced variations @cite_17 can be substituted into our implementation. Other work investigated fusing tag measurements in RGB space with a depth component @cite_25 , with impressive gains in accuracy. One can imagine this approach benefiting landing accuracy at low altitudes; however, mounting a downward-facing stereo camera on a drone raises several concerns such as weight, vibration effects, and space availability.
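A minimal recursive-least-squares smoother for a pose vector; this is a generic sketch with an identity measurement model and an assumed forgetting factor, not the authors' implementation.

```python
import numpy as np

class RLSPoseFilter:
    """RLS with forgetting for a 6-DOF pose vector (3 position + 3
    rotation-vector components); measurement model is z = x + noise."""

    def __init__(self, dim=6, forgetting=0.98):
        self.x = np.zeros(dim)       # filtered pose estimate
        self.P = np.eye(dim) * 1e3   # estimate covariance (large = uninformed)
        self.lam = forgetting        # lambda < 1 discounts stale measurements

    def update(self, z):
        I = np.eye(len(self.x))
        K = self.P @ np.linalg.inv(self.lam * I + self.P)  # RLS gain
        self.x = self.x + K @ (z - self.x)
        self.P = (I - K) @ self.P / self.lam
        return self.x

# Usage: feed each raw bundle pose measurement as a 6-vector, e.g.
# f = RLSPoseFilter(); smoothed = f.update(measured_pose_6vec)
```

The forgetting factor lets the filter track slow drift while averaging out per-frame detection noise, which is the main appeal of RLS over RANSAC here: deterministic runtime and no tuning of inlier thresholds.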
{ "cite_N": [ "@cite_55", "@cite_25", "@cite_17" ], "mid": [ "2549993168", "2771387334", "2242754697" ], "abstract": [ "Given the advancing importance for light-weight production materials an increase in automation is crucial. This paper presents a prototypical setup to obtain a precise pose estimation for an industrial manipulator in a realistic production environment. We show the achievable precision using only a standard fiducial marker system (AprilTag) and a state-of-the art camera attached to the robot. The results obtained in a typical working space of a robot cell of about 4.5m × 4.5m are in the range of 15mm to 35mm compared to ground truth provided by a laser tracker. We then show several methods of reducing this error by applying state-of-the-art optimization techniques, which reduce the error significantly to less than 10mm compared to the laser tracker ground truth data and at the same time remove e×isting outliers.", "Although there is an abundance of planar fiducial-marker systems proposed for augmented reality and computer-vision purposes, using them to estimate the pose accurately in robotic applications where collected data are noisy remains a challenge. This is inherently a difficult problem because these fiducial marker systems work solely within the RGB image space and the resolution of cameras on robots is often constrained. As a result, small noise in the image would cause the tag's detection process to produce large pose estimation errors. This paper describes an algorithm that improves the pose estimation accuracy of square fiducial markers in difficult scenes by fusing information from RGB and depth sensors. The algorithm retains the high detection rate and low false positive rate characteristics of fiducial systems while making them much more robust to size, lighting and sensory noise for pose estimation. The improvements make the fiducial tags suitable for robotic tasks requiring high pose accuracy in the real world environment.", "" ] }
1908.06381
2967107032
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators and the ability of vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with a duration of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights respectively during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To our best knowledge this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
Our approach, however, is not limited to AprilTags, which can be substituted or combined with other markers for specific applications. A vast number of markers is available, targeting different use cases @cite_42 @cite_34 @cite_12 @cite_15 . Patterns include letters, circles, concentric circles and/or polygons, 2-dimensional barcodes, and special patterns based on topological @cite_41 , detection-range-maximizing @cite_14 , and blurring occlusion robustness considerations @cite_53 .
{ "cite_N": [ "@cite_14", "@cite_41", "@cite_53", "@cite_42", "@cite_15", "@cite_34", "@cite_12" ], "mid": [ "2117064099", "2110321083", "2344366145", "2509603109", "2125188427", "1997522477", "2164139712" ], "abstract": [ "We describe the design and implementation of a fiducial marker system that encodes data in the frequency spectrum of a synthetic image. This distinctive approach to marker synthesis and data encoding allows for partial data extraction in adverse imaging conditions, and can significantly extend the detection range through graceful data degradation. Additional digital encoding and image construction techniques are used to increase the payload capacity, and also to store 3-D pose information in each fiducial marker. This fiducial marker scheme can be configured to match the needs of the target application. We present several experiments investigating the practical range of various parameters as well as the performance of a specific instance of the system.", "This paper describes reacTIVision: a camera based two dimensional fiducial (marker) tracking system developed for the reacTable*, a table based tangible musical instrument. Key features of reacTIVision include: (1) the ability to track a large number of fiducials with faster than real-time performance and (2) fiducial size may be varied depending on the number of distinct fiducial identities required. We describe recent advances in our implementation of topologybased fiducial recognition, including a generalised method for accurately computing fiducial location and orientation. A method of graph naming known as left heavy depth sequences is applied to the identification of topologically distinct fiducials. Also discussed is our approach to generating fiducial images for the system, in which we employ evolutionary computation to produce compact fiducials with specific geometric properties.", "Artificial markers are successfully adopted to solve several vision tasks, ranging from tracking to calibration. While most designs share the same working principles, many specialized approaches exist to address specific application domains. Some are specially crafted to boost pose recovery accuracy. Others are made robust to occlusion or easy to detect with minimal computational resources. The sheer amount of approaches available in recent literature is indeed a statement to the fact that no silver bullet exists. Furthermore, this is also a hint to the level of scholarly interest that still characterizes this research topic. With this paper we try to add a novel option to the offer, by introducing a general purpose fiducial marker which exhibits many useful properties while being easy to implement and fast to detect. The key ideas underlying our approach are three. The first one is to exploit the projective invariance of conics to jointly find the marker and set a reading frame for it. Moreover, the tag identity is assessed by a redundant cyclic coded sequence implemented using the same circular features used for detection. Finally, the specific design and feature organization of the marker are well suited for several practical tasks, ranging from camera calibration to information payload delivery.", "Nowadays, quadrotor is playing a growing number of critical roles in a great many of aspects. A crucial ability is the autonomous landing ability onto a target such as a fixed platform, moving platform, or ship deck platform, employing on-board vision. 
Researching autonomous landing technologies employing vision for quadrotor have grown up to be the second active area of research over the past years. This article primarily gives the foremost research groups involved in the development of on-board vision autonomous landing technologies for quadrotor, then it discusses the specifics of two factors are taken into account for a smooth landing: sensing and control. The aim of this article is to examine the state-of-the art autonomous landing technologies employing on-board vision that capture entire milestones and seminal works. Last but not least, the article stands out three challenging parts in this research field.", "While the use of naturally-occurring features is a central focus of machine perception, artificial features (fiducials) play an important role in creating controllable experiments, ground truthing, and in simplifying the development of systems where perception is not the central objective. We describe a new visual fiducial system that uses a 2D bar code style “tag”, allowing full 6 DOF localization of features from a single image. Our system improves upon previous systems, incorporating a fast and robust line detection system, a stronger digital coding system, and greater robustness to occlusion, warping, and lens distortion. While similar in concept to the ARTag system, our method is fully open and the algorithms are documented in detail.", "In this paper, we present an onboard monocular vision system for autonomous takeoff, hovering and landing of a Micro Aerial Vehicle (MAV). Since pose information with metric scale is critical for autonomous flight of a MAV, we present a novel solution to six degrees of freedom (DOF) pose estimation. It is based on a single image of a typical landing pad which consists of the letter \"H\" surrounded by a circle. A vision algorithm for robust and real-time landing pad recognition is implemented. Then the 5 DOF pose is estimated from the elliptic projection of the circle by using projective geometry. The remaining geometric ambiguity is resolved by incorporating the gravity vector estimated by the inertial measurement unit (IMU). The last degree of freedom pose, yaw angle of the MAV, is estimated from the ellipse fitted from the letter \"H\". The efficiency of the presented vision system is demonstrated comprehensively by comparing it to ground truth data provided by a tracking system and by using its pose estimates as control inputs to autonomous flights of a quadrotor.", "Fiducial marker systems consist of patterns that are mounted in the environment and automatically detected in digital camera images using an accompanying detection algorithm. They are useful for augmented reality (AR), robot navigation, and general applications where the relative pose between a camera and object is required. Important parameters for such marker systems is their false detection rate (false positive rate), their inter-marker confusion rate, minimal detection size (in pixels) and immunity to lighting variation. ARTag is a marker system that uses digital coding theory to get a very low false positive and inter-marker confusion rate with a small required marker size, employing an edge linking method to give robust lighting variation immunity. ARTag markers are bi-tonal planar patterns containing a unique ID number encoded with robust digital techniques of checksums and forward error correction (FEC). 
This proposed new system, ARTag has very low and numerically quantifiable error rates, does not require a grey scale threshold as does other marker systems, and can encode up to 2002 different unique ID's with no need to store patterns. Experimental results are shown validating this system." ] }
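As a side note on the coding-theory designs mentioned above (checksums and forward error correction in ARTag-style markers), the sketch below illustrates the quantity such designs maximize: the minimum Hamming distance between marker ID codewords. The bit strings are made-up examples, not a real marker family.

```python
from itertools import combinations

def min_pairwise_hamming(codes):
    """Minimum Hamming distance between any two marker ID codewords;
    the larger it is, the more bit errors can occur before one marker
    is confused with another (d_min - 1 errors are detectable)."""
    return min(sum(a != b for a, b in zip(c1, c2))
               for c1, c2 in combinations(codes, 2))

family = ["101100", "010011", "110010", "001101"]  # made-up 6-bit family
d_min = min_pairwise_hamming(family)
```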
1908.06267
2968913877
Most graph neural networks can be described in terms of message passing, vertex update, and readout functions. In this paper, we represent documents as word co-occurrence networks and propose an application of the message passing framework to NLP, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Experiments conducted on 10 standard text classification datasets show that our architectures are competitive with the state-of-the-art. Ablation studies reveal further insights about the impact of the different components on performance. Code and data are publicly available.
There are significant differences between @cite_51 and our work. First, our approach is inductive, not transductive (note that other GNNs used in inductive settings can be found in @cite_10 @cite_52 ). Indeed, while the node classification approach of @cite_51 requires all test documents at training time, our graph classification model is able to perform inference on new, never-seen documents. The downside of representing documents as separate graphs, however, is that we lose the ability to capture corpus-level dependencies. Also, our directed graphs capture word ordering, which is ignored by @cite_51 . Finally, the approach of @cite_51 requires computing the PMI for every word pair in the vocabulary, which may be prohibitive on datasets with very large vocabularies. On the other hand, the complexity of MPAD does not depend on vocabulary size.
{ "cite_N": [ "@cite_10", "@cite_51", "@cite_52" ], "mid": [ "2962767366", "2891761261", "2766453196" ], "abstract": [ "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.", "Text classification is an important and classical problem in natural language processing. There have been a number of studies that applied convolutional neural networks (convolution on regular grid, e.g., sequence) to classification. However, only a limited number of studies have explored the more flexible graph convolutional neural networks (convolution on non-grid, e.g., arbitrary graph) for the task. In this work, we propose to use graph convolutional networks for text classification. We build a single text graph for a corpus based on word co-occurrence and document word relations, then learn a Text Graph Convolutional Network (Text GCN) for the corpus. Our Text GCN is initialized with one-hot representation for word and document, it then jointly learns the embeddings for both words and documents, as supervised by the known class labels for documents. Our experimental results on multiple benchmark datasets demonstrate that a vanilla Text GCN without any external word embeddings or knowledge outperforms state-of-the-art methods for text classification. On the other hand, Text GCN also learns predictive word and document embeddings. In addition, experimental results show that the improvement of Text GCN over state-of-the-art comparison methods become more prominent as we lower the percentage of training data, suggesting the robustness of Text GCN to less training data in text classification.", "We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. 
Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training)." ] }
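To illustrate the inductive, order-aware document representation contrasted with @cite_51 above, here is a minimal sketch of building a directed word co-occurrence graph for a single document. The sliding-window size is an illustrative choice, not the paper's setting.

```python
from collections import defaultdict

def doc_to_graph(tokens, window=2):
    """Directed, weighted word co-occurrence graph for one document:
    an edge u -> v (weighted by count) is added whenever v follows u
    within the sliding window, so word ordering is preserved."""
    edges = defaultdict(int)
    for i, u in enumerate(tokens):
        for v in tokens[i + 1 : i + 1 + window]:
            edges[(u, v)] += 1
    return sorted(set(tokens)), dict(edges)

nodes, edges = doc_to_graph("to be or not to be".split())
# edges[("to", "be")] == 2: both ordering and frequency are captured,
# and each document yields its own graph, enabling inductive inference.
```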
1908.06203
2967453594
External knowledge is often useful for natural language understanding tasks. We introduce a contextual text representation model called Conceptual-Contextual (CC) embeddings, which incorporates structured knowledge into text representations. Unlike entity embedding methods, our approach encodes a knowledge graph into a context model. CC embeddings can be easily reused for a wide range of tasks just like pre-trained language models. Our model effectively encodes the huge UMLS database by leveraging semantic generalizability. Experiments on electronic health records (EHRs) and medical text processing benchmarks showed our model gives a major boost to the performance of supervised medical NLP tasks.
In recent years a number of KB embedding models have been proposed that aim at learning entity embeddings on a knowledge graph @cite_18 . Some models make use of textual information in KBs to improve entity embeddings, for example by using textual descriptions of entities as a complement to triplet modeling @cite_30 @cite_5 , or by jointly learning structure-based embeddings and description-based embeddings @cite_6 . The latter approach learns an encoder, which is similar to our work, but the encoder is only used to encode entity descriptions. These approaches are mainly concerned with KB representations rather than text processing. Using text also allows for inductive and zero-shot @cite_9 entity representations, a feature our model shares.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_9", "@cite_6", "@cite_5" ], "mid": [ "2571811098", "2963224980", "2315403234", "2499696929", "" ], "abstract": [ "Learning the representations of a knowledge graph has attracted significant research interest in the field of intelligent Web. By regarding each relation as one translation from head entity to tail entity, translation-based methods including TransE, TransH and TransR are simple, effective and achieving the state-of-the-art performance. However, they still suffer the following issues: (i) low performance when modeling 1-to-N, N-to-1 and N-to-N relations. (ii) limited performance due to the structure sparseness of the knowledge graph. In this paper, we propose a novel knowledge graph representation learning method by taking advantage of the rich context information in a text corpus. The rich textual context information is incorporated to expand the semantic structure of the knowledge graph and each relation is enabled to own different representations for different head and tail entities to better handle 1-to-N, N-to-1 and N-to-N relations. Experiments on multiple benchmark datasets show that our proposed method successfully addresses the above issues and significantly outperforms the state-of-the-art methods.", "Graph is an important data representation which appears in a wide diversity of real-world scenarios. Effective graph analytics provides users a deeper understanding of what is behind the data, and thus can benefit a lot of useful applications such as node classification, node recommendation, link prediction, etc. However, most graph analytics methods suffer the high computation and space cost. Graph embedding is an effective yet efficient way to solve the graph analytics problem. It converts the graph data into a low dimensional space in which the graph structural information and graph properties are maximumly preserved. In this survey, we conduct a comprehensive review of the literature in graph embedding. We first introduce the formal definition of graph embedding as well as the related concepts. After that, we propose two taxonomies of graph embedding which correspond to what challenges exist in different graph embedding problem settings and how the existing work addresses these challenges in their solutions. Finally, we summarize the applications that graph embedding enables and suggest four promising future research directions in terms of computation efficiency, problem settings, techniques, and application scenarios.", "We present a semi-supervised learning framework based on graph embeddings. Given a graph between instances, we train an embedding for each instance to jointly predict the class label and the neighborhood context in the graph. We develop both transductive and inductive variants of our method. In the transductive variant of our method, the class labels are determined by both the learned embeddings and input feature vectors, while in the inductive variant, the embeddings are defined as a parametric function of the feature vectors, so predictions can be made on instances not seen during training. On a large and diverse set of benchmark tasks, including text classification, distantly supervised entity extraction, and entity classification, we show improved performance over many of the existing models.", "Representation learning (RL) of knowledge graphs aims to project both entities and relations into a continuous low-dimensional space. 
Most methods concentrate on learning representations with knowledge triples indicating relations between entities. In fact, in most knowledge graphs there are usually concise descriptions for entities, which cannot be well utilized by existing methods. In this paper, we propose a novel RL method for knowledge graphs taking advantages of entity descriptions. More specifically, we explore two encoders, including continuous bag-of-words and deep convolutional neural models to encode semantics of entity descriptions. We further learn knowledge representations with both triples and descriptions. We evaluate our method on two tasks, including knowledge graph completion and entity classification. Experimental results on real-world datasets show that, our method outperforms other baselines on the two tasks, especially under the zero-shot setting, which indicates that our method is capable of building representations for novel entities according to their descriptions. The source code of this paper can be obtained from https: github.com xrb92 DKRL.", "" ] }
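For readers unfamiliar with the translation-based KB embedding family (TransE and its variants) referenced above, the following generic sketch shows the triple-scoring idea and a standard margin ranking loss. It is a sketch of that family, not the CC embedding model itself, and all dimensions are illustrative.

```python
import numpy as np

def transe_score(h, r, t):
    """Plausibility of a triple (head, relation, tail): high when the
    tail embedding lies close to head + relation (TransE-style)."""
    return -np.linalg.norm(h + r - t, ord=1)

def margin_loss(pos_score, neg_score, margin=1.0):
    """Margin ranking loss pushing a corrupted (negative) triple's score
    below the true triple's score by at least the margin."""
    return max(0.0, margin - pos_score + neg_score)

rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=50) for _ in range(3))
t_corrupt = rng.normal(size=50)                       # random fake tail
loss = margin_loss(transe_score(h, r, t), transe_score(h, r, t_corrupt))
```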
1908.06203
2967453594
External knowledge is often useful for natural language understanding tasks. We introduce a contextual text representation model called Conceptual-Contextual (CC) embeddings, which incorporates structured knowledge into text representations. Unlike entity embedding methods, our approach encodes a knowledge graph into a context model. CC embeddings can be easily reused for a wide range of tasks just like pre-trained language models. Our model effectively encodes the huge UMLS database by leveraging semantic generalizability. Experiments on electronic health records (EHRs) and medical text processing benchmarks showed our model gives a major boost to the performance of supervised medical NLP tasks.
Recently, contextual text representation models like ELMo, BERT @cite_10 and OpenAI GPT @cite_8 @cite_28 have pushed the state-of-the-art results of various NLP tasks. Language modeling on a giant corpus learns powerful representations, which provide huge benefits to supervised tasks, especially where labeled data is scarce. These models use sequential or attention networks to generate word representations in context. In the biomedical domain, there is also BioBERT @cite_25 , a BERT model trained on PubMed abstracts and PubMed Central full-text articles, offering results competitive with state-of-the-art models on medical text processing tasks.
{ "cite_N": [ "@cite_28", "@cite_10", "@cite_25", "@cite_8" ], "mid": [ "", "2896457183", "2911489562", "2250930514" ], "abstract": [ "", "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7 (4.6 absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).", "Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in machine learning, extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, as deep learning models require a large amount of training data, applying deep learning to biomedical text mining is often unsuccessful due to the lack of training data in biomedical fields. Recent researches on training contextualized language representation models on text corpora shed light on the possibility of leveraging a large number of unannotated biomedical text corpora. We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain specific language representation model pre-trained on large-scale biomedical corpora. Based on the BERT architecture, BioBERT effectively transfers the knowledge from a large amount of biomedical texts to biomedical text mining models with minimal task-specific architecture modifications. While BERT shows competitive performances with previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.51 absolute improvement), biomedical relation extraction (3.49 absolute improvement), and biomedical question answering (9.61 absolute improvement). We make the pre-trained weights of BioBERT freely available at this https URL, and the source code for fine-tuning BioBERT available at this https URL.", "Word embeddings learned on unlabeled data are a popular tool in semantics, but may not capture the desired semantics. We propose a new learning objective that incorporates both a neural language model objective (, 2013) and prior knowledge from semantic resources to learn improved lexical semantic embeddings. We demonstrate that our embeddings improve over those learned solely on raw text in three settings: language modeling, measuring semantic similarity, and predicting human judgements." ] }
1908.05976
2967964179
Graph drawing and visualisation techniques are important tools for the exploratory analysis of complex systems. While these methods are regularly applied to visualise data on complex networks, we increasingly have access to time series data that can be modelled as temporal networks or dynamic graphs. In such dynamic graphs, the temporal ordering of time-stamped edges determines the causal topology of a system, i.e. which nodes can directly and indirectly influence each other via a so-called causal path. While this causal topology is crucial to understand dynamical processes, the role of nodes, or cluster structures, we lack graph drawing techniques that incorporate this information into static visualisations. Addressing this gap, we present a novel dynamic graph drawing algorithm that utilises higher-order graphical models of causal paths in time series data to compute time-aware static graph visualisations. These visualisations combine the simplicity of static graphs with a time-aware layout algorithm that highlights patterns in the causal topology that result from the temporal dynamics of edges.
Having motivated the effects that are due to the arrow of time in dynamic graphs, we review related work on dynamic graph drawing. Using the taxonomy from @cite_35 , we categorize those works into (i) animation techniques that map the time dimension of dynamic graphs onto a time dimension of the resulting visualisation, and (ii) time-line representations that map the temporal evolution of dynamic graphs to a spatial dimension. We present methods only insofar as they are relevant to our work, and refer the reader to @cite_9 @cite_35 for a detailed review.
{ "cite_N": [ "@cite_35", "@cite_9" ], "mid": [ "2287322623", "2255903209" ], "abstract": [ "Dynamic graph visualization focuses on the challenge of representing the evolution of relationships between entities in readable, scalable and effective diagrams. This work surveys the growing number of approaches in this discipline. We derive a hierarchical taxonomy of techniques by systematically categorizing and tagging publications. While static graph visualizations are often divided into node-link and matrix representations, we identify the representation of time as the major distinguishing feature for dynamic graph visualizations: either graphs are represented as animated diagrams or as static charts based on a timeline. Evaluations of animated approaches focus on dynamic stability for preserving the viewer's mental map or, in general, compare animated diagrams to timeline-based ones. A bibliographic analysis provides insights into the organization and development of the field and its community. Finally, we identify and discuss challenges for future research. We also provide feedback from experts, collected with a questionnaire, which gives a broad perspective of these challenges and the current state of the field.", "Dynamic graph visualization focuses on the challenge of representing the evolution of relationships between en- tities in readable, scalable, and effective diagrams. This work surveys the growing number of approaches in this discipline. We derive a hierarchical taxonomy of techniques by systematically categorizing and tagging publica- tions. While static graph visualizations are often divided into node-link and matrix representations, we identify the representation of time as the major distinguishing feature for dynamic graph visualizations: either graphs are represented as animated diagrams or as static charts based on a timeline. Evaluations of animated approaches focus on dynamic stability for preserving the viewer's mental map or, in general, compare animated diagrams to timeline-based ones. Finally, we identify and discuss challenges for future research." ] }
1908.05976
2967964179
Graph drawing and visualisation techniques are important tools for the exploratory analysis of complex systems. While these methods are regularly applied to visualise data on complex networks, we increasingly have access to time series data that can be modelled as temporal networks or dynamic graphs. In such dynamic graphs, the temporal ordering of time-stamped edges determines the causal topology of a system, i.e. which nodes can directly and indirectly influence each other via a so-called causal path. While this causal topology is crucial to understand dynamical processes, the role of nodes, or cluster structures, we lack graph drawing techniques that incorporate this information into static visualisations. Addressing this gap, we present a novel dynamic graph drawing algorithm that utilises higher-order graphical models of causal paths in time series data to compute time-aware static graph visualisations. These visualisations combine the simplicity of static graphs with a time-aware layout algorithm that highlights patterns in the causal topology that result from the temporal dynamics of edges.
Apart from the issue that animations are cognitively demanding, additional challenges arise in data with high temporal resolution (e.g. seconds or even milliseconds), where a single vertex or edge is likely to be active in each time stamp. The application of static graph drawing techniques to such data requires a coarse-graining of time into time slices, such that each time slice gives rise to a graph snapshot that can be visualised using, e.g., force-directed layout algorithms. As pointed out in @cite_30 , this coarse-graining of time into time slices leads to a loss of information on causal paths, and few dynamic graph drawing techniques have specifically addressed this issue.
{ "cite_N": [ "@cite_30" ], "mid": [ "2751016934" ], "abstract": [ "Timeslices are often used to draw and visualize dynamic graphs. While timeslices are a natural way to think about dynamic graphs, they are routinely imposed on continuous data. Often, it is unclear how many timeslices to select: too few timeslices can miss temporal features such as causality or even graph structure while too many timeslices slows the drawing computation. We present a model for dynamic graphs which is not based on timeslices, and a dynamic graph drawing algorithm, DynNoSlice, to draw graphs in this model. In our evaluation, we demonstrate the advantages of this approach over timeslicing on continuous data sets." ] }
1908.05976
2967964179
Graph drawing and visualisation techniques are important tools for the exploratory analysis of complex systems. While these methods are regularly applied to visualise data on complex networks, we increasingly have access to time series data that can be modelled as temporal networks or dynamic graphs. In such dynamic graphs, the temporal ordering of time-stamped edges determines the causal topology of a system, i.e. which nodes can directly and indirectly influence each other via a so-called causal path. While this causal topology is crucial to understand dynamical processes, the role of nodes, or cluster structures, we lack graph drawing techniques that incorporate this information into static visualisations. Addressing this gap, we present a novel dynamic graph drawing algorithm that utilises higher-order graphical models of causal paths in time series data to compute time-aware static graph visualisations. These visualisations combine the simplicity of static graphs with a time-aware layout algorithm that highlights patterns in the causal topology that result from the temporal dynamics of edges.
Despite the advances outlined above, recognizing patterns in animation-based visualisations of dynamic graphs remains a considerable cognitive challenge for users. Moreover, it is difficult to embed dynamic graph animations into scholarly articles, books, or posters, which often limits their use in science and engineering to illustrative supplementary material. Addressing these issues, a second line of research focuses on methods to visualize dynamic graphs in terms of time-line representations, which map the time dimension of dynamic graphs to a spatial dimension that can be embedded into a static visualisation. Examples include widely-used directed acyclic, time-unfolded graph representations of dynamic graphs @cite_7 @cite_27 , arc- and radial-based timeline layouts @cite_28 @cite_4 @cite_31 , sequences of layered adjacencies @cite_54 , and stacked 3D representations where consecutive time slices are arranged along a third dimension @cite_2 . While recent works have proposed circular representations that scale to larger time series @cite_51 , the application of visualisations that map time to a spatial dimension is limited to a moderately large number of time stamps. Moreover, to the best of our knowledge, the effects of the chronological order of edges on the causal topology have not been considered in static visualisations of dynamic graphs.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_28", "@cite_54", "@cite_27", "@cite_2", "@cite_31", "@cite_51" ], "mid": [ "2100786833", "2086991377", "2099864932", "2075916997", "1982413833", "2467385561", "2559900691", "2009664208" ], "abstract": [ "The evolution of dependencies in information hierarchies can be modeled by sequences of compound digraphs with edge weights. In this paper we present a novel approach to visualize such sequences of graphs. It uses radial tree layout to draw the hierarchy, and circle sectors to represent the temporal change of edges in the digraphs. We have developed several interaction techniques that allow the users to explore the structural and temporal data. Smooth animations help them to track the transitions between views. The usefulness of the approach is illustrated by examples from very different application domains.", "Many network problems are based on fundamental relationships involving time. Consider, for example, the problems of modeling the flow of information through a distributed network, studying the spread of a disease through a population, or analyzing the reachability properties of an airline timetable. In such settings, a natural model is that of a graph in which each edge is annotated with a time label specifying the time at which its endpoints \"communicated.\" We will call such a graph a temporal network. To model the notion that information in such a network \"flows\" only on paths whose labels respect the ordering of time, we call a path time-respecting if the time labels on its edges are non-decreasing. The central motivation for our work is the following question: how do the basic combinatorial and algorithmic properties of graphs change when we impose this additional temporal condition? The notion of a path is intrinsic to many of the most fundamental algorithmic problems on graphs; spanning trees, connectivity, flows, and cuts are some examples. When we focus on time-respecting paths in place of arbitrary paths, many of these problems acquire a character that is different from the traditional setting, but very rich in its own right. We provide results on two types of problems for temporal networks. First, we consider connectivity problems, in which we seek disjoint time-respecting paths between pairs of nodes. The natural analogue of Menger's Theorem for node-disjoint paths fails in general for time-respecting paths; we give a non-trivial characterization of those graphs for which the theorem does hold in terms of an excluded subdivision theorem, and provide a polynomial-time algorithm for connectivity on this class of graphs. (The problem on general graphs is NP-complete.) We then define and study the class of inference problems, in which we seek to reconstruct a partially specified time labeling of a network in a manner consistent with an observed history of information flow.", "Compound digraphs are a widely used model in computer science. In many application domains these models evolve over time. Only few approaches to visualize such dynamic compound digraphs exist and mostly use animation to show the dynamics. In this paper we present a new visualization tool called TimeArcTrees that visualizes weighted, dynamic compound digraphs by drawing a sequence of node-link diagrams in a single view. Compactness is achieved by aligning the nodes of a graph vertically. Edge crossings are reduced by drawing upward and downward edges separately as colored arcs. 
Horizontal alignment of the instances of the same node in different graphs facilitates comparison of the graphs in the sequence. Many interaction techniques allow to explore the given graphs. Smooth animation supports the user to better track the transitions between views and to preserve his or her mental map. We illustrate the usefulness of the tool by looking at the particular problem of how shortest paths evolve over time. To this end, we applied the system to an evolving graph representing the German Autobahn and its traffic jams.", "We propose a novel radial layered matrix visualization for dynamic directed weighted graphs in which the vertices can also be hierarchically organized. Edges are represented as color-coded arcs within the radial diagram. Their positions are defined by polar coordinates instead of Cartesian coordinates as in traditional adjacency matrix representations: the angular position of an edge within an annulus is given by the angle bisector of the two related vertices, the radial position depends linearly on the angular distance between these vertices. The exploration of time-varying relational data is facilitated by aligning graph patterns radially. Furthermore, our approach incorporates several interaction techniques to explore dynamic patterns such as trends and countertrends. The usefulness is illustrated by two case studies analyzing large dynamic call graphs acquired from open source software projects.", "We study correlations in temporal networks and introduce the notion of betweenness preference. It allows to quantify to what extent paths, existing in time-aggregated representations of temporal networks, are actually realizable based on the sequence of interactions. We show that betweenness preference is present in empirical temporal network data and that it influences the length of shortest time-respecting paths. Using four different data sets, we further argue that neglecting betweenness preference leads to wrong conclusions about dynamical processes on temporal networks.", "In this paper we consider the problem of drawing and displaying a series of related graphs, i.e., graphs that share all, or parts of the same vertex set. We designed and implemented three different algorithms for simultaneous graph drawing and three different visualization schemes. The algorithms are based on a modification of the force-directed algorithm that allows us to take into account vertex weights and edge weights in order to achieve mental map preservation while obtaining individually readable drawings. The implementation is in Java and the system can be downloaded at http : sing. cs. arizona. edu .", "University of TrierKeywords: Graph visualization, node-link diagrams, radial layout, splattingAbstract: We describe and discuss a novel radial version of a scalable dynamic graph visualization. The radial layoutencodes dynamic directed graphs on narrow rings of a circle. The temporal evolution of the graph is mapped torings that grow outward from the center of the circle. Graph vertices are placed equidistantly at the borderlinesof each ring. Graph edges are displayed as curved lines starting from a source on the inner borderline of thering and pointing to a target on the outer borderline. To better perceive link directions and structures of largedatasets, visual clutter is reduced by exploiting an edge splatting approach that generates density fields of thedisplayed edges. 
The radial layout emphasizes newer graphs, displayed in the larger, outer parts of the circle.As a benefit, edge lengths are reduced in comparison to the non-radial visualization. Moreover, the radiallayout guarantees the symmetry of the visualization under shifting of vertices. We illustrate the usefulness ofthe diagrams by applying them to call graph data of the open source software project Cobertura.", "Networks are present in many fields such as finance, sociology, and transportation. Often these networks are dynamic: they have a structural as well as a temporal aspect. In addition to relations occurring over time, node information is frequently present such as hierarchical structure or time-series data. We present a technique that extends the Massive Sequence View ( msv) for the analysis of temporal and structural aspects of dynamic networks. Using features in the data as well as Gestalt principles in the visualization such as closure, proximity, and similarity, we developed node reordering strategies for the msv to make these features stand out that optionally take the hierarchical node structure into account. This enables users to find temporal properties such as trends, counter trends, periodicity, temporal shifts, and anomalies in the network as well as structural properties such as communities and stars. We introduce the circular msv that further reduces visual clutter. In addition, the (circular) msv is extended to also convey time-series data associated with the nodes. This enables users to analyze complex correlations between edge occurrence and node attribute changes. We show the effectiveness of the reordering methods on both synthetic and a rich real-world dynamic network data set." ] }
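As a concrete illustration of the first family of timeline representations cited above, here is a minimal sketch of the directed acyclic time-unfolded construction; unit time steps are an assumption made for brevity.

```python
def time_unfolded(edges):
    """Directed acyclic time-unfolded representation: node v at time t
    becomes vertex (v, t), and edge (u, v, t) becomes an arc from (u, t)
    to (v, t + 1), so time maps onto a spatial layer index."""
    arcs = [((u, t), (v, t + 1)) for u, v, t in edges]
    layers = sorted({t for _, _, t in edges} | {t + 1 for _, _, t in edges})
    return arcs, layers

arcs, layers = time_unfolded([("a", "b", 1), ("b", "c", 2)])
# (a,1)->(b,2) and (b,2)->(c,3): the time-respecting path a -> b -> c
# becomes a visible chain across layers 1, 2, 3.
```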
1908.05947
2968302391
Style is ubiquitous in our daily language uses, while what is language style to learning machines? In this paper, by exploiting the second-order statistics of semantic vectors of different corpora, we present a novel perspective on this question via style matrix, i.e. the covariance matrix of semantic vectors, and explain for the first time how Sequence-to-Sequence models encode style information innately in its semantic vectors. As an application, we devise a learning-free text style transfer algorithm, which explicitly constructs a pair of transfer operators from the style matrices for style transfer. Moreover, our algorithm is also observed to be flexible enough to transfer out-of-domain sentences. Extensive experimental evidence justifies the informativeness of style matrix and the competitive performance of our proposed style transfer algorithm with the state-of-the-art methods.
Similarly, image style transfer aims to reconstruct an image with certain characteristics of a style image while preserving its content. The groundbreaking works of @cite_0 @cite_18 show that the Gram matrices (or covariance matrices) of the feature maps, which are extracted by a frozen convolutional neural network trained on a classification task, are able to capture the visual style of an image. Numerous works have since been developed by matching the Gram matrices, and @cite_16 theoretically proves that this is equivalent to minimizing the Maximum Mean Discrepancy with a second-order polynomial kernel. Since @cite_18 synthesizes the target image through iterative optimization, which is inefficient and time-consuming, @cite_6 @cite_11 propose to train one feed-forward network per style, significantly speeding up image reconstruction. @cite_2 @cite_14 further improve flexibility and efficiency by incorporating multiple styles into a single network. In order to reconstruct images in arbitrary styles (unseen at the training stage) with a single network forward pass, @cite_1 proposes whitening and coloring transformations to directly match the feature covariance of the content image to that of the style image at intermediate layers of a pre-trained auto-encoder network.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_1", "@cite_6", "@cite_0", "@cite_2", "@cite_16", "@cite_11" ], "mid": [ "2475287302", "", "", "2572730214", "2964193438", "2604737827", "2952008036", "2331128040" ], "abstract": [ "Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.", "", "", "The recent work of , who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems. While their image generation technique uses a slow optimization process, recently several authors have proposed to learn generator neural networks that can produce similar outputs in one quick forward pass. While generator networks are promising, they are still inferior in visual quality and diversity compared to generation-by-optimization. In this work, we advance them in two significant ways. First, we introduce an instance normalization module to replace batch normalization with significant improvements to the quality of image stylization. Second, we improve diversity by introducing a new learning formulation that encourages generators to sample unbiasedly from the Julesz texture ensemble, which is the equivalence class of all images characterized by certain filter responses. Together, these two improvements take feed forward texture synthesis and image stylization much closer to the quality of generation-via-optimization, while retaining the speed advantage.", "Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition. Samples from the model are of high perceptual quality demonstrating the generative power of neural networks trained in a purely discriminative fashion. Within the model, textures are represented by the correlations between feature maps in several layers of the network. We show that across layers the texture representations increasingly capture the statistical properties of natural images while making object information more and more explicit. The model provides a new tool to generate stimuli for neuroscience and might offer insights into the deep representations learned by convolutional neural networks.", "We propose StyleBank, which is composed of multiple convolution filter banks and each filter bank explicitly represents one style, for neural image style transfer. To transfer an image to a specific style, the corresponding filter bank is operated on top of the intermediate feature embedding produced by a single auto-encoder. 
The StyleBank and the auto-encoder are jointly learnt, where the learning is conducted in such a way that the auto-encoder does not encode any style information thanks to the flexibility introduced by the explicit filter bank representation. It also enables us to conduct incremental learning to add a new image style by learning a new filter bank while holding the auto-encoder fixed. The explicit style representation along with the flexible network design enables us to fuse styles at not only the image level, but also the region level. Our method is the first style transfer network that links back to traditional texton mapping methods, and hence provides new understanding on neural style transfer. Our method is easy to train, runs in real-time, and produces results that qualitatively better or at least comparable to existing methods.", "Neural Style Transfer has recently demonstrated very exciting results which catches eyes in both academia and industry. Despite the amazing results, the principle of neural style transfer, especially why the Gram matrices could represent style remains unclear. In this paper, we propose a novel interpretation of neural style transfer by treating it as a domain adaptation problem. Specifically, we theoretically show that matching the Gram matrices of feature maps is equivalent to minimize the Maximum Mean Discrepancy (MMD) with the second order polynomial kernel. Thus, we argue that the essence of neural style transfer is to match the feature distributions between the style images and the generated images. To further support our standpoint, we experiment with several other distribution alignment methods, and achieve appealing results. We believe this novel interpretation connects these two important research fields, and could enlighten future researches.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results." ] }
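Since the Gram matrix of feature maps is the central style statistic in the line of work summarized above, a minimal sketch of its computation may help; the normalisation constant is an illustrative choice, and in practice the feature maps come from a pre-trained CNN layer rather than random arrays.

```python
import numpy as np

def gram_matrix(fmap):
    """Gram matrix of one layer's feature maps. fmap has shape (C, H, W);
    channels are flattened and their inner products taken, giving a C x C
    second-order statistic that discards spatial layout (content) while
    keeping feature co-activations (style)."""
    c, h, w = fmap.shape
    f = fmap.reshape(c, h * w)
    return f @ f.T / (c * h * w)

style_stat = gram_matrix(np.random.rand(64, 32, 32))   # -> (64, 64)
```

Style losses in this family then compare such matrices between the generated image and the style image across several layers.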
1908.05750
2968998393
3D Human Motion Indexing and Retrieval is an interesting problem due to the rise of several data-driven applications aimed at analyzing and/or re-utilizing 3D human skeletal data, such as data-driven animation, analysis of sports bio-mechanics, human surveillance etc. Spatio-temporal articulations of humans, noisy or missing data, different speeds of the same motion etc. make it challenging and several of the existing state of the art methods use hand-crafted features along with optimization based or histogram based comparison in order to perform retrieval. Further, they demonstrate it only for very small datasets and few classes. We make a case for using a learned representation that should recognize the motion as well as enforce a discriminative ranking. To that end, we propose a 3D human motion descriptor learned using a deep network. Our learned embedding is generalizable and applicable to real-world data - addressing the aforementioned challenges and further enables sub-motion searching in its embedding space using another network. Our model exploits the inter-class similarity using trajectory cues, and performs far superior in a self-supervised setting. State of the art results on all these fronts are shown on two large scale 3D human motion datasets - NTU RGB+D and HDM05.
Most approaches to 3D human motion retrieval have focused on developing hand-crafted features to represent skeleton sequences @cite_17 @cite_19 @cite_4 . In this section, we broadly categorize them by how they engineer their descriptors.
{ "cite_N": [ "@cite_19", "@cite_4", "@cite_17" ], "mid": [ "1984776140", "2110819057", "2059021619" ], "abstract": [ "Human motion retrieval plays an important role in many motion data based applications. In the past, many researchers tended to use a single type of visual feature as data representation. Because different visual feature describes different aspects about motion data, and they have dissimilar discriminative power with respect to one particular class of human motion, it led to poor retrieval performance. Thus, it would be beneficial to combine multiple visual features together for motion data representation. In this article, we present an Adaptive Multi-view Feature Selection (AMFS) method for human motion retrieval. Specifically, we first use a local linear regression model to automatically learn multiple view-based Laplacian graphs for preserving the local geometric structure of motion data. Then, these graphs are combined together with a non-negative view-weight vector to exploit the complementary information between different features. Finally, in order to discard the redundant and irrelevant feature components from the original high-dimensional feature representation, we formulate the objective function of AMFS as a general trace ratio optimization problem, and design an effective algorithm to solve the corresponding optimization problem. Extensive experiments on two public human motion database, i.e., HDM05 and MSR Action3D, demonstrate the effectiveness of the proposed AMFS over the state-of-art methods for motion data retrieval. The scalability with large motion dataset, and insensitivity with the algorithm parameters, make our method can be widely used in real-world applications. Display Omitted An Adaptive Multi-view Feature Selection (AMFS) algorithm is proposed to fuse multiple features formotion data retrieval.The local regression model isused to learn a datum-adaptive graph for each feature to preserve local structure information.The selection matrix is learnt from all local graphs by exploiting the complementary information between different features.An efficient iterative optimization approach is designed to solve objective function represented in trace ratio form.", "Human action recognition is an important yet challenging task. Human actions usually involve human-object interactions, highly articulated motions, high intra-class variations, and complicated temporal structures. The recently developed commodity depth sensors open up new possibilities of dealing with this problem by providing 3D depth data of the scene. This information not only facilitates a rather powerful human motion capturing technique, but also makes it possible to efficiently model human-object interactions and intra-class variations. In this paper, we propose to characterize the human actions with a novel actionlet ensemble model, which represents the interaction of a subset of human joints. The proposed model is robust to noise, invariant to translational and temporal misalignment, and capable of characterizing both the human motion and the human-object interactions. We evaluate the proposed approach on three challenging action recognition datasets captured by Kinect devices, a multiview action recognition dataset captured with Kinect device, and a dataset captured by a motion capture system. 
The experimental evaluations show that the proposed approach achieves superior performance to the state-of-the-art algorithms.", "In this paper, we propose a content-based motion retrieval algorithm. In this work, firstly, each motion is represented by a set of sequence frames. Representative frames are first selected from the motions and the corresponding weights are provided. Secondly, the graph model is built with these selected frames. For searching the optimal measurement between query motion and relevant motions in database, an object function is built. The task to find the maximal a posterior (MAP) in the motion level is equivalent to find the minimal objective function value. At last, based on probability calculation, the KM (Kuhn-Munkres) algorithm is used to find the optimal matching between motions. The matching result is used to measure the similarity between two motions. Experimental results and comparison with existing methods show the effectiveness of the proposed algorithm." ] }
1908.05750
2968998393
3D Human Motion Indexing and Retrieval is an interesting problem due to the rise of several data-driven applications aimed at analyzing and/or re-utilizing 3D human skeletal data, such as data-driven animation, analysis of sports bio-mechanics, human surveillance etc. Spatio-temporal articulations of humans, noisy or missing data, different speeds of the same motion etc. make it challenging, and several of the existing state-of-the-art methods use hand-crafted features along with optimization-based or histogram-based comparison in order to perform retrieval. Further, they demonstrate it only for very small datasets and few classes. We make a case for using a learned representation that should recognize the motion as well as enforce a discriminative ranking. To that end, we propose a 3D human motion descriptor learned using a deep network. Our learned embedding is generalizable and applicable to real-world data - addressing the aforementioned challenges and further enabling sub-motion searching in its embedding space using another network. Our model exploits the inter-class similarity using trajectory cues, and performs far better in a self-supervised setting. State-of-the-art results on all these fronts are shown on two large-scale 3D human motion datasets - NTU RGB+D and HDM05.
For the task of retrieval, @cite_2 proposed a simple auto-encoder that captures high-level features. However, their model does not explicitly use a temporal construct for motion data. Learnable representations from 3D motion data have primarily been used for other tasks: @cite_18 and @cite_14 are two among many works that use deep learning models for 3D motion recognition. Similarly, @cite_9 adopts a unidirectional LSTM to encode the skeleton frames within the hidden network states and learns which subsequences of encoded frames belong to the specified action classes.
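As a concrete illustration of the sequence-encoding idea referenced above, the following is a minimal, hypothetical sketch (in PyTorch; not the architecture of any cited paper) of a unidirectional LSTM that maps a skeleton sequence to a fixed-size, unit-norm embedding suitable for nearest-neighbor retrieval. All dimensions (25 joints, hidden and embedding sizes) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SkeletonEncoder(nn.Module):
    """Encode a skeleton sequence (T frames of flattened 3D joints) into one vector."""
    def __init__(self, num_joints=25, hidden_size=128, embed_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_joints * 3,
                            hidden_size=hidden_size,
                            batch_first=True)
        self.proj = nn.Linear(hidden_size, embed_size)

    def forward(self, x):
        # x: (batch, T, num_joints*3); the final hidden state summarizes the motion.
        _, (h_n, _) = self.lstm(x)           # h_n: (1, batch, hidden_size)
        emb = self.proj(h_n[-1])             # (batch, embed_size)
        return nn.functional.normalize(emb, dim=-1)  # unit norm for cosine retrieval

# Usage: embed two motions of different lengths and compare by cosine similarity.
enc = SkeletonEncoder()
a = torch.randn(1, 100, 75)  # 100 frames, 25 joints x 3 coordinates
b = torch.randn(1, 80, 75)
sim = (enc(a) * enc(b)).sum()
print(float(sim))
```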
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_14", "@cite_2" ], "mid": [ "2950007497", "2963369114", "", "2294696353" ], "abstract": [ "Motion capture data digitally represent human movements by sequences of 3D skeleton configurations. Such spatio-temporal data, often recorded in the stream-based nature, need to be efficiently processed to detect high-interest actions, for example, in human-computer interaction to understand hand gestures in real time. Alternatively, automatically annotated parts of a continuous stream can be persistently stored to become searchable, and thus reusable for future retrieval or pattern mining. In this paper, we focus on multi-label detection of user-specified actions in unsegmented sequences as well as continuous streams. In particular, we utilize the current advances in recurrent neural networks and adopt a unidirectional LSTM model to effectively encode the skeleton frames within the hidden network states. The model learns what subsequences of encoded frames belong to the specified action classes within the training phase. The learned representations of classes are then employed within the annotation phase to infer the probability that an incoming skeleton frame belongs to a given action class. The computed probabilities are finally compared against a learned threshold to automatically determine the beginnings and endings of actions. To further enhance the annotation accuracy, we utilize a bidirectional LSTM model to estimate class probabilities by considering not only the past frames but also the future ones. We extensively evaluate both the models on the three use cases of real-time stream annotation, offline annotation of long sequences, and early action detection and prediction. The experiments demonstrate that our models outperform the state of the art in effectiveness and are at least one order of magnitude more efficient, being able to annotate 10 k frames per second.", "Skeleton-based human action recognition has recently drawn increasing attentions with the availability of large-scale skeleton datasets. The most crucial factors for this task lie in two aspects: the intra-frame representation for joint co-occurrences and the inter-frame representation for skeletons' temporal evolutions. In this paper we propose an end-to-end convolutional co-occurrence feature learning framework. The co-occurrence features are learned with a hierarchical methodology, in which different levels of contextual information are aggregated gradually. Firstly point-level information of each joint is encoded independently. Then they are assembled into semantic representation in both spatial and temporal domains. Specifically, we introduce a global spatial aggregation scheme, which is able to learn superior joint co-occurrence features over local aggregation. Besides, raw skeleton coordinates as well as their temporal difference are integrated with a two-stream paradigm. Experiments show that our approach consistently outperforms other state-of-the-arts on action recognition and detection benchmarks like NTU RGB+D, SBU Kinect Interaction and PKU-MMD.", "", "Data-driven motion research requires effective tools to compress, index, retrieve and reconstruct captured motion data. In this paper, we present a novel method to perform these tasks using a deep learning architecture. Our deep autoencoder, a form of artificial neural network, encodes motion segments into \"deep signatures\". This signature is formed by concatenating signatures for functionally different parts of the body. 
The deep signature is a highly condensed representation of a motion segment, requiring only 20 bytes, yet still encoding high level motion features. It can be used to produce a very compact representation of a motion database that can be effectively used for motion indexing and retrieval, with a very small memory footprint. Database searches are reduced to low cost binary comparisons of signatures. Motion reconstruction is achieved by fixing a \"deep signature\" that is missing a section using Gibbs Sampling. We tested both manually and automatically segmented motion databases and our experiments show that extracting the deep signature is fast and scales well with large databases. Given a query motion, similar motion segments can be retrieved at interactive speed with excellent match quality." ] }
1908.05433
2967651842
We study the fair allocation of indivisible goods under the assumption that the goods form an undirected graph and each agent must receive a connected subgraph. Our focus is on well-studied fairness notions including envy-freeness and maximin share fairness. We establish graph-specific maximin share guarantees, which are tight for large classes of graphs in the case of two agents and for paths and stars in the general case. Unlike in previous work, our guarantees are with respect to the complete-graph maximin share, which allows us to compare possible guarantees for different graphs. For instance, we show that for biconnected graphs it is possible to obtain at least @math of the maximin share, while for the remaining graphs the guarantee is at most @math . In addition, we determine the optimal relaxation of envy-freeness that can be obtained with each graph for two agents, and characterize the set of trees and complete bipartite graphs that always admit an allocation satisfying envy-freeness up to one good (EF1) for three agents. Our work demonstrates several applications of graph-theoretical tools and concepts to fair division problems.
Fair allocation of indivisible goods has received considerable attention from the research community, especially in the last few years. We refer to surveys by @cite_7 , @cite_10 , and @cite_14 for an overview of recent developments in the area.
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_7" ], "mid": [ "2944952006", "2002055996", "2504160229" ], "abstract": [ "Fair division, a key concern in the design of many social institutions, has for 70 years been the subject of interdisciplinary research at the interface of mathematics, economics, and game theory. ...", "Efficient algorithms are presented for partitioning a graph into connected components, biconnected components and simple paths. The algorithm for partitioning of a graph into simple paths of iterative and each iteration produces a new path between two vertices already on paths. (The start vertex can be specified dynamically.) If V is the number of vertices and E is the number of edges, each algorithm requires time and space proportional to max ( V, E ) when executed on a random access computer.", "" ] }
1908.05433
2967651842
We study the fair allocation of indivisible goods under the assumption that the goods form an undirected graph and each agent must receive a connected subgraph. Our focus is on well-studied fairness notions including envy-freeness and maximin share fairness. We establish graph-specific maximin share guarantees, which are tight for large classes of graphs in the case of two agents and for paths and stars in the general case. Unlike in previous work, our guarantees are with respect to the complete-graph maximin share, which allows us to compare possible guarantees for different graphs. For instance, we show that for biconnected graphs it is possible to obtain at least @math of the maximin share, while for the remaining graphs the guarantee is at most @math . In addition, we determine the optimal relaxation of envy-freeness that can be obtained with each graph for two agents, and characterize the set of trees and complete bipartite graphs that always admit an allocation satisfying envy-freeness up to one good (EF1) for three agents. Our work demonstrates several applications of graph-theoretical tools and concepts to fair division problems.
The papers most closely related to ours are the two papers that we mentioned, by @cite_12 and @cite_5 . The authors of @cite_12 showed that for any number of agents with additive valuations, there always exists an allocation that gives every agent her maximin share when the graph is a tree, but not necessarily when the graph is a cycle. It is important to note that their maximin share notion corresponds to our G-MMS notion and is defined based on the graph, with only connected allocations with respect to that graph taken into account in an agent's calculation. As one consequence, even though a cycle permits strictly more connected allocations than a path, it offers a weaker guarantee in terms of the G-MMS. Our approach of considering the (complete-graph) MMS allows us to directly compare the guarantees that can be obtained for different graphs.
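To make the notion concrete, a brute-force computation of the (complete-graph) maximin share for a single agent with additive valuations might look like the sketch below; the G-MMS variant would additionally restrict the enumeration to partitions whose bundles are connected in the given graph. This is an illustrative toy, not code from any cited paper.

```python
from itertools import product

def maximin_share(values, n):
    """Complete-graph MMS for one agent: the best worst-bundle value over all
    partitions of the goods into n bundles (no connectivity constraint).
    values: the agent's additive value for each good."""
    m = len(values)
    best = float("-inf")
    # Each assignment maps good i to a bundle label in {0, ..., n-1}.
    for assignment in product(range(n), repeat=m):
        bundles = [0] * n
        for good, bundle in enumerate(assignment):
            bundles[bundle] += values[good]
        best = max(best, min(bundles))
    return best

# Two agents, four goods: the agent can guarantee a worst bundle of value 5,
# e.g., via the partition {4, 1} and {3, 2}.
print(maximin_share([4, 3, 2, 1], n=2))  # -> 5
```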
{ "cite_N": [ "@cite_5", "@cite_12" ], "mid": [ "2889292676", "1577069963" ], "abstract": [ "We study the existence of allocations of indivisible goods that are envy-free up to one good (EF1), under the additional constraint that each bundle needs to be connected in an underlying item graph G. When the items are arranged in a path, we show that EF1 allocations are guaranteed to exist for arbitrary monotonic utility functions over bundles, provided that either there are at most four agents, or there are any number of agents but they all have identical utility functions. Our existence proofs are based on classical arguments from the divisible cake-cutting setting, and involve discrete analogues of cut-and-choose, of Stromquist's moving-knife protocol, and of the Su-Simmons argument based on Sperner's lemma. Sperner's lemma can also be used to show that on a path, an EF2 allocation exists for any number of agents. Except for the results using Sperner's lemma, all of our procedures can be implemented by efficient algorithms. Our positive results for paths imply the existence of connected EF1 or EF2 allocations whenever G is traceable, i.e., contains a Hamiltonian path. For the case of two agents, we completely characterize the class of graphs @math that guarantee the existence of EF1 allocations as the class of graphs whose biconnected components are arranged in a path. This class is strictly larger than the class of traceable graphs; one can be check in linear time whether a graph belongs to this class, and if so return an EF1 allocation.", "The concept of fair division is as old as civil society itself. Aristotle's \"equal treatment of equals\" was the first step toward a formal definition of distributive fairness. The concept of collective welfare, more than two centuries old, is a pillar of modern economic analysis. Reflecting fifty years of research, this book examines the contribution of modern microeconomic thinking to distributive justice. Taking the modern axiomatic approach, it compares normative arguments of distributive justice and their relation to efficiency and collective welfare. The book begins with the epistemological status of the axiomatic approach and the four classic principles of distributive justice: compensation, reward, exogenous rights, and fitness. It then presents the simple ideas of equal gains, equal losses, and proportional gains and losses. The book discusses three cardinal interpretations of collective welfare: Bentham's \"utilitarian\" proposal to maximize the sum of individual utilities, the Nash product, and the egalitarian leximin ordering. It also discusses the two main ordinal definitions of collective welfare: the majority relation and the Borda scoring method. The Shapley value is the single most important contribution of game theory to distributive justice. A formula to divide jointly produced costs or benefits fairly, it is especially useful when the pattern of externalities renders useless the simple ideas of equality and proportionality. The book ends with two versatile methods for dividing commodities efficiently and fairly when only ordinal preferences matter: competitive equilibrium with equal incomes and egalitarian equivalence. The book contains a wealth of empirical examples and exercises." ] }
1908.05433
2967651842
We study the fair allocation of indivisible goods under the assumption that the goods form an undirected graph and each agent must receive a connected subgraph. Our focus is on well-studied fairness notions including envy-freeness and maximin share fairness. We establish graph-specific maximin share guarantees, which are tight for large classes of graphs in the case of two agents and for paths and stars in the general case. Unlike in previous work, our guarantees are with respect to the complete-graph maximin share, which allows us to compare possible guarantees for different graphs. For instance, we show that for biconnected graphs it is possible to obtain at least @math of the maximin share, while for the remaining graphs the guarantee is at most @math . In addition, we determine the optimal relaxation of envy-freeness that can be obtained with each graph for two agents, and characterize the set of trees and complete bipartite graphs that always admit an allocation satisfying envy-freeness up to one good (EF1) for three agents. Our work demonstrates several applications of graph-theoretical tools and concepts to fair division problems.
@cite_5 investigated the same model with respect to relaxations of envy-freeness. As we mentioned, they characterized the set of graphs for which EF1 can be guaranteed in the case of two agents with arbitrary monotonic valuations. Moreover, they showed that an EF1 allocation always exists on a path for @math . Intriguingly, the existence question for @math remains open, although they showed that an EF2 allocation can be guaranteed for any @math .
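For reference, EF1 with additive valuations can be checked directly from its definition: agent i may envy agent j only if the envy persists after removing i's most-valued good from j's bundle. A small illustrative sketch (not from the cited paper):

```python
def is_ef1(bundles, values):
    """Check envy-freeness up to one good (EF1) for additive valuations.
    bundles: list of lists of good indices, one bundle per agent.
    values[i][g]: value agent i assigns to good g."""
    n = len(bundles)
    for i in range(n):
        own = sum(values[i][g] for g in bundles[i])
        for j in range(n):
            if i == j:
                continue
            other = sum(values[i][g] for g in bundles[j])
            # EF1: removing i's most-valued good from j's bundle kills the envy.
            best_removal = max((values[i][g] for g in bundles[j]), default=0)
            if own < other - best_removal:
                return False
    return True

# Agent 0 envies agent 1's bundle (4 < 5), but not after removing one good,
# so the allocation is EF1.
print(is_ef1([[0], [1, 2]], values=[[4, 3, 2], [1, 3, 2]]))  # True
```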
{ "cite_N": [ "@cite_5" ], "mid": [ "2889292676" ], "abstract": [ "We study the existence of allocations of indivisible goods that are envy-free up to one good (EF1), under the additional constraint that each bundle needs to be connected in an underlying item graph G. When the items are arranged in a path, we show that EF1 allocations are guaranteed to exist for arbitrary monotonic utility functions over bundles, provided that either there are at most four agents, or there are any number of agents but they all have identical utility functions. Our existence proofs are based on classical arguments from the divisible cake-cutting setting, and involve discrete analogues of cut-and-choose, of Stromquist's moving-knife protocol, and of the Su-Simmons argument based on Sperner's lemma. Sperner's lemma can also be used to show that on a path, an EF2 allocation exists for any number of agents. Except for the results using Sperner's lemma, all of our procedures can be implemented by efficient algorithms. Our positive results for paths imply the existence of connected EF1 or EF2 allocations whenever G is traceable, i.e., contains a Hamiltonian path. For the case of two agents, we completely characterize the class of graphs @math that guarantee the existence of EF1 allocations as the class of graphs whose biconnected components are arranged in a path. This class is strictly larger than the class of traceable graphs; one can be check in linear time whether a graph belongs to this class, and if so return an EF1 allocation." ] }
1908.05433
2967651842
We study the fair allocation of indivisible goods under the assumption that the goods form an undirected graph and each agent must receive a connected subgraph. Our focus is on well-studied fairness notions including envy-freeness and maximin share fairness. We establish graph-specific maximin share guarantees, which are tight for large classes of graphs in the case of two agents and for paths and stars in the general case. Unlike in previous work, our guarantees are with respect to the complete-graph maximin share, which allows us to compare possible guarantees for different graphs. For instance, we show that for biconnected graphs it is possible to obtain at least @math of the maximin share, while for the remaining graphs the guarantee is at most @math . In addition, we determine the optimal relaxation of envy-freeness that can be obtained with each graph for two agents, and characterize the set of trees and complete bipartite graphs that always admit an allocation satisfying envy-freeness up to one good (EF1) for three agents. Our work demonstrates several applications of graph-theoretical tools and concepts to fair division problems.
Besides @cite_12 and @cite_5 , a number of other authors have recently studied fairness under connectivity constraints. @cite_4 investigated maximin share fairness in the case of cycles, also concentrating on the G-MMS notion, while @cite_9 focused on paths and provided approximations of envy-freeness and proportionality, as well as of another fairness notion called equitability. @cite_20 considered fairness in conjunction with the economic efficiency notion of Pareto optimality. @cite_15 studied the problem of fairly allocating chores, where all items yield disutility to the agents, and gave complexity results on deciding the existence of envy-free, proportional, and equitable allocations for paths and stars.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_5", "@cite_15", "@cite_12", "@cite_20" ], "mid": [ "2803597015", "2917587507", "2889292676", "2949883232", "1577069963", "2900278988" ], "abstract": [ "Abstract The maximin share guarantee is, in the context of allocating indivisible goods to a set of agents, a recent fairness criterion. A solution achieving a constant approximation of this guarantee always exists and can be computed in polynomial time. We extend the problem to the case where the goods collectively received by the agents satisfy a matroidal constraint. Polynomial approximation algorithms for this generalization are provided: a 1 2-approximation for any number of agents, a ( 1 − e ) -approximation for two agents, and a ( 8 9 − e ) -approximation for three agents. Apart from the extension to matroids, the ( 8 9 − e ) -approximation for three agents improves on a ( 7 8 − e ) -approximation by (ICALP 2015). Some special cases are also presented and some extensions of the model are discussed.", "Abstract In this paper, we study the classic problem of fairly allocating indivisible items with the extra feature that the items lie on a line. Our goal is to find a fair allocation that is contiguous, meaning that the bundle of each agent forms a contiguous block on the line. While allocations satisfying the classical fairness notions of proportionality, envy-freeness, and equitability are not guaranteed to exist even without the contiguity requirement, we show the existence of contiguous allocations satisfying approximate versions of these notions that do not degrade as the number of agents or items increases. We also study the efficiency loss of contiguous allocations due to fairness constraints.", "We study the existence of allocations of indivisible goods that are envy-free up to one good (EF1), under the additional constraint that each bundle needs to be connected in an underlying item graph G. When the items are arranged in a path, we show that EF1 allocations are guaranteed to exist for arbitrary monotonic utility functions over bundles, provided that either there are at most four agents, or there are any number of agents but they all have identical utility functions. Our existence proofs are based on classical arguments from the divisible cake-cutting setting, and involve discrete analogues of cut-and-choose, of Stromquist's moving-knife protocol, and of the Su-Simmons argument based on Sperner's lemma. Sperner's lemma can also be used to show that on a path, an EF2 allocation exists for any number of agents. Except for the results using Sperner's lemma, all of our procedures can be implemented by efficient algorithms. Our positive results for paths imply the existence of connected EF1 or EF2 allocations whenever G is traceable, i.e., contains a Hamiltonian path. For the case of two agents, we completely characterize the class of graphs @math that guarantee the existence of EF1 allocations as the class of graphs whose biconnected components are arranged in a path. This class is strictly larger than the class of traceable graphs; one can be check in linear time whether a graph belongs to this class, and if so return an EF1 allocation.", "The paper considers fair allocation of indivisible nondisposable items that generate disutility (chores). We assume that these items are placed in the vertices of a graph and each agent’s share has to form a connected subgraph of this graph. 
Although a similar model has been investigated before for goods, we show that the goods and chores settings are inherently different. In particular, it is impossible to derive the solution of the chores instance from the solution of its naturally associated fair division instance. We consider three common fair division solution concepts, namely proportionality, envy-freeness and equitability, and two individual disutility aggregation functions: additive and maximum based. We show that deciding the existence of a fair allocation is hard even if the underlying graph is a path or a star. We also present some efficiently solvable special cases for these graph topologies.", "The concept of fair division is as old as civil society itself. Aristotle's \"equal treatment of equals\" was the first step toward a formal definition of distributive fairness. The concept of collective welfare, more than two centuries old, is a pillar of modern economic analysis. Reflecting fifty years of research, this book examines the contribution of modern microeconomic thinking to distributive justice. Taking the modern axiomatic approach, it compares normative arguments of distributive justice and their relation to efficiency and collective welfare. The book begins with the epistemological status of the axiomatic approach and the four classic principles of distributive justice: compensation, reward, exogenous rights, and fitness. It then presents the simple ideas of equal gains, equal losses, and proportional gains and losses. The book discusses three cardinal interpretations of collective welfare: Bentham's \"utilitarian\" proposal to maximize the sum of individual utilities, the Nash product, and the egalitarian leximin ordering. It also discusses the two main ordinal definitions of collective welfare: the majority relation and the Borda scoring method. The Shapley value is the single most important contribution of game theory to distributive justice. A formula to divide jointly produced costs or benefits fairly, it is especially useful when the pattern of externalities renders useless the simple ideas of equality and proportionality. The book ends with two versatile methods for dividing commodities efficiently and fairly when only ordinal preferences matter: competitive equilibrium with equal incomes and egalitarian equivalence. The book contains a wealth of empirical examples and exercises.", "We study the problem of allocating indivisible items to agents with additive valuations, under the additional constraint that bundles must be connected in an underlying item graph. Previous work has considered the existence and complexity of fair allocations. We study the problem of finding an allocation that is Pareto-optimal. While it is easy to find an efficient allocation when the underlying graph is a path or a star, the problem is NP-hard for many other graph topologies, even for trees of bounded pathwidth or of maximum degree 3. We show that on a path, there are instances where no Pareto-optimal allocation satisfies envy-freeness up to one good, and that it is NP-hard to decide whether such an allocation exists, even for binary valuations. We also show that, for a path, it is NP-hard to find a Pareto-optimal allocation that satisfies maximin share, but show that a moving-knife algorithm can find such an allocation when agents have binary valuations that have a non-nested interval structure." ] }
1908.05433
2967651842
We study the fair allocation of indivisible goods under the assumption that the goods form an undirected graph and each agent must receive a connected subgraph. Our focus is on well-studied fairness notions including envy-freeness and maximin share fairness. We establish graph-specific maximin share guarantees, which are tight for large classes of graphs in the case of two agents and for paths and stars in the general case. Unlike in previous work, our guarantees are with respect to the complete-graph maximin share, which allows us to compare possible guarantees for different graphs. For instance, we show that for biconnected graphs it is possible to obtain at least @math of the maximin share, while for the remaining graphs the guarantee is at most @math . In addition, we determine the optimal relaxation of envy-freeness that can be obtained with each graph for two agents, and characterize the set of trees and complete bipartite graphs that always admit an allocation satisfying envy-freeness up to one good (EF1) for three agents. Our work demonstrates several applications of graph-theoretical tools and concepts to fair division problems.
A related line of work also combines graphs with resource allocation, but uses graphs to capture the connection between agents instead of goods. In particular, a graph specifies the acquaintance relationship among agents. @cite_1 and @cite_2 defined graph-based versions of envy-freeness and proportionality with divisible resources where agents only evaluate their shares relative to other agents with whom they are acquainted. @cite_21 and @cite_23 studied the graph-based version of envy-freeness with indivisible goods. @cite_13 introduced a number of fairness notions parameterized by the acquaintance graph.
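The graph-based (local) envy-freeness notion described here is straightforward to state in code: each agent compares its bundle only against the bundles of its neighbors in the social graph. A minimal illustrative sketch, assuming additive valuations and an undirected acquaintance graph given as an adjacency map:

```python
def locally_envy_free(bundles, values, neighbors):
    """Graph-based envy-freeness: agent i only compares against its neighbors.
    bundles: list of lists of good indices, one bundle per agent.
    values[i][g]: value agent i assigns to good g.
    neighbors[i]: agents that agent i can observe."""
    for i, nbrs in neighbors.items():
        own = sum(values[i][g] for g in bundles[i])
        for j in nbrs:
            if own < sum(values[i][g] for g in bundles[j]):
                return False
    return True

# Three agents on a path 0-1-2: agent 0 would envy agent 2's bundle,
# but they are not neighbors, so the allocation is locally envy-free.
values = [[1, 1, 5], [1, 2, 1], [1, 1, 5]]
bundles = [[0], [1], [2]]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
print(locally_envy_free(bundles, values, neighbors))  # True
```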
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_23", "@cite_2", "@cite_13" ], "mid": [ "2887778757", "2555059222", "2808072441", "2962830613", "2788938373" ], "abstract": [ "We study the fair division problem consisting in allocating one item per agent so as to avoid (or minimize) envy, in a setting where only agents connected in a given social network may experience envy. In a variant of the problem, agents themselves can be located on the network by the central authority. These problems turn out to be difficult even on very simple graph structures, but we identify several tractable cases. We further provide practical algorithms and experimental insights.", "We study cake cutting on a graph, where agents can only evaluate their shares relative to their neighbors. This is an extension of the classical problem of fair division to incorporate the notion of social comparison from the social sciences. We say an allocation is locally envy-free if no agent envies a neighbor's allocation, and locally proportional if each agent values its own allocation as much as the average value of its neighbors' allocations. We generalize the classical Cut and Choose\" protocol for two agents to this setting, by fully characterizing the set of graphs for which an oblivious single-cutter protocol can give locally envy-free (thus also locally-proportional) allocations. We study the price of envy-freeness , which compares the total value of an optimal allocation with that of an optimal, locally envy-free allocation. Surprisingly, a lower bound of @math on the price of envy-freeness for global allocations also holds for local envy-freeness in any connected graph, so sparse graphs do not provide more flexibility asymptotically with respect to the quality of envy-free allocations.", "Finding an envy-free allocation of indivisible resources to agents is a central task in many multiagent systems. Often, non-trivial envy-free allocations do not exist, and finding them can be a computationally hard task. Classic envy-freeness requires that every agent likes the resources allocated to it at least as much as the resources allocated to any other agent. In many situations this assumption can be relaxed since agents often do not even know each other. We enrich the envy-freeness concept by taking into account (directed) social networks of the agents. Thus, we require that every agent likes its own allocation at least as much as those of all its (out)neighbors. This leads to a \"more local'' concept of envy-freeness. We also consider a strong variant where every agent must like its own allocation more than those of all its (out)neighbors. We analyze the classic and the parameterized complexity of finding allocations that are envy-free with respect to one of the variants of our new concept, and that either are complete, are Pareto-efficient, or optimize the utilitarian social welfare. To this end, we study different restrictions of the agents' preferences and of the social network structure. We identify cases that become easier (from Σ^ _2 @math @math @math @math $-hard) when comparing classic envy-freeness with our graph-based envy-freeness. 
Furthermore, we spot cases where graph envy-freeness is easier to decide than strong graph envy-freeness, and vice versa.", "", "In the context of fair allocation of indivisible items, fairness concepts often compare the satisfaction of an agent to the satisfaction she would have from items that are not allocated to her: in particular, envy-freeness requires that no agent prefers the share of someone else to her own share. We argue that these notions could also be defined relative to the knowledge that an agent has on how the items that she does not receive are distributed among other agents. We define a family of epistemic notions of envy-freeness, parameterized by a social graph, where an agent observes the share of her neighbours but not of her non-neighbours. We also define an intermediate notion between envy-freeness and proportionality, also parameterized by a social graph. These weaker notions of envy-freeness are useful when seeking a fair allocation, since envy-freeness is often too strong. We position these notions with respect to known ones, thus revealing new rich hierarchies of fairness concepts. Finally, we present a very general framework that covers all the existing and many new fairness concepts." ] }
1908.05552
2968730564
Musculoskeletal robots that are based on pneumatic actuation have a variety of properties, such as compliance and back-drivability, that render them particularly appealing for human-robot collaboration. However, programming interactive and responsive behaviors for such systems is extremely challenging due to the nonlinearity and uncertainty inherent to their control. In this paper, we propose an approach for learning Bayesian Interaction Primitives for musculoskeletal robots given a limited set of example demonstrations. We show that this approach is capable of real-time state estimation and response generation for interaction with a robot for which no analytical model exists. Human-robot interaction experiments on a 'handshake' task show that the approach generalizes to new positions, interaction partners, and movement velocities.
Robots with pneumatic artificial muscles (PAMs) and compliant limbs have been shown to be desirable for human-robot interaction scenarios @cite_22 @cite_13 . When configured in an anthropomorphic musculoskeletal structure, such robots provide an intriguing platform for human-robot interaction (HRI) @cite_4 due to their potential to generate human-like motions while offering a degree of safety as a result of their compliance when confronted with an external force -- such as contact with a human. Recent work @cite_16 has shown the value of utilizing McKibben actuators in the design of these robots, due to their inherent compliance and inexpensive material cost. However, while analytical kinematics models are in theory possible @cite_20 @cite_3 , they are not always practical due to the effects of friction and the deterioration of mechanical elements, which are difficult to account for (although some gains have been made in this area @cite_15 ). Consequently, this work proposes using a method based on learning from demonstration @cite_7 @cite_21 , which is a well-established methodology for teaching robots complex motor skills based on observed data.
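As a rough illustration of the learning-from-demonstration machinery such approaches build on (and not the authors' Bayesian Interaction Primitives implementation), one common primitive representation fits basis-function weights to each time-normalized demonstration and summarizes the demonstrations by a Gaussian over those weights. A minimal sketch, with all parameter values chosen arbitrarily:

```python
import numpy as np

def rbf_features(phase, n_basis=10, width=0.02):
    """Normalized radial basis features over a phase variable in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    feats = np.exp(-(phase[:, None] - centers[None, :]) ** 2 / (2 * width))
    return feats / feats.sum(axis=1, keepdims=True)

def fit_primitive(demos, n_basis=10):
    """Fit per-demonstration basis weights by least squares, then summarize the
    demonstrations with a mean and covariance over the weights (the 'primitive')."""
    W = []
    for traj in demos:                         # traj: (T,) trajectory of one joint
        phase = np.linspace(0, 1, len(traj))   # time-normalized phase
        Phi = rbf_features(phase, n_basis)     # (T, n_basis)
        w, *_ = np.linalg.lstsq(Phi, traj, rcond=None)
        W.append(w)
    W = np.array(W)
    return W.mean(axis=0), np.cov(W, rowvar=False)

# Toy demonstrations: noisy sine-like reaching motions of varying duration.
rng = np.random.default_rng(0)
demos = [np.sin(np.linspace(0, np.pi, T)) + 0.01 * rng.standard_normal(T)
         for T in (80, 100, 120)]
mean_w, cov_w = fit_primitive(demos)
print(mean_w.shape, cov_w.shape)  # (10,) (10, 10)
```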
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_7", "@cite_21", "@cite_3", "@cite_15", "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "", "2145098376", "1684361744", "", "", "2127963617", "2869878434", "", "2131912792" ], "abstract": [ "", "The application of a robot to rehabilitation has become a matter of great concern. This paper deals with functional recovery therapy, one important aspect of physical rehabilitation. Single-joint therapy machines have already been achieved. However, for more efficient therapy, multjoint robots are necessary to achieve more realistic motion patterns. This kind of robot must have a high level of safety for humans. A pneumatic actuator may be available for such a robot, because of the compliance of compressed air. A pneumatic rubber artificial muscle manipulator has been applied to construct a therapy robot with two degrees of freedom (DOF). Also, an impedance control strategy is employed to realize various motion modes for the physical therapy modes. Further, for efficient rehabilitation, it is desirable to comprehend the physical condition of the patient. Thus, the mechanical impedance of the human arm is used as an objective evaluation of recovery, and an estimation method is proposed. Experiments show the suitability of the proposed rehabilitation robot system.", "Also referred to as learning by imitation, tutelage, or apprenticeship learning, Programming by Demonstration (PbD) develops methods by which new skills can be transmitted to a robot. This book examines methods by which robots learn new skills through human guidance. Taking a practical perspective, it covers a broad range of applications, including service robots. The text addresses the challenges involved in investigating methods by which PbD is used to provide robots with a generic and adaptive model of control. Drawing on findings from robot control, human-robot interaction, applied machine learning, artificial intelligence, and developmental and cognitive psychology, the book contains a large set of didactic and illustrative examples. Practical and comprehensive machine learning source codes are available on the books companion website: http: www.programming-by-demonstration.org", "", "", "Pneumatic muscles are interesting in their use as actuators in robotics, since they have a high power weight ratio, a high-tension force and a long durability. This paper presents a two-axis planar articulated robot, which is driven by four pneumatic muscles. Every actuator is supplied by one electronic servo valve in 3 3-way function. Part of this work is the derivation of the model description, which describes a high nonlinear dynamic behavior of the robot. Main focus is the physical model for the pneumatic muscle and a detailed model description for the servo valves. The aim is to control the tool center point (TCP) of the manipulator, which bases here on a fast subsidiary torque regulator of the drive system compensating the nonlinear effects. As the robot represents a MIMO system, a second control objective is defined, which corresponds here to the average pressure of each muscle-pair. An optimisation-strategy is presented to meet the maximum stiffness of the controlled drive system. As the torque controller assures a fast linear input output behavior, a standardized controller is implemented which bases here on the Computed Torque Method to track the TCP. 
Measurement results show the efficiency of the presented cascaded control concept.", "This paper describes the construction and design decisions of an anthropomorphic musculoskeletal robot arm actuated by pneumatic artificial muscles. This robot was designed to allow human-inspired compliant movements without the need to replicate the human body-structure in detail. This resulted in a mechanically simple design while preserving the motoric characteristics of a human. Besides the constructional details of the robot we will present two experiments to show the robot's abilities regarding its dexterity and compliance.", "", "This paper reports mechanical testing and modeling results for the McKibben artificial muscle pneumatic actuator. This device, first developed in the 1950s, contains an expanding tube surrounded by braided cords. The authors report static and dynamic length-tension testing results and derive a linearized model of these properties for three different models. The results are briefly compared with human muscle properties to evaluate the suitability of McKibben actuators for human muscle emulation in biologically based robot arms." ] }
1908.05441
2968447920
Prior work has demonstrated that question classification (QC), recognizing the problem domain of a question, can help answer it more accurately. However, developing strong QC algorithms has been hindered by the limited size and complexity of annotated data available. To address this, we present the largest challenge dataset for QC, containing 7,787 science exam questions paired with detailed classification labels from a fine-grained hierarchical taxonomy of 406 problem domains. We then show that a BERT-based model trained on this dataset achieves a large (+0.12 MAP) gain compared with previous methods, while also achieving state-of-the-art performance on benchmark open-domain and biomedical QC datasets. Finally, we show that using this model's predictions of question topic significantly improves the accuracy of a question answering system by +1.7 P@1, with substantial future gains possible as QC performance improves.
Question classification typically makes use of a combination of syntactic, semantic, surface, and embedding methods. Syntactic patterns @cite_25 @cite_37 @cite_28 @cite_7 and syntactic dependencies @cite_2 have been shown to improve performance, while syntactically or semantically important words are often expanded using WordNet hypernyms or Unified Medical Language System categories (for the medical domain) to help mitigate sparsity @cite_10 @cite_34 @cite_19 . Keyword identification helps identify specific terms useful for classification @cite_39 @cite_2 @cite_18 . Similarly, named entity recognizers @cite_5 @cite_32 or lists of semantically related words @cite_5 @cite_19 can also be used to establish broad topics or entity categories and mitigate sparsity, as can word embeddings @cite_17 @cite_35 . Here, we empirically demonstrate that many of these existing methods do not transfer to the science domain.
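Many of the surface-feature baselines referenced above reduce to an n-gram representation fed to a linear classifier. The following is a minimal, hypothetical sketch using scikit-learn; the questions and label names are invented placeholders, not drawn from the dataset described in this paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny placeholder training set; real QC datasets label thousands of questions.
questions = [
    "What gas do plants absorb during photosynthesis?",
    "Which force pulls objects toward Earth?",
    "What is the boiling point of water?",
    "Which organ pumps blood through the body?",
]
labels = ["LIFE_PROCESSES", "FORCES", "MATTER", "HUMAN_BODY"]

# Word unigrams + bigrams approximate the 'surface feature' baselines in prior work.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(questions, labels)
print(clf.predict(["What liquid freezes at zero degrees Celsius?"]))
```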
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_18", "@cite_7", "@cite_28", "@cite_32", "@cite_39", "@cite_19", "@cite_2", "@cite_5", "@cite_34", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "2912090347", "", "2741412196", "2320227961", "2157193293", "2250249423", "2062935223", "2911966030", "", "", "", "2019307700", "2149048832", "1832693441" ], "abstract": [ "Sentence classification, which is the foundation of the subsequent text-based processing, plays an important role in the intelligent question answering (IQA). Convolutional neural networks (CNN) as a kind of common architecture of deep learning, has been widely used to the sentence classification and achieved excellent performance in open field. However, the class imbalance problems and fuzzy sentence feature problem are common in IQA. With the aim to get better performance in IQA, this paper proposes a simple and effective method by increasing generalization and the diversity of sentence features based on simple CNN. In proposed method, the professional entities could be replaced by placeholders to improve the performance of generalization. And CNN reads sentence vectors from both forward and reverse directions to increase the diversity of sentence features. The testing results show that our methods can achieve better performance than many other complex CNN models. In addition, we apply our method in practice of IQA, and the results show the method is effective.", "", "", "Question classification is very important for question answering. This paper present our research work on question classification through machine learning approach. In order to train th e learning model, we designed a rich set of features that are predictive of question categories. An important component of question answering systems is question classification. The task of question classification is to predict the entity type of the answer of a natural language question. Question classification is typically done using machine learning techniques. Different lexical, syntactical and semantic features can be extracted from a question. In this work we combined lexical, syntactic and semantic f eatures which improve the accuracy of classification. Furthermore, we adopted three different classifiers: Nearest Neighbors (NN), Naive Bayes (NB), and Support Vector Machines (SVM) using two kinds of features: bag -of-words and bag-of n grams. Furthermore, we discovered that when we take SVM classifier and combine the semantic, syntactic, lexical feature we found that it will improve the accuracy of classification. We tested our proposed approaches on the well-known UIUC dataset and succeeded to achieve anew record on the accuracy of classification on this dataset.", "Objective: Many studies have been completed on question classification in the open domain, however only limited work focuses on the medical domain. As well, to the best of our knowledge, most of these medical question classifications were designed for literature based question and answering systems. This paper focuses on a new direction, which is to design a novel question processing and classification model for answering clinical questions applied to electronic patient notes. Methods: There are four main steps in the work. Firstly, a relatively large set of clinical questions was collected from staff in an Intensive Care Unit. Then, a clinical question taxonomy was designed for question and answering purposes. Subsequently an annotation guideline was created and used to annotate the question set. 
Finally, a multilayer classification model was built to classify the clinical questions. Results: Through the initial classification experiments, we realized that the general features cannot contribute to high performance of a minimum classifier (a small data set with multiple classes). Thus, an automatic knowledge discovery and knowledge reuse process was designed to boost the performance by extracting and expanding the specific features of the questions. In the evaluation, the results show that around 90% accuracy can be achieved in the answerable subclass classification and generic question templates classification. On the other hand, the machine learning method does not perform well at identifying the category of unanswerable questions, due to the asymmetric distribution. Conclusions: In this paper, a comprehensive study on clinical questions has been completed. A major outcome of this work is the multilayer classification model. It serves as a major component of a patient-records-based clinical question answering system as our studies continue. As well, the question collections can be reused by the research community to improve the efficiency of their own question answering systems.", "We propose a robust answer reranking model for non-factoid questions that integrates lexical semantics with discourse information, driven by two representations of discourse: a shallow representation centered around discourse markers, and a deep one based on Rhetorical Structure Theory. We evaluate the proposed model on two corpora from different genres and domains: one from Yahoo! Answers and one from the biology domain, and two types of non-factoid questions: manner and reason. We experimentally demonstrate that the discourse structure of nonfactoid answers provides information that is complementary to lexical semantic similarity between question and answer, improving performance by up to 24% (relative) over a state-of-the-art model that exploits lexical semantic similarity alone. We further demonstrate excellent domain transfer of discourse information, suggesting these discourse features have general utility to non-factoid question answering.", "Objective: Both healthcare professionals and healthcare consumers have information needs that can be met through the use of computers, specifically via medical question answering systems. However, the information needs of both groups are different in terms of literacy levels and technical expertise, and an effective question answering system must be able to account for these differences if it is to formulate the most relevant responses for users from each group. In this paper, we propose that a first step toward answering the queries of different users is automatically classifying questions according to whether they were asked by healthcare professionals or consumers. Design: We obtained two sets of consumer questions (10,000 questions in total) from Yahoo! Answers. The professional questions consist of two question collections: 4654 point-of-care questions (denoted as PointCare) obtained from interviews of a group of family doctors following patient visits and 5378 questions from physician practices through professional online services (denoted as OnlinePractice). With more than 20,000 questions combined, we developed supervised machine-learning models for automatic classification between consumer questions and professional questions. 
To evaluate the robustness of our models, we tested the model that was trained on the Consumer-PointCare dataset on the Consumer-OnlinePractice dataset. We evaluated both linguistic features and statistical features and examined how the characteristics in two different types of professional questions (PointCare vs. OnlinePractice) may affect the classification performance. We explored information gain for feature reduction and the back-off linguistic category features. Results: The 10-fold cross-validation results showed the best F1-measure of 0.936 and 0.946 on Consumer-PointCare and Consumer-OnlinePractice respectively, and the best F1-measure of 0.891 when testing the Consumer-PointCare model on the Consumer-OnlinePractice dataset. Conclusion: Healthcare consumer questions posted at Yahoo online communities can be reliably classified from professional questions posted by point-of-care clinicians and online physicians. The supervised machine-learning models are robust for this task. Our study will significantly benefit further development in automated consumer question answering.", "Prior background knowledge is essential for human reading and understanding. In this work, we investigate how to leverage external knowledge to improve question answering. We primarily focus on multiple-choice question answering tasks that require external knowledge to answer questions. We investigate the effects of utilizing external in-domain multiple-choice question answering datasets and enriching the reference corpus by external out-domain corpora (i.e., Wikipedia articles). Experimental results demonstrate the effectiveness of external knowledge on two challenging multiple-choice question answering tasks: ARC and OpenBookQA.", "", "", "", "Question classification plays an important role in question answering. Features are the key to obtain an accurate question classifier. In contrast to Li and Roth (2002)'s approach which makes use of very rich feature space, we propose a compact yet effective feature set. In particular, we propose head word feature and present two approaches to augment semantic features of such head words using WordNet. In addition, Lesk's word sense disambiguation (WSD) algorithm is adapted and the depth of hypernym feature is optimized. With further augment of other standard features such as unigrams, our linear SVM and Maximum Entropy (ME) models reach the accuracy of 89.2 and 89.0 respectively over a standard benchmark dataset, which outperform the best previously reported accuracy of 86.2 .", "To respond correctly to a free form factual question given a large collection of text data, one needs to understand the question to a level that allows determining some of the constraints the question imposes on a possible answer. These constraints may include a semantic classification of the sought after answer and may even suggest using different strategies when looking for and verifying a candidate answer. This work presents a machine learning approach to question classification. Guided by a layered semantic hierarchy of answer types, we develop a hierarchical classifier that classifies questions into fine-grained classes. This work also performs a systematic study of the use of semantic information sources in natural language classification tasks. It is shown that, in the context of question classification, augmenting the input of the classifier with appropriate semantic category information results in significant improvements to classification accuracy. 
We show accurate results on a large collection of free-form questions used in TREC 10 and 11.", "We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification." ] }
1908.05146
2968782052
Real-time 3D reconstruction from RGB-D sensor data plays an important role in many robotic applications, such as object modeling and mapping. The popular method of fusing depth information into a truncated signed distance function (TSDF) and applying the marching cubes algorithm for mesh extraction has severe issues with thin structures: not only does it lead to loss of accuracy, but it can generate completely wrong surfaces. To address this, we propose the directional TSDF - a novel representation that stores opposite surfaces separate from each other. The marching cubes algorithm is modified accordingly to retrieve a coherent mesh representation. We further increase the accuracy by using surface gradient-based ray casting for fusing new measurements. We show that our method outperforms state-of-the-art TSDF reconstruction algorithms in mesh accuracy.
Surface reconstruction from range data has been an active research topic for a long time. It gained in popularity through the availability of affordable depth cameras and parallel computing hardware. Zollhöfer et al. @cite_10 give a comprehensive overview of modern 3D reconstruction from RGB-D data. The two main streams of research are TSDF fusion @cite_25 @cite_1 and surfel extraction @cite_4 @cite_6 . TSDF-based methods make up the majority, due to their simplicity and mesh output. Surfels, however, maintain the surface and observation direction in the form of a normal per surfel. Thus they can distinguish observations from different sides. Another interesting approach is presented by Schöps et al. @cite_17 , who triangulate surfels to create a mesh representation.
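For context, the TSDF fusion referenced here typically updates each observed voxel with a truncated, weighted running average of signed distances. A minimal 1-D illustrative sketch (the truncation distance and weights are arbitrary assumptions, not values from the cited systems):

```python
import numpy as np

def integrate(tsdf, weight, sdf_obs, w_obs=1.0, trunc=0.05):
    """Standard weighted running-average TSDF update for one observation.
    tsdf, weight: current voxel grids; sdf_obs: signed distances from the new
    depth frame on the same grid, NaN where a voxel was not observed."""
    valid = ~np.isnan(sdf_obs)
    d = np.clip(sdf_obs[valid], -trunc, trunc)          # truncate the SDF
    w_old = weight[valid]
    tsdf[valid] = (w_old * tsdf[valid] + w_obs * d) / (w_old + w_obs)
    weight[valid] = w_old + w_obs
    return tsdf, weight

# Fuse two noisy observations of a surface at x = 0.5 on a 1-D toy grid.
xs = np.linspace(0, 1, 11)
tsdf, weight = np.zeros_like(xs), np.zeros_like(xs)
for noise in (0.01, -0.01):
    integrate(tsdf, weight, (xs - 0.5) + noise)
print(xs[np.argmin(np.abs(tsdf))])  # zero crossing recovered near 0.5
```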
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_6", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "1965851203", "2963020433", "", "2805202215", "1987648924", "2892904655" ], "abstract": [ "Highlights? Multi-resolution surfel maps as compact RGB-D image representation. ? Maps support rapid extraction from images and fast registration on CPU. ? Object and scene reconstruction through on-line graph optimization of key view poses. ? Real-time object tracking from a wide range of view angles and distances. ? State-of-the-art results of image registration, SLAM, and tracking on benchmark datasets. Building consistent models of objects and scenes from moving sensors is an important prerequisite for many recognition, manipulation, and navigation tasks. Our approach integrates color and depth measurements seamlessly in a multi-resolution map representation. We process image sequences from RGB-D cameras and consider their typical noise properties. In order to align the images, we register view-based maps efficiently on a CPU using multi-resolution strategies. For simultaneous localization and mapping (SLAM), we determine the motion of the camera by registering maps of key views and optimize the trajectory in a probabilistic framework. We create object models and map indoor scenes using our SLAM approach which includes randomized loop closing to avoid drift. Camera motion relative to the acquired models is then tracked in real-time based on our registration method. We benchmark our method on publicly available RGB-D datasets, demonstrate accuracy, efficiency, and robustness of our method, and compare it with state-of-the-art approaches. We also report on several successful public demonstrations where it was used in mobile manipulation tasks.", "Mesh plays an indispensable role in dense realtime reconstruction essential in robotics. Efforts have been made to maintain flexible data structures for 3D data fusion, yet an efficient incremental framework specifically designed for online mesh storage and manipulation is missing. We propose a novel framework to compactly generate, update, and refine mesh for scene reconstruction upon a volumetric representation. Maintaining a spatial-hashed field of cubes, we distribute vertices with continuous value on discrete edges that support O(1) vertex accessing and forbid memory redundancy. By introducing Hamming distance in mesh refinement, we further improve the mesh quality regarding the triangle type consistency with a low cost. Lock-based and lock-free operations were applied to avoid thread conflicts in GPU parallel computation. Experiments demonstrate that the mesh memory consumption is significantly reduced while the running speed is kept in the online reconstruction process.", "", "", "We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. 
We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR), in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.", "We address the problem of mesh reconstruction from live RGB-D video, assuming a calibrated camera and poses provided externally (e.g., by a SLAM system). In contrast to most existing approaches, we do not fuse depth measurements in a volume but in a dense surfel cloud. We asynchronously (re)triangulate the smoothed surfels to reconstruct a surface mesh. This novel approach enables to maintain a dense surface representation of the scene during SLAM which can quickly adapt to loop closures. This is possible by deforming the surfel cloud and asynchronously remeshing the surface where necessary. The surfel-based representation also naturally supports strongly varying scan resolution. In particular, it reconstructs colors at the input camera's resolution. Moreover, in contrast to many volumetric approaches, ours can reconstruct thin objects since objects do not need to enclose a volume. We demonstrate our approach in a number of experiments, showing that it produces reconstructions that are competitive with the state-of-the-art, and we discuss its advantages and limitations. The algorithm (excluding loop closure functionality) is available as open source at this https URL." ] }
1908.05146
2968782052
Real-time 3D reconstruction from RGB-D sensor data plays an important role in many robotic applications, such as object modeling and mapping. The popular method of fusing depth information into a truncated signed distance function (TSDF) and applying the marching cubes algorithm for mesh extraction has severe issues with thin structures: not only does it lead to loss of accuracy, but it can generate completely wrong surfaces. To address this, we propose the directional TSDF - a novel representation that stores opposite surfaces separate from each other. The marching cubes algorithm is modified accordingly to retrieve a coherent mesh representation. We further increase the accuracy by using surface gradient-based ray casting for fusing new measurements. We show that our method outperforms state-of-the-art TSDF reconstruction algorithms in mesh accuracy.
In contrast to the related works, we propose an improved TSDF-based representation that builds on the idea from Henry et al. @cite_12 of representing surfaces with different orientations separately from each other. The implementation is based on the work of Dong et al. @cite_1 , which also serves as a baseline for state-of-the-art methods using voxel projection and the marching cubes algorithm. We also apply the gradient-based ray casting concept from @cite_5 . The key features of our method are: the directional TSDF representation, which divides the modeled volume into six directions and thereby represents surfaces with different orientations separately; gradient-based ray casting fusion for improved results; a thread-safe parallelization of ray casting fusion; and a modified marching cubes algorithm for mesh extraction from this representation.
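To make the directional representation concrete, here is a minimal Python sketch of the routing-and-fusion step: a depth sample is assigned to one of six axis-aligned per-direction TSDF grids according to its surface normal, and fused there with the standard weighted running average. All names, the truncation distance, and the dict-based voxel storage are illustrative assumptions for exposition; the actual implementation runs on the GPU on top of a spatially hashed voxel structure.

```python
import numpy as np

# Six axis-aligned directions: +X, -X, +Y, -Y, +Z, -Z.
DIRECTIONS = np.array([
    [1, 0, 0], [-1, 0, 0],
    [0, 1, 0], [0, -1, 0],
    [0, 0, 1], [0, 0, -1],
], dtype=float)

def direction_index(normal):
    """Pick the direction whose axis best matches the surface normal."""
    return int(np.argmax(DIRECTIONS @ normal))

def fuse_sample(voxel, sdf, weight, trunc=0.05):
    """Standard weighted TSDF update for a single voxel.

    voxel holds running 'tsdf' and 'weight' fields; sdf is the new
    signed-distance observation, clamped to the truncation band.
    """
    d = max(-trunc, min(trunc, sdf))
    w_old = voxel["weight"]
    voxel["tsdf"] = (voxel["tsdf"] * w_old + d * weight) / (w_old + weight)
    voxel["weight"] = w_old + weight

# One voxel grid per direction, so surfaces with opposite orientations
# never overwrite each other (illustrative dict storage, not a GPU hash).
grids = [dict() for _ in range(6)]

def integrate(voxel_key, surface_normal, sdf, weight=1.0):
    grid = grids[direction_index(surface_normal)]
    voxel = grid.setdefault(voxel_key, {"tsdf": 0.0, "weight": 0.0})
    fuse_sample(voxel, sdf, weight)

# Example: two opposite observations of a thin wall land in different grids.
integrate((10, 4, 7), np.array([0.0, 0.0, 1.0]), sdf=0.01)
integrate((10, 4, 7), np.array([0.0, 0.0, -1.0]), sdf=0.01)
```

Keeping opposite directions in separate grids is what prevents the front and back surfaces of a thin structure from cancelling each other during fusion, which is exactly the failure mode of the classical single-grid TSDF described in the abstract.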
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_12" ], "mid": [ "2198680548", "2963020433", "" ], "abstract": [ "We introduce a novel approach to simultaneous localization and mapping for robots equipped with a 2D laser scanner. In particular, we propose a fast scan registration algorithm that operates on 2D maps represented as a signed distance function (SDF). Using SDFs as a map representation has several advantages over existing approaches: while classical 2D scan matchers employ brute-force matching to track the position of the robot, signed distance functions are differentiable on large parts of the map. Consequently, efficient minimization techniques such as Gauss-Newton can be applied to find the minimum. In contrast to occupancy grid maps, the environment can be captured with sub-grid cell size precision, which leads to a higher localization accuracy. Furthermore, SDF maps can be triangulated to polygon maps for efficient storage and transfer. In a series of experiments, conducted both in simulation and on a real physical platform, we demonstrate that SDF tracking is more accurate and efficient than previous approaches. We outperform scan matching on occupancy maps in simulation by ∼270 in terms of root mean squared deviation (RMSD) with a ∼63 lower standard deviation. In the real robot experiments, we obtain a performance advantage of ∼14 RMSD with a ∼25 lower standard deviation.", "Mesh plays an indispensable role in dense realtime reconstruction essential in robotics. Efforts have been made to maintain flexible data structures for 3D data fusion, yet an efficient incremental framework specifically designed for online mesh storage and manipulation is missing. We propose a novel framework to compactly generate, update, and refine mesh for scene reconstruction upon a volumetric representation. Maintaining a spatial-hashed field of cubes, we distribute vertices with continuous value on discrete edges that support O(1) vertex accessing and forbid memory redundancy. By introducing Hamming distance in mesh refinement, we further improve the mesh quality regarding the triangle type consistency with a low cost. Lock-based and lock-free operations were applied to avoid thread conflicts in GPU parallel computation. Experiments demonstrate that the mesh memory consumption is significantly reduced while the running speed is kept in the online reconstruction process.", "" ] }
1908.05318
2968312879
We investigate the performance of a jet identification algorithm based on interaction networks (JEDI-net) to identify all-hadronic decays of high-momentum heavy particles produced at the LHC and distinguish them from ordinary jets originating from the hadronization of quarks and gluons. The jet dynamics is described as a set of one-to-one interactions between the jet constituents. Based on a representation learned from these interactions, the jet is associated with one of the considered categories. Unlike other architectures, the JEDI-net models achieve their performance without special handling of the sparse input jet representation, extensive pre-processing, particle ordering, or specific assumptions regarding the underlying detector geometry. The presented models give better results with fewer model parameters, offering interesting prospects for LHC applications.
Jet tagging is one of the most popular LHC-related tasks to which DL solutions have been applied. Several classification algorithms have been studied in the context of jet tagging at the LHC @cite_47 @cite_1 @cite_5 @cite_10 @cite_2 @cite_12 @cite_43 @cite_13 using DNNs, CNNs, or physics-inspired architectures. Recurrent and recursive layers have been used to construct jet classifiers starting from a list of reconstructed particle momenta @cite_3 @cite_23 @cite_46 . Recently, these different approaches, applied to the specific case of top quark jet identification, have been compared in Ref. @cite_19 . While many of these studies focus on data analysis, work is underway to apply these algorithms in the early stages of LHC real-time event processing, i.e. the trigger system. For example, Ref. @cite_11 focuses on converting these models into firmware for field programmable gate arrays (FPGAs) optimized for low latency (less than 1 @math s). If successful, such a program could allow for a more resource-efficient and effective event selection for future LHC runs.
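As a rough illustration of the interaction-network idea behind JEDI-net, the NumPy sketch below forms all ordered pairs of jet constituents, applies a relational MLP to each pair, sums the resulting effects per receiving particle, applies an object MLP, and classifies the pooled representation. The layer sizes, the random initialization, and the choice of sum-pooling are illustrative assumptions and do not reproduce the models evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    """Tiny two-layer MLP with ReLU; params are (W1, b1, W2, b2)."""
    W1, b1, W2, b2 = params
    h = np.maximum(0.0, x @ W1 + b1)
    return h @ W2 + b2

def init_mlp(d_in, d_hidden, d_out):
    return (rng.normal(0, 0.1, (d_in, d_hidden)), np.zeros(d_hidden),
            rng.normal(0, 0.1, (d_hidden, d_out)), np.zeros(d_out))

def interaction_network(particles, f_R, f_O, f_C, d_effect):
    """particles: (N, F) array of per-constituent features."""
    N, F = particles.shape
    # All ordered (sender, receiver) pairs, excluding self-interactions.
    idx_s, idx_r = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    mask = idx_s != idx_r
    pairs = np.concatenate(
        [particles[idx_s[mask]], particles[idx_r[mask]]], axis=1)
    effects = mlp(f_R, pairs)                     # (N*(N-1), d_effect)
    # Sum the incoming effects for each receiving particle.
    agg = np.zeros((N, d_effect))
    np.add.at(agg, idx_r[mask], effects)
    post = mlp(f_O, np.concatenate([particles, agg], axis=1))
    return mlp(f_C, post.sum(axis=0))             # pooled -> class logits

# Toy jet: 16 constituents with 4 features each (e.g. pT, eta, phi, E).
jet = rng.normal(size=(16, 4))
f_R = init_mlp(8, 32, 8)    # relation network on particle pairs
f_O = init_mlp(12, 32, 16)  # object network on particle + summed effects
f_C = init_mlp(16, 32, 5)   # classifier over 5 jet categories
logits = interaction_network(jet, f_R, f_O, f_C, d_effect=8)
```

Because the pair construction and the sum over received effects are permutation-invariant, no particle ordering or sparse-image handling is needed, which matches the property highlighted in the abstract.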
{ "cite_N": [ "@cite_13", "@cite_46", "@cite_1", "@cite_3", "@cite_43", "@cite_19", "@cite_23", "@cite_2", "@cite_5", "@cite_47", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "", "2952645975", "2491766731", "2951016929", "2563484019", "2916245928", "2769967085", "2787586057", "2792435888", "2257617748", "2760946714", "2586557507", "2798084934" ], "abstract": [ "", "Since the machine learning techniques are improving rapidly, it has been shown that the image recognition techniques in deep neural networks can be used to detect jet substructure. And it turns out that deep neural networks can match or outperform traditional approach of expert features. However, there are disadvantages such as sparseness of jet images. Based on the natural tree-like structure of jet sequential clustering, the recursive neural networks (RecNNs), which embed jet clustering history recursively as in natural language processing, have a better behavior when confronted with these problems. We thus try to explore the performance of RecNN in quark gluon discrimination. In order to indicate the realistic potential at the LHC, We include the detector simulation in our data preparation. We attempt to implement particle flow identification in one-hot vectors or using instead a recursively defined pt-weighted charge. The results show that RecNNs work better than the baseline BDT by a few percent in gluon rejection at the working point of 50 quark acceptance. However, extra implementation of particle flow identification only increases the performance slightly. We also experimented on some relevant aspects which might influence the performance of networks. It shows even only particle flow identification as input feature without any extra information on momentum or angular position is already giving a fairly good result, which indicates that most of the information for q g discrimination is already included in the tree-structure itself. As a bonus, a rough u d discrimination is also explored.", "Classification of jets as originating from light-flavor or heavy-flavor quarks is an important task for inferring the nature of particles produced in high-energy collisions. The large and variable dimensionality of the data provided by the tracking detectors makes this task difficult. The current state-of-the-art tools require expert data-reduction to convert the data into a fixed low-dimensional form that can be effectively managed by shallow classifiers. We study the application of deep networks to this task, attempting classification at several levels of data, starting from a raw list of tracks. We find that the highest-level lowest-dimensionality expert information sacrifices information needed for classification, that the performance of current state-of-the-art taggers can be matched or slightly exceeded by deep-network-based taggers using only track and vertex information, that classification using only lowest-level highest-dimensionality tracking information remains a difficult task for deep networks, and that adding lower-level track and vertex information to the classifiers provides a significant boost in performance compared to the state-of-the-art.", "Recent progress in applying machine learning for jet physics has been built upon an analogy between calorimeters and images. In this work, we present a novel class of recursive neural networks built instead upon an analogy between QCD and natural languages. 
In the analogy, four-momenta are like words and the clustering history of sequential recombination jet algorithms is like the parsing of a sentence. Our approach works directly with the four-momenta of a variable-length set of particles, and the jet-based tree structure varies on an event-by-event basis. Our experiments highlight the flexibility of our method for building task-specific jet embeddings and show that recursive architectures are significantly more accurate and data efficient than previous image-based networks. We extend the analogy from individual jets (sentences) to full events (paragraphs), and show for the first time an event-level classifier operating on all the stable particles produced in an LHC event.", "Artificial intelligence offers the potential to automate challenging data-processing tasks in collider physics. To establish its prospects, we explore to what extent deep learning with convolutional neural networks can discriminate quark and gluon jets better than observables designed by physicists. Our approach builds upon the paradigm that a jet can be treated as an image, with intensity given by the local calorimeter deposits. We supplement this construction by adding color to the images, with red, green and blue intensities given by the transverse momentum in charged particles, transverse momentum in neutral particles, and pixel-level charged particle counts. Overall, the deep networks match or outperform traditional jet variables. We also find that, while various simulations produce different quark and gluon jets, the neural networks are surprisingly insensitive to these differences, similar to traditional observables. This suggests that the networks can extract robust physical information from imperfect simulations.", "Based on the established task of identifying boosted, hadronically decaying top quarks, we compare a wide range of modern machine learning approaches. We find that they are extremely powerful and great fun.", "Multivariate techniques based on engineered features have found wide adoption in the identification of jets resulting from hadronic top decays at the Large Hadron Collider (LHC). Recent Deep Learning developments in this area include the treatment of the calorimeter activation as an image or supplying a list of jet constituent momenta to a fully connected network. This latter approach lends itself well to the use of Recurrent Neural Networks. In this work the applicability of architectures incorporating Long Short-Term Memory (LSTM) networks is explored. Several network architectures, methods of ordering of jet constituents, and input pre-processing are studied. The best performing LSTM network achieves a background rejection of 100 for 50 signal efficiency. This represents more than a factor of two improvement over a fully connected Deep Neural Network (DNN) trained on similar types of inputs.", "We introduce a new and highly efficient tagger for hadronically decaying top quarks, based on a deep neural network working with Lorentz vectors and the Minkowski metric. With its novel machine learning setup and architecture it allows us to identify boosted top quarks not only from calorimeter towers, but also including tracking information. 
We show how the performance of our tagger compares with QCD-inspired and image-recognition approaches and find that it significantly increases the performance for strongly boosted top quarks.", "We apply computer vision with deep learning -- in the form of a convolutional neural network (CNN) -- to build a highly effective boosted top tagger. Previous work (the \"DeepTop\" tagger of ) has shown that a CNN-based top tagger can achieve comparable performance to state-of-the-art conventional top taggers based on high-level inputs. Here, we introduce a number of improvements to the DeepTop tagger, including architecture, training, image preprocessing, sample size and color pixels. Our final CNN top tagger outperforms BDTs based on high-level inputs by a factor of @math --3 or more in background rejection, over a wide range of tagging efficiencies and fiducial jet selections. As reference points, we achieve a QCD background rejection factor of 500 (60) at 50 top tagging efficiency for fully-merged (non-merged) top jets with @math in the 800--900 GeV (350--450 GeV) range. Our CNN can also be straightforwardly extended to the classification of other types of jets, and the lessons learned here may be useful to others designing their own deep NNs for LHC applications.", "Building on the notion of a particle physics detector as a camera and the collimated streams of high energy particles, or jets, it measures as an image, we investigate the potential of machine learning techniques based on deep learning architectures to identify highly boosted W bosons. Modern deep learning algorithms trained on jet images can out-perform standard physically-motivated feature driven approaches to jet tagging. We develop techniques for visualizing how these features are learned by the network and what additional information is used to improve performance. Finally, this interplay between physicallymotivated feature driven tools and supervised learning algorithms is general and can be used to significantly increase the sensitivity to discover new particles and new forces, and gain a deeper understanding of the physics within jets.", "Previous studies have demonstrated the utility and applicability of machine learning techniques to jet physics. In this paper, we construct new observables for the discrimination of jets from different originating particles exclusively from information identified by the machine. The approach we propose is to first organize information in the jet by resolved phase space and determine the effective @math -body phase space at which discrimination power saturates. This then allows for the construction of a discrimination observable from the @math -body phase space coordinates. A general form of this observable can be expressed with numerous parameters that are chosen so that the observable maximizes the signal vs. background likelihood. Here, we illustrate this technique applied to discrimination of @math decays from massive @math splittings. We show that for a simple parametrization, we can construct an observable that has discrimination power comparable to, or better than, widely-used observables motivated from theory considerations. For the case of jets on which modified mass-drop tagger grooming is applied, the observable that the machine learns is essentially the angle of the dominant gluon emission off of the @math pair.", "Machine learning based on convolutional neural networks can be used to study jet images from the LHC. 
Top tagging in fat jets offers a well-defined framework to establish our DeepTop approach and compare its performance to QCD-based top taggers. We first optimize a network architecture to identify top quarks in Monte Carlo simulations of the Standard Model production channel. Using standard fat jets we then compare its performance to a multivariate QCD-based top tagger. We find that both approaches lead to comparable performance, establishing convolutional networks as a promising new approach for multivariate hypothesis-based top tagging.", "Recent results at the Large Hadron Collider (LHC) have pointed to enhanced physics capabilities through the improvement of the real-time event processing techniques. Machine learning methods are ubiquitous and have proven to be very powerful in LHC physics, and particle physics as a whole. However, exploration of the use of such techniques in low-latency, low-power FPGA hardware has only just begun. FPGA-based trigger and data acquisition (DAQ) systems have extremely low, sub-microsecond latency requirements that are unique to particle physics. We present a case study for neural network inference in FPGAs focusing on a classifier for jet substructure which would enable, among many other physics scenarios, searches for new dark sector particles and novel measurements of the Higgs boson. While we focus on a specific example, the lessons are far-reaching. We develop a package based on High-Level Synthesis (HLS) called hls4ml to build machine learning models in FPGAs. The use of HLS increases accessibility across a broad user community and allows for a drastic decrease in firmware development time. We map out FPGA resource usage and latency versus neural network hyperparameters to identify the problems in particle physics that would benefit from performing neural network inference with FPGAs. For our example jet substructure model, we fit well within the available resources of modern FPGAs with a latency on the scale of 100 ns." ] }
1908.05217
2967452169
Recent advances in deep learning greatly boost the performance of object detection. State-of-the-art methods such as Faster-RCNN, FPN, and R-FCN have achieved high accuracy on challenging benchmark datasets. However, these methods require fully annotated object bounding boxes for training, which are incredibly hard to scale up due to the high annotation cost. Weakly-supervised methods, on the other hand, only require image-level labels for training, but their performance is far below that of their fully-supervised counterparts. In this paper, we propose a semi-supervised large scale fine-grained detection method, which only needs bounding box annotations for a smaller number of coarse-grained classes and image-level labels for large scale fine-grained classes, and can detect all classes at nearly fully-supervised accuracy. We achieve this by utilizing the correlations between coarse-grained and fine-grained classes with a shared backbone, soft-attention based proposal re-ranking, and a dual-level memory module. Experimental results show that our method achieves detection accuracy close to that of state-of-the-art fully-supervised methods on two large scale datasets, ImageNet and OpenImages, with only a small fraction of fully annotated classes.
There are only a few research works in the semi-supervised detection field. @cite_35 proposes an LSDA-based method that can handle disjoint-set semi-supervised detection, but this method is not end-to-end trainable and cannot be easily extended to state-of-the-art detection frameworks. @cite_20 proposes a semi-MIL method for disjoint-set semi-supervised detection, which achieves better performance than @cite_35 . Note-RCNN @cite_21 proposes a mining and training scheme for semi-supervised detection, but it needs seed boxes for all categories. YOLO 9000 @cite_41 can also be viewed as a semi-supervised detection framework, but it is no more than a naive combination of the detection and classification streams, relying only on the implicit shared feature learning in the network.
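A small sketch may help picture what soft-attention based proposal re-ranking can look like in this setting: region proposals produced by the detector for the annotated coarse-grained classes are re-weighted by their similarity to an embedding of the image-level fine-grained label. The dot-product similarity, the temperature parameter, and all dimensions below are hypothetical simplifications, not the authors' actual module.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def rerank_proposals(proposal_feats, objectness, class_embedding, temp=1.0):
    """Re-rank region proposals for a fine-grained class that has only
    image-level labels, using similarity to a class embedding as a
    soft-attention weight over the proposals.

    proposal_feats:  (P, D) pooled features from the shared backbone.
    objectness:      (P,) scores from the coarse-grained detector head.
    class_embedding: (D,) embedding of the image-level fine-grained label.
    """
    sim = proposal_feats @ class_embedding   # (P,) similarity logits
    attention = softmax(sim / temp)          # soft attention over proposals
    return attention * objectness            # re-ranked proposal scores

rng = np.random.default_rng(1)
feats = rng.normal(size=(100, 64))   # 100 proposals, 64-dim features
obj = rng.uniform(size=100)
emb = rng.normal(size=64)
scores = rerank_proposals(feats, obj, emb)
top5 = np.argsort(scores)[::-1][:5]  # indices of the 5 best proposals
```

The point of such a scheme is that the coarse-grained detector supplies localization quality while the image-level label supplies class evidence, so neither source alone has to solve the fine-grained detection task.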
{ "cite_N": [ "@cite_41", "@cite_35", "@cite_21", "@cite_20" ], "mid": [ "2570343428", "2418676188", "2964316443", "2962685835" ], "abstract": [ "We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. Using a novel, multi-scale training method the same YOLOv2 model can run at varying sizes, offering an easy tradeoff between speed and accuracy. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that dont have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. YOLO9000 predicts detections for more than 9000 different object categories, all in real-time.", "Deep CNN-based object detection systems have achieved remarkable success on several large-scale object detection benchmarks. However, training such detectors requires a large number of labeled bounding boxes, which are more difficult to obtain than image-level annotations. Previous work addresses this issue by transforming image-level classifiers into object detectors. This is done by modeling the differences between the two on categories with both imagelevel and bounding box annotations, and transferring this information to convert classifiers to detectors for categories without bounding box annotations. We improve this previous work by incorporating knowledge about object similarities from visual and semantic domains during the transfer process. The intuition behind our proposed method is that visually and semantically similar categories should exhibit more common transferable properties than dissimilar categories, e.g. a better detector would result by transforming the differences between a dog classifier and a dog detector onto the cat class, than would by transforming from the violin class. Experimental results on the challenging ILSVRC2013 detection dataset demonstrate that each of our proposed object similarity based knowledge transfer methods outperforms the baseline methods. We found strong evidence that visual similarity and semantic relatedness are complementary for the task, and when combined notably improve detection, achieving state-of-the-art detection performance in a semi-supervised setting.", "In recent years, the performance of object detection has advanced significantly with the evolution of deep convolutional neural networks. However, the state-of-the-art object detection methods still rely on accurate bounding box annotations that require extensive human labeling. Object detection without bounding box annotations, that is, weakly supervised detection methods, are still lagging far behind. 
As weakly supervised detection only uses image level labels and does not require the ground truth of bounding box location and label of each object in an image, it is generally very difficult to distill knowledge of the actual appearances of objects. Inspired by curriculum learning, this paper proposes an easy-to-hard knowledge transfer scheme that incorporates easy web images to provide prior knowledge of object appearance as a good starting point. While exploiting large-scale free web imagery, we introduce a sophisticated labor-free method to construct a web dataset with good diversity in object appearance. After that, semantic relevance and distribution relevance are introduced and utilized in the proposed curriculum training scheme. Our end-to-end learning with the constructed web data achieves remarkable improvement across most object classes, especially for the classes that are often considered hard in other works.", "We propose to revisit knowledge transfer for training object detectors on target classes from weakly supervised training images, helped by a set of source classes with bounding-box annotations. We present a unified knowledge transfer framework based on training a single neural network multi-class object detector over all source classes, organized in a semantic hierarchy. This generates proposals with scores at multiple levels in the hierarchy, which we use to explore knowledge transfer over a broad range of generality, ranging from class-specific (bycicle to motorbike) to class-generic (objectness to any class). Experiments on the 200 object classes in the ILSVRC 2013 detection dataset show that our technique (1) leads to much better performance on the target classes (70.3 CorLoc, 36.9 mAP) than a weakly supervised baseline which uses manually engineered objectness [11] (50.5 CorLoc, 25.4 mAP). (2) delivers target object detectors reaching 80 of the mAP of their fully supervised counterparts. (3) outperforms the best reported transfer learning results on this dataset (+41 CorLoc and +3 mAP over [18, 46], +16.2 mAP over [32]). Moreover, we also carry out several across-dataset knowledge transfer experiments [27, 24, 35] and find that (4) our technique outperforms the weakly supervised baseline in all dataset pairs by 1.5 A— - 1.9A—, establishing its general applicability." ] }
1908.05156
2966926636
The spectacular success of Bitcoin and Blockchain Technology in recent years has provided enough evidence that a widespread adoption of a common cryptocurrency system is not merely a distant vision, but a scenario that might come true in the near future. However, the presence of Bitcoin's obvious shortcomings such as excessive electricity consumption, unsatisfying transaction throughput, and large validation time (latency) makes it clear that a new, more efficient system is needed. We propose a protocol in which a set of nodes maintains and updates a linear ordering of transactions that are being submitted by users. Virtually every cryptocurrency system has such a protocol at its core, and it is the efficiency of this protocol that determines the overall throughput and latency of the system. We develop our protocol on the grounds of the well-established field of Asynchronous Byzantine Fault Tolerant (ABFT) systems. This allows us to formally reason about correctness, efficiency, and security in the strictest possible model, and thus convincingly prove the overall robustness of our solution. Our protocol improves upon the state-of-the-art HoneyBadgerBFT by reducing the asymptotic latency while matching the optimal communication complexity. Furthermore, in contrast to the above, our protocol does not require a trusted dealer thanks to a novel implementation of a trustless ABFT Randomness Beacon.
Atomic Broadcast. For an excellent introduction to the field of Distributed Computing and an overview of Atomic Broadcast and Consensus protocols, we refer the reader to the book @cite_53 . The more recent work @cite_6 surveys existing consensus protocols in the context of cryptocurrency systems.
{ "cite_N": [ "@cite_53", "@cite_6" ], "mid": [ "1794148987", "2726001757" ], "abstract": [ "In modern computing a program is usually distributed among several processes. The fundamental challenge when developing reliable and secure distributed programs is to support the cooperation of processes required to execute a common task, even when some of these processes fail. Failures may range from crashes to adversarial attacks by malicious processes.Cachin, Guerraoui, and Rodrigues present an introductory description of fundamental distributed programming abstractions together with algorithms to implement them in distributed systems, where processes are subject to crashes and malicious attacks. The authors follow an incremental approach by first introducing basic abstractions in simple distributed environments, before moving to more sophisticated abstractions and more challenging environments. Each core chapter is devoted to one topic, covering reliable broadcast, shared memory, consensus, and extensions of consensus. For every topic, many exercises and their solutions enhance the understanding This book represents the second edition of \"Introduction to Reliable Distributed Programming\". Its scope has been extended to include security against malicious actions by non-cooperating processes. This important domain has become widely known under the name \"Byzantine fault-tolerance\".", "A blockchain is a distributed ledger for recording transactions, maintained by many nodes without central authority through a distributed cryptographic protocol. All nodes validate the information to be appended to the blockchain, and a consensus protocol ensures that the nodes agree on a unique order in which entries are appended. Consensus protocols for tolerating Byzantine faults have received renewed attention because they also address blockchain systems. This work discusses the process of assessing and gaining confidence in the resilience of a consensus protocols exposed to faults and adversarial nodes. We advocate to follow the established practice in cryptography and computer security, relying on public reviews, detailed models, and formal proofs; the designers of several practical systems appear to be unaware of this. Moreover, we review the consensus protocols in some prominent permissioned blockchain platforms with respect to their fault models and resilience against attacks. The protocol comparison covers Hyperledger Fabric, Tendermint, Symbiont, R3 Corda, Iroha, Kadena, Chain, Quorum, MultiChain, Sawtooth Lake, Ripple, Stellar, and IOTA." ] }
1908.05156
2966926636
The spectacular success of Bitcoin and Blockchain Technology in recent years has provided enough evidence that a widespread adoption of a common cryptocurrency system is not merely a distant vision, but a scenario that might come true in the near future. However, the presence of Bitcoin's obvious shortcomings such as excessive electricity consumption, unsatisfying transaction throughput, and large validation time (latency) makes it clear that a new, more efficient system is needed. We propose a protocol in which a set of nodes maintains and updates a linear ordering of transactions that are being submitted by users. Virtually every cryptocurrency system has such a protocol at its core, and it is the efficiency of this protocol that determines the overall throughput and latency of the system. We develop our protocol on the grounds of the well-established field of Asynchronous Byzantine Fault Tolerant (ABFT) systems. This allows us to formally reason about correctness, efficiency, and security in the strictest possible model, and thus convincingly prove the overall robustness of our solution. Our protocol improves upon the state-of-the-art HoneyBadgerBFT by reducing the asymptotic latency while matching the optimal communication complexity. Furthermore, in contrast to the above, our protocol does not require a trusted dealer thanks to a novel implementation of a trustless ABFT Randomness Beacon.
In this paper we propose a different assumption on the transaction buffers that allows us to better demonstrate the capabilities of our protocol when it comes to latency. We assume that at every round the ratio between the lengths of the transaction buffers of any two honest nodes is at most a fixed constant. In this model, our protocol achieves @math latency, while a natural adaptation of HBBFT would achieve latency @math , thus again a factor- @math improvement. A qualitative improvement over HBBFT that we achieve in this paper is that we completely get rid of the trusted dealer assumption. We also note that the definitions of Atomic Broadcast differ slightly between this paper and @cite_5 : we achieve Censorship Resilience for a transaction assuming that it was input to even a single honest node, while in @cite_5 it has to be input to @math nodes.
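The bounded buffer-ratio assumption is easy to state operationally. The toy check below, with illustrative names only, demonstrates the condition itself; the paper's latency analysis builds on this assumption in ways not reproduced here.

```python
import random

def buffers_satisfy_ratio(buffers, c):
    """Check the assumption that for any two honest nodes i, j the
    ratio |buffer_i| / |buffer_j| is at most a fixed constant c."""
    lengths = [max(1, len(b)) for b in buffers]  # guard against empty buffers
    return max(lengths) <= c * min(lengths)

# Toy round: 4 honest nodes, each holding a pending-transaction buffer.
random.seed(0)
buffers = [[f"tx{random.randrange(10**6)}" for _ in range(n)]
           for n in (90, 100, 110, 120)]
assert buffers_satisfy_ratio(buffers, c=2.0)
```

Intuitively, the condition rules out rounds in which one honest node's backlog dwarfs another's, which is what lets the latency bound be stated per round rather than per backlog.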
{ "cite_N": [ "@cite_5" ], "mid": [ "2534313446" ], "abstract": [ "The surprising success of cryptocurrencies has led to a surge of interest in deploying large scale, highly robust, Byzantine fault tolerant (BFT) protocols for mission-critical applications, such as financial transactions. Although the conventional wisdom is to build atop a (weakly) synchronous protocol such as PBFT (or a variation thereof), such protocols rely critically on network timing assumptions, and only guarantee liveness when the network behaves as expected. We argue these protocols are ill-suited for this deployment scenario. We present an alternative, HoneyBadgerBFT, the first practical asynchronous BFT protocol, which guarantees liveness without making any timing assumptions. We base our solution on a novel atomic broadcast protocol that achieves optimal asymptotic efficiency. We present an implementation and experimental results to show our system can achieve throughput of tens of thousands of transactions per second, and scales to over a hundred nodes on a wide area network. We even conduct BFT experiments over Tor, without needing to tune any parameters. Unlike the alternatives, HoneyBadgerBFT simply does not care about the underlying network." ] }
1908.04933
2968171141
Re-Pair is a grammar compression scheme with favorably good compression rates. The computation of Re-Pair comes with the cost of maintaining large frequency tables, which makes it hard to compute Re-Pair on large scale data sets. As a solution for this problem we present, given a text of length @math whose characters are drawn from an integer alphabet, an @math time algorithm computing Re-Pair in @math bits of space including the text space, where @math is the number of terminals and non-terminals. The algorithm works in the restore model, supporting the recovery of the original input in the time for the Re-Pair computation with @math additional bits of working space. We give variants of our solution working in parallel or in the external memory model.
In-Place String Algorithms. For the LZ77 factorization @cite_7 , an algorithm has been presented that computes this factorization with O(n/d) words on top of the input space in O(dn) time for a variable d, achieving O(1) words with O(n^2) time. For the suffix sorting problem, an algorithm was given that computes the suffix array with O(n) bits on top of the output in O(n) time, provided that each character of the alphabet is present in the text. This condition was later improved to alphabet sizes of at most @math . Finally, it was shown how to transform a text into its Burrows-Wheeler transform using O(n) additional bits; this algorithm was subsequently extended to compute the LCP array simultaneously, again with O(n) bits of additional working space.
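For reference, the Re-Pair scheme that this line of work targets is simple to state: repeatedly replace a most frequent pair of adjacent symbols by a fresh non-terminal until no pair occurs twice. The naive Python sketch below illustrates the scheme only; it rebuilds a full pair counter in every round, whereas the point of the paper is to produce the same grammar within the small working space stated in the abstract.

```python
from collections import Counter

def re_pair(seq):
    """Naive Re-Pair grammar compression.

    Repeatedly replaces a most frequent pair of adjacent symbols by a
    fresh non-terminal until no pair occurs twice. Returns the final
    sequence and the list of rules (non_terminal -> pair).
    """
    seq = list(seq)
    rules = []
    next_nt = 0
    while True:
        # Note: counts include overlapping occurrences, which can slightly
        # overestimate frequencies within runs; acceptable for a sketch.
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        nt = ("N", next_nt)          # fresh non-terminal symbol
        next_nt += 1
        rules.append((nt, pair))
        # Left-to-right replacement of non-overlapping occurrences.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

compressed, grammar = re_pair("abracadabra")
```

Expanding the recorded rules in reverse order reconstructs the original string, which is the kind of recovery the restore model mentioned in the abstract asks for.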
{ "cite_N": [ "@cite_7" ], "mid": [ "2911767650" ], "abstract": [ "The compression is an important topic in computer science which allows we to storage more amount of data on our data storage. There are several techniques to compress any file. In this manuscript will be described the most important algorithm to compress images such as JPEG and it will be compared with another method to retrieve good reason to not use this method on images. So to compress the text the most encoding technique known is the Huffman Encoding which it will be explained in exhaustive way. In this manuscript will showed how to compute a text compression method on images in particular the method and the reason to choice a determinate image format against the other. The method studied and analyzed in this manuscript is the Re-Pair algorithm which is purely for grammatical context to be compress. At the and it will be showed the good result of this application." ] }