Dataset schema: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
1907.11481
2966777976
Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope.
The interactive linking of text and visualizations has only been explored to some extent. Beck and Weiskopf @cite_36 propose an abstract interaction model for documents containing text, word-sized graphics, and regular visualizations; all three types of data representations are linked via brushing-and-linking. @cite_8 describe an authoring solution for web documents to produce some of those interactions. Our interaction model also uses and extends the model by Beck and Weiskopf. @cite_57 advocate for linking to facilitate document reading. In their approach, linking is supported between text in the main body and text in tables. A few of the systems that generate both text and visualization---for instance, @cite_31 and @cite_28 ---discuss interactions, but they still focus more on explanations and offer limited data exploration. @cite_27 , in contrast, focuses more on interactions and supports the data exploration process by offering short descriptions of key findings in the data. However, it does not generate a comprehensive report with longer descriptions.
{ "cite_N": [ "@cite_8", "@cite_36", "@cite_28", "@cite_57", "@cite_27", "@cite_31" ], "mid": [ "2889330378", "2590534375", "2923187851", "2897132999", "2888660171", "2905178843" ], "abstract": [ "", "Generating visualizations at the size of a word creates dense information representations often called sparklines . The integration of word-sized graphics into text could avoid additional cognitive load caused by splitting the readers’ attention between figures and text. In scientific publications, these graphics make statements easier to understand and verify because additional quantitative information is available where needed. In this work, we perform a literature review to find out how researchers have already applied such word-sized representations. Illustrating the versatility of the approach, we leverage these representations for reporting empirical and bibliographic data in three application examples. For interactive Web-based publications, we explore levels of interactivity and discuss interaction patterns to link visualization and text. We finally call the visualization community to be a pioneer in exploring new visualization-enriched and interactive publication formats.", "Abstract Bivariate map visualizations use different encodings to visualize two variables but comparison across multiple encodings is challenging. Compared to a univariate visualization, it is significantly harder to read regional differences and spot geographical outliers. Especially targeting inexperienced users of visualizations, we advocate the use of natural language text for augmenting map visualizations and understanding the relationship between two geo-statistical variables. We propose an approach that selects interesting findings from data analysis, generates a respective text and visualization, and integrates both into a single document. The generated reports interactively link the visualization with the textual narrative. Users can get additional explanations and have the ability to compare different regions. The text generation process is flexible and adapts to various geographical and contextual settings based on small sets of parameters. We showcase this flexibility through a number of application examples.", "Document authors commonly use tables to support arguments presented in the text. But, because tables are usually separate from the main body text, readers must split their attention between different parts of the document. We present an interactive document reader that automatically links document text with corresponding table cells. Readers can select a sentence (or tables cells) and our reader highlights the relevant table cells (or sentences). We provide an automatic pipeline for extracting such references between sentence text and table cells for existing PDF documents that combines structural analysis of tables with natural language processing and rule-based matching. On a test corpus of 330 (sentence, table) pairs, our pipeline correctly extracts 48.8 of the references. An additional 30.5 contain only false negatives (FN) errors -- the reference is missing table cells. The remaining 20.7 contain false positives (FP) errors -- the reference includes extraneous table cells and could therefore mislead readers. 
A user study finds that despite such errors, our interactive document reader helps readers match sentences with corresponding table cells more accurately and quickly than a baseline document reader.", "Recently, an increasing number of visualization systems have begun to incorporate natural language generation (NLG) capabilities into their interfaces. NLG-based visualization systems typically leverage a suite of statistical functions to automatically extract key facts about the underlying data and surface them as natural language sentences alongside visualizations. With current systems, users are typically required to read the system-generated sentences and mentally map them back to the accompanying visualization. However, depending on the features of the visualization (e.g., visualization type, data density) and the complexity of the data fact, mentally mapping facts to visualizations can be a challenging task. Furthermore, more than one visualization could be used to illustrate a single data fact. Unfortunately, current tools provide little or no support for users to explore such alternatives. In this paper, we explore how system-generated data facts can be treated as interactive widgets to help users interpret visualizations and communicate their findings. We present Voder , a system that lets users interact with automatically-generated data facts to explore both alternative visualizations to convey a data fact as well as a set of embellishments to highlight a fact within a visualization. Leveraging data facts as interactive widgets, Voder also facilitates data fact-based visualization search. To assess Voder's design and features, we conducted a preliminary user study with 12 participants having varying levels of experience with visualization tools. Participant feedback suggested that interactive data facts aided them in interpreting visualizations. Participants also stated that the suggestions surfaced through the facts helped them explore alternative visualizations and embellishments to communicate individual data facts.", "Publication records and collaboration networks are important for assessing the expertise and experience of researchers. Existing digital libraries show the raw publication lists in author profiles, whereas visualization techniques focus on specific subproblems. Instead, we look at publication records from various perspectives mixing low-level publication data with high-level abstractions and background information. This work presents VIS Author Profiles, a novel approach to generate integrated textual and visual descriptions to highlight patterns in publication records. We leverage template-based natural language generation to summarize notable publication statistics, evolution of research topics, and collaboration relationships. Seamlessly integrated visualizations augment the textual description and are interactively connected with each other and the text. The underlying publication data and detailed explanations of the analysis are available on demand. We compare our approach to existing systems by taking into account information needs of users and demonstrate its usefulness in two realistic application examples." ] }
1907.11481
2966777976
Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope.
In summary, although existing approaches present source code information, they fall short in putting the data into context and providing explanations. No system, within or outside the software engineering community, supports exploranation as a process that blends explanation and exploration in the way we envision, leveraging the interactive combination of textual and visual descriptions. We are inspired by the abstract idea of interactively linking text and visualizations by Beck and Weiskopf @cite_36 to support exploranation. We adopt the CK, QMOOD, and McCabe metrics (listed in Table ) and use them in combination with pre-defined thresholds to analyze and present source code quality.
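To make the threshold-based assessment concrete, here is a minimal Python sketch; the metric names and threshold values are illustrative assumptions, not the configuration from the paper's Table:

```python
# Hypothetical thresholds for a few CK/McCabe metrics; the values actually
# used in the paper are defined in its Table, which is not reproduced here.
THRESHOLDS = {
    "wmc": 20,          # CK: weighted methods per class
    "cbo": 10,          # CK: coupling between objects
    "cyclomatic": 10,   # McCabe: cyclomatic complexity
}

def assess_class(metrics: dict) -> dict:
    """Flag each metric of a class as 'ok' or 'critical' against its threshold."""
    return {
        name: ("critical" if value > THRESHOLDS[name] else "ok")
        for name, value in metrics.items()
        if name in THRESHOLDS
    }

print(assess_class({"wmc": 35, "cbo": 4, "cyclomatic": 12}))
# {'wmc': 'critical', 'cbo': 'ok', 'cyclomatic': 'critical'}
```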
{ "cite_N": [ "@cite_36" ], "mid": [ "2590534375" ], "abstract": [ "Generating visualizations at the size of a word creates dense information representations often called sparklines . The integration of word-sized graphics into text could avoid additional cognitive load caused by splitting the readers’ attention between figures and text. In scientific publications, these graphics make statements easier to understand and verify because additional quantitative information is available where needed. In this work, we perform a literature review to find out how researchers have already applied such word-sized representations. Illustrating the versatility of the approach, we leverage these representations for reporting empirical and bibliographic data in three application examples. For interactive Web-based publications, we explore levels of interactivity and discuss interaction patterns to link visualization and text. We finally call the visualization community to be a pioneer in exploring new visualization-enriched and interactive publication formats." ] }
1907.11519
2966168818
Making a single network effectively address diverse contexts---learning the variations within a dataset or multiple datasets---is an intriguing step towards achieving generalized intelligence. Existing approaches of deepening, widening, and assembling networks are not cost-effective in general. In view of this, networks that can allocate resources according to the context of the input and regulate the flow of information across the network are effective. In this paper, we present Context-Aware Multipath Network (CAMNet), a multi-path neural network with data-dependent routing between parallel tensors. We show that our model performs as a generalized model capturing variations in individual datasets and multiple different datasets, both simultaneously and sequentially. CAMNet surpasses the performance of classification and pixel-labeling tasks in comparison with the equivalent single-path, multi-path, and deeper single-path networks, considering datasets individually, sequentially, and in combination. The data-dependent routing between tensors in CAMNet enables the model to control the flow of information end-to-end, deciding which resources should be common or domain-specific.
Using deep networks to generalize to a wide range of datasets is common practice. Although depth influences the performance of a neural network, training such networks is difficult, and the general intelligence of this kind of network is questionable @cite_23 . Hence, more attention is being paid to generally intelligent neural networks. One such approach is to harvest more information within a layer of a neural network. Capsule networks @cite_9 @cite_19 extract information about pose and orientation: instead of convolutional scalars, the layer outputs are vectors and matrices.
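As a concrete illustration of vector-valued layer outputs, the following is a minimal NumPy sketch of the squash non-linearity used in the dynamic-routing formulation of capsule networks @cite_19 ; the shapes are assumed for illustration:

```python
import numpy as np

def squash(s: np.ndarray, axis: int = -1, eps: float = 1e-8) -> np.ndarray:
    """Squash capsule vectors so their length lies in [0, 1) and encodes
    the probability that the entity represented by the capsule is present."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

caps = np.random.randn(10, 16)     # 10 capsules, 16-dimensional each
v = squash(caps)
print(np.linalg.norm(v, axis=-1))  # all lengths fall in [0, 1)
```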
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_23" ], "mid": [ "2963703618", "2785994986", "" ], "abstract": [ "A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.", "A capsule is a group of neurons whose outputs represent different properties of the same entity. Each layer in a capsule network contains many capsules [a group of capsules forms a capsule layer and can be used in place of a traditional layer in a neural net]. We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the relationship between that entity and the viewer (the pose). A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships. Each of these votes is weighted by an assignment coefficient. These coefficients are iteratively updated for each image using the Expectation-Maximization algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes. The transformation matrices are trained discriminatively by backpropagating through the unrolled iterations of EM between each pair of adjacent capsule layers. On the smallNORB benchmark, capsules reduce the number of test errors by 45 compared to the state-of-the-art. Capsules also show far more resistance to white box adversarial attack than our baseline convolutional neural network.", "" ] }
1907.11519
2966168818
Making a single network effectively address diverse contexts---learning the variations within a dataset or multiple datasets---is an intriguing step towards achieving generalized intelligence. Existing approaches of deepening, widening, and assembling networks are not cost-effective in general. In view of this, networks that can allocate resources according to the context of the input and regulate the flow of information across the network are effective. In this paper, we present Context-Aware Multipath Network (CAMNet), a multi-path neural network with data-dependent routing between parallel tensors. We show that our model performs as a generalized model capturing variations in individual datasets and multiple different datasets, both simultaneously and sequentially. CAMNet surpasses the performance of classification and pixel-labeling tasks in comparison with the equivalent single-path, multi-path, and deeper single-path networks, considering datasets individually, sequentially, and in combination. The data-dependent routing between tensors in CAMNet enables the model to control the flow of information end-to-end, deciding which resources should be common or domain-specific.
Multitask learning (MTL) in computer vision refers to performing multiple different tasks on a single input (e.g., semantic segmentation and surface-normal prediction). Conventional approaches to MTL include using some shared layers along with task-specific layers. Choosing the numbers of task-specific and shared layers is task-dependent. Recent approaches avoid searching over the possible combinations by letting the model learn how to use shared and task-specific layers according to the task. Cross-stitch networks @cite_44 and sluice networks @cite_10 share resources between parallel networks, where communication between parallel layers is realized by learning a linear combination of the parallel tensors. NDDR-CNN @cite_20 uses discriminative dimensionality reduction to fuse features from parallel tensors.
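A minimal NumPy sketch of the linear-combination idea behind a cross-stitch unit @cite_44 : activations of two parallel networks are mixed through a small matrix that would normally be learned jointly with the rest of the network (the values below are hand-set for illustration):

```python
import numpy as np

def cross_stitch(x_a: np.ndarray, x_b: np.ndarray, alpha: np.ndarray):
    """Mix two parallel feature maps with a 2x2 mixing matrix,
    applied element-wise at each spatial/channel position."""
    x_a_new = alpha[0, 0] * x_a + alpha[0, 1] * x_b
    x_b_new = alpha[1, 0] * x_a + alpha[1, 1] * x_b
    return x_a_new, x_b_new

# Mostly task-private paths with a small amount of sharing (illustrative
# values; in the cited networks alpha is trained with the other weights).
alpha = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
a, b = np.ones((4, 4)), np.zeros((4, 4))
a2, b2 = cross_stitch(a, b, alpha)
print(a2[0, 0], b2[0, 0])  # 0.9 0.1
```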
{ "cite_N": [ "@cite_44", "@cite_10", "@cite_20" ], "mid": [ "1899309388", "2966182616", "2964247799" ], "abstract": [ "In the past few years, convolutional neural nets (CNN) have shown incredible promise for learning visual representations. In this paper, we use CNNs for the task of predicting surface normals from a single image. But what is the right architecture? We propose to build upon the decades of hard work in 3D scene understanding to design a new CNN architecture for the task of surface normal estimation. We show that incorporating several constraints (man-made, Manhattan world) and meaningful intermediate representations (room layout, edge labels) in the architecture leads to state of the art performance on surface normal estimation. We also show that our network is quite robust and show state of the art results on other datasets as well without any fine-tuning.", "Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)–(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15 average error reductions over common approaches to MTL.", "" ] }
1907.11519
2966168818
Making a single network effectively address diverse contexts---learning the variations within a dataset or multiple datasets---is an intriguing step towards achieving generalized intelligence. Existing approaches of deepening, widening, and assembling networks are not cost-effective in general. In view of this, networks that can allocate resources according to the context of the input and regulate the flow of information across the network are effective. In this paper, we present Context-Aware Multipath Network (CAMNet), a multi-path neural network with data-dependent routing between parallel tensors. We show that our model performs as a generalized model capturing variations in individual datasets and multiple different datasets, both simultaneously and sequentially. CAMNet surpasses the performance of classification and pixel-labeling tasks in comparison with the equivalent single-path, multi-path, and deeper single-path networks, considering datasets individually, sequentially, and in combination. The data-dependent routing between tensors in CAMNet enables the model to control the flow of information end-to-end, deciding which resources should be common or domain-specific.
Lifelong learning involves learning from multiple datasets one after the other. Conventional approaches include fine-tuning @cite_12 and feature extraction @cite_14 , which suffer from catastrophic forgetting. Rebuffi et al. @cite_7 introduced incremental classification, as opposed to batch training, to overcome catastrophic forgetting. Learning without Forgetting (LwF) @cite_47 and Elastic Weight Consolidation (EWC) @cite_39 are two further approaches that address this issue by modifying the objective function. In contrast, PackNet @cite_21 and Piggyback @cite_38 apply binary masks to dense weight filters once a dataset has been trained, freeing up the least-used weights to learn from the next dataset. However, these approaches require larger filters depending on the number of datasets to be trained sequentially.
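A minimal sketch of the objective-function modification in the spirit of EWC @cite_39 : a quadratic penalty anchors the parameters that were important for earlier tasks. The Fisher values, weights, and task loss below are illustrative stand-ins:

```python
import numpy as np

def ewc_loss(task_loss: float, theta: np.ndarray, theta_old: np.ndarray,
             fisher: np.ndarray, lam: float = 100.0) -> float:
    """EWC-style objective: new-task loss plus a quadratic penalty that slows
    learning on weights the Fisher information marks as important for old tasks."""
    penalty = 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)
    return task_loss + penalty

theta_old = np.array([0.5, -1.2, 2.0])   # weights after the previous task
theta     = np.array([0.6, -1.0, 0.5])   # current weights
fisher    = np.array([0.9, 0.1, 0.8])    # illustrative importance estimates
print(ewc_loss(task_loss=0.3, theta=theta, theta_old=theta_old, fisher=fisher))
```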
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_7", "@cite_21", "@cite_39", "@cite_47", "@cite_12" ], "mid": [ "2791091755", "2155541015", "2964189064", "2963072899", "2560647685", "2473930607", "2102605133" ], "abstract": [ "This work presents a method for adapting a single, fixed deep neural network to multiple tasks without affecting performance on already learned tasks. By building upon ideas from network quantization and pruning, we learn binary masks that “piggyback” on an existing network, or are applied to unmodified weights of that network to provide good performance on a new task. These masks are learned in an end-to-end differentiable fashion, and incur a low overhead of 1 bit per network parameter, per task. Even though the underlying network is fixed, the ability to mask individual weights allows for the learning of a large number of filters. We show performance comparable to dedicated fine-tuned networks for a variety of classification tasks, including those with large domain shifts from the initial task (ImageNet), and a variety of network architectures. Our performance is agnostic to task ordering and we do not suffer from catastrophic forgetting or competition between tasks.", "We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be repurposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.", "A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail.", "This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting. Inspired by network pruning techniques, we exploit redundancies in large deep networks to free up parameters that can then be employed to learn new tasks. 
By performing iterative pruning and network re-training, we are able to sequentially \"pack\" multiple tasks into a single network while ensuring minimal drop in performance and minimal storage overhead. Unlike prior work that uses proxy losses to maintain accuracy on older tasks, we always optimize for the task at hand. We perform extensive experiments on a variety of network architectures and large-scale datasets, and observe much better robustness against catastrophic forgetting than prior work. In particular, we are able to add three fine-grained classification tasks to a single ImageNet-trained VGG-16 network and achieve accuracies close to those of separately trained networks for each task.", "Abstract The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.", "When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new task performance.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn." ] }
1907.11519
2966168818
Making a single network effectively address diverse contexts---learning the variations within a dataset or multiple datasets---is an intriguing step towards achieving generalized intelligence. Existing approaches of deepening, widening, and assembling networks are not cost-effective in general. In view of this, networks that can allocate resources according to the context of the input and regulate the flow of information across the network are effective. In this paper, we present Context-Aware Multipath Network (CAMNet), a multi-path neural network with data-dependent routing between parallel tensors. We show that our model performs as a generalized model capturing variations in individual datasets and multiple different datasets, both simultaneously and sequentially. CAMNet surpasses the performance of classification and pixel-labeling tasks in comparison with the equivalent single-path, multi-path, and deeper single-path networks, considering datasets individually, sequentially, and in combination. The data-dependent routing between tensors in CAMNet enables the model to control the flow of information end-to-end, deciding which resources should be common or domain-specific.
Approaches that gradually build customized networks according to the input also inspire our research. ConvNet-AIG @cite_33 and BlockDrop @cite_13 were introduced for data-dependent selection of residual blocks in a deep network, as alternatives to conventional Residual Networks @cite_31 . These approaches learn which residual blocks to keep according to the nature of the input.
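A minimal sketch of the data-dependent gating idea shared by these approaches: execute a residual block only when a gate computed from the input says it is needed. The gate and block below are toy stand-ins, not the Gumbel-softmax gates of ConvNet-AIG or the policy network of BlockDrop:

```python
import numpy as np

def gate(x: np.ndarray) -> bool:
    """Toy input-dependent gate; the cited methods learn this decision."""
    return float(np.mean(x)) > 0.0

def residual_block(x: np.ndarray) -> np.ndarray:
    return np.tanh(x)  # stand-in for a conv-BN-ReLU residual branch

def gated_residual(x: np.ndarray) -> np.ndarray:
    # Execute the block only when the gate fires; otherwise pass x through.
    return (x + residual_block(x)) if gate(x) else x

x = np.random.randn(8)
print(gated_residual(x))
```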
{ "cite_N": [ "@cite_31", "@cite_13", "@cite_33" ], "mid": [ "2194775991", "2962944050", "2884751099" ], "abstract": [ "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20 on average, going as high as 36 for some images, while maintaining the same 76.4 top-1 accuracy on ImageNet.", "Do convolutional networks really need a fixed feed-forward structure? What if, after identifying the high-level concept of an image, a network could move directly to a layer that can distinguish fine-grained differences? Currently, a network would first need to execute sometimes hundreds of intermediate layers that specialize in unrelated aspects. Ideally, the more a network already knows about an image, the better it should be at deciding which layer to compute next. In this work, we propose convolutional networks with adaptive inference graphs (ConvNet-AIG) that adaptively define their network topology conditioned on the input image. Following a high-level structure similar to residual networks (ResNets), ConvNet-AIG decides for each input image on the fly which layers are needed. In experiments on ImageNet we show that ConvNet-AIG learns distinct inference graphs for different categories. 
Both ConvNet-AIG with 50 and 101 layers outperform their ResNet counterparts, while using 20% and 33% less computation, respectively. By grouping parameters into layers for related classes and only executing relevant layers, ConvNet-AIG improves both efficiency and overall classification quality. Lastly, we also study the effect of adaptive inference graphs on the susceptibility towards adversarial examples. We observe that ConvNet-AIG shows a higher robustness than ResNets, complementing other known defense mechanisms." ] }
1907.11416
2966213978
In this article, we study the generalized liar's dominating set problem in graphs. Let @math be a simple undirected graph. The generalized liar's dominating set, called the distance- @math @math -liar's dominating set, is a subset @math such that (i) each vertex in @math is distance- @math dominated by at least @math vertices in @math , and (ii) each pair of distinct vertices in @math is distance- @math dominated by at least @math vertices in @math , where @math are positive integers and @math . Here, a vertex @math being distance- @math dominated by another vertex @math means that the shortest path distance between @math and @math is at most @math in @math . We first consider the distance-1 @math -liar's dominating set problem and prove that it is NP-complete. Next, we consider the distance- @math @math -liar's dominating set problem and show that it is also NP-complete. These problems are generalized versions of the liar's dominating set problem, as researchers have studied only the distance- @math @math -liar's dominating set problem in the literature. We also prove that (i) the distance-1 @math -liar's dominating set problem cannot be approximated within a factor of @math for any @math , unless NP @math DTIME @math , and (ii) the distance- @math @math -liar's dominating set problem cannot be approximated within a factor of @math for any @math , unless NP @math DTIME @math .
@cite_0 studied the approximability of the problem in general graphs and gave an @math -factor approximation algorithm, where @math is the maximum degree of the given graph. Panda and Paul @cite_9 also considered the problem for proper interval graphs and proposed a linear-time algorithm. They further studied the minimum distance- @math @math -LDS problem for bounded-degree graphs and @math -claw-free graphs @cite_0 . Sterling @cite_8 presented bounds on the liar's domination number by studying the problem on two-dimensional grid graphs.
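A minimal Python sketch that checks the two classic distance-1 liar's domination conditions on a small graph (a verifier for the definition only, not one of the cited algorithms):

```python
from itertools import combinations

def closed_neighborhood(adj: dict, v) -> set:
    return {v} | set(adj[v])

def is_liars_dominating_set(adj: dict, L: set) -> bool:
    """Check the classic distance-1 liar's domination conditions:
    (i)  |N[v] ∩ L| >= 2 for every vertex v, and
    (ii) |(N[u] ∪ N[v]) ∩ L| >= 3 for every pair of distinct vertices u, v."""
    verts = list(adj)
    if any(len(closed_neighborhood(adj, v) & L) < 2 for v in verts):
        return False
    return all(
        len((closed_neighborhood(adj, u) | closed_neighborhood(adj, v)) & L) >= 3
        for u, v in combinations(verts, 2)
    )

# A 4-cycle 0-1-2-3-0 as an adjacency list.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(is_liars_dominating_set(adj, {0, 1}))     # False: vertex 3 sees only one
print(is_liars_dominating_set(adj, {0, 1, 2}))  # True
```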
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_8" ], "mid": [ "2080619861", "1993437988", "120391622" ], "abstract": [ "A subset L ? V of a graph G = ( V , E ) is called a liar's dominating set of G if (i) | N G u ] ? L | ? 2 for every vertex u ? V , and (ii) | ( N G u ] ? N G v ] ) ? L | ? 3 for every pair of distinct vertices u , v ? V . The Min Liar Dom Set problem is to find a liar's dominating set of minimum cardinality of a given graph G and the Decide Liar Dom Set problem is the decision version of the Min Liar Dom Set problem. The Decide Liar Dom Set problem is known to be NP-complete for general graphs. In this paper, we first present approximation algorithms and hardness of approximation results of the Min Liar Dom Set problem in general graphs, bounded degree graphs, and p-claw free graphs. We then show that the Decide Liar Dom Set problem is NP-complete for doubly chordal graphs and propose a linear time algorithm for computing a minimum liar's dominating set in block graphs.", "Let G=(V,E) be a graph without isolated vertices and having at least 3 vertices. A set L@?V(G) is a liar@?s dominating set if (1) |N\"G[v]@?L|>=2 for all v@?V(G), and (2) |(N\"G[u]@?N\"G[v])@?L|>=3 for every pair u,v@?V(G) of distinct vertices in G, where N\"G[x]= y@?V|xy@?E @? x is the closed neighborhood of x in G. Given a graph G and a positive integer k, the liar@?s domination problem is to check whether G has a liar@?s dominating set of size at most k. The liar@?s domination problem is known to be NP-complete for general graphs. In this paper, we propose a linear time algorithm for computing a minimum cardinality liar@?s dominating set in a proper interval graph. We also strengthen the NP-completeness result of liar@?s domination problem for general graphs by proving that the problem remains NP-complete even for undirected path graphs which is a super class of proper interval graphs.", "" ] }
1907.11569
2965957874
Research on neural networks has gained significant momentum over the past few years. A plethora of neural networks is currently being trained on available data in research as well as in industry. Because training is a resource-intensive process and training data cannot always be made available to everyone, there has been a recent trend to attempt to re-use already-trained neural networks. As such, neural networks themselves have become research data. In this paper, we present the Neural Network Ontology, an ontology to make neural networks findable, accessible, interoperable and reusable as suggested by the well-established FAIR guiding principles for scientific data management and stewardship. We created the new FAIRnets Dataset that comprises about 2,000 neural networks openly accessible on the internet and uses the Neural Network Ontology to semantically annotate and represent the neural networks. For each of the neural networks in the FAIRnets Dataset, the relevant properties according to the Neural Network Ontology such as the description and the architecture are stored. Ultimately, the FAIRnets Dataset can be queried with a set of desired properties and responds with a set of neural networks that have these properties. We provide the service FAIRnets Search which is implemented on top of a SPARQL endpoint and allows for querying, searching and finding trained neural networks annotated with the Neural Network Ontology. The service is demonstrated by a browser-based frontend to the SPARQL endpoint.
In recent years, neural networks have been applied as a machine learning method to improve ontologies. They are used to align @cite_1 @cite_6 @cite_2 , match @cite_13 @cite_21 , or map ontologies @cite_4 @cite_19 . Furthermore, ontologies have been combined with neural networks to solve various problems @cite_18 @cite_15 . However, there is no standard ontology to describe neural networks. There exists an ontology that focuses on the description of weights @cite_17 , but it does not fulfill the Linked Data principles.
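A minimal sketch of the similarity-aggregation idea that recurs in the cited alignment and matching systems: elementary similarity measures between two ontology terms are combined into one score. The two measures and the weights below are illustrative stand-ins for the trained combiners used in those works:

```python
import numpy as np

def combined_similarity(term_a: str, term_b: str, weights: np.ndarray) -> float:
    """Aggregate elementary similarity measures into one matching score.
    Both measures are simple stand-ins for the lexical and structural
    matchers combined by the cited systems."""
    chars_a, chars_b = set(term_a.lower()), set(term_b.lower())
    lexical = len(chars_a & chars_b) / max(len(chars_a | chars_b), 1)
    shared_prefix = float(term_a.lower()[:3] == term_b.lower()[:3])
    features = np.array([lexical, shared_prefix])
    # A trained combiner would learn these weights; here they are assumed.
    return float(1.0 / (1.0 + np.exp(-weights @ features)))

print(combined_similarity("Author", "AuthorOf", np.array([2.0, 1.5])))
```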
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_21", "@cite_1", "@cite_6", "@cite_19", "@cite_2", "@cite_15", "@cite_13", "@cite_17" ], "mid": [ "2146235381", "2104411079", "2148985136", "1984978840", "1965078122", "2007226769", "1591339381", "2953208595", "161226219", "" ], "abstract": [ "In order to stimulate innovation during the collaborative process of new product and production development, especially to avoid duplicating existing techniques or infringing upon others’ patents and intellectual property rights, the collaborative team of research and development, and patent engineers must accurately identify relevant patent knowledge in a timely manner. This research develops a novel knowledge management approach using ontology-based artificial neural network (ANN) algorithm to automatically classify and search knowledge documents stored in huge online patent corpuses. This research focuses on developing a smart and semantic oriented classification and search from the sources of the most critical and well-structured knowledge publications, i.e. patents, to gain valuable and practical references for the collaborative networks of technology-centric product and production development teams. The research uses the domain ontology schema created using Protege and derives the semantic concept p...", "System interoperability is a well known issue, especially for heterogeneous information systems, where ontology- based representations may support automatic and user- transparent integration. In this paper we present X-SOM: an ontology mapping and integration tool. The contribution of our tool is a modular and extensible architecture that automatically combines several matching techniques by means of a neural network, performing also ontology debugging to avoid inconsistencies. Besides describing the tool components, we discuss the prototype implementation, which has been tested against the OAEI 2006 benchmark with promising results.", "With the emergence of the Semantic Web several domain ontologies were developed, which varied not only in their structure but also in the natural language used to define them. The lack of an integrated view of all web nodes and the existence of heterogeneous domain ontologies drive new challenges in the discovery of knowledge resources which are relevant to a user's request. New approaches have recently appeared for developing web intelligence and helping users avoid irrelevant results on the web. However, there remains some work to be done. This work makes a contribution by presenting an ANN-based ontology matching model for knowledge source discovery on the Semantic Web. Experimental results obtained on a real case study have shown that this model provides satisfactory responses.", "Achieving high match accuracy for a large variety of ontologies, considering a single matcher is often not sufficient for high match quality. Therefore, combining the corresponding weights for different semantic aspects, reflecting their different importance or contributions becomes unavoidable for ontology matching. Combining multiple measures into a single similarity metric has been traditionally solved using weights determined manually by an expert, or calculated through general methods e.g. average or sigmoid function, however this does not provide a flexible and self-configuring matching tool. 
In this paper, an intelligent combination using Artificial Neural Network ANN as a machine learning-based method to ascertain how to combine multiple similarity measures into a single aggregated metric with the final aim of improving the ontology alignment quality is proposed. XMap++ is applied to benchmark and anatomy tests at OAEI campaign 2012. Results show that neural network boosts the performance in most cases, and that the proposed novel approach is competitive with top-ranked system.", "Background Being formal, declarative knowledge representation models, ontologies help to address the problem of imprecise terminologies in biological and biomedical research. However, ontologies constructed under the auspices of the Open Biomedical Ontologies (OBO) group have exhibited a great deal of variety, because different parties can design ontologies according to their own conceptual views of the world. It is therefore becoming critical to align ontologies from different parties. During automated semi-automated alignment across biological ontologies, different semantic aspects, i.e., concept name, concept properties, and concept relationships, contribute in different degrees to alignment results. Therefore, a vector of weights must be assigned to these semantic aspects. It is not trivial to determine what those weights should be, and current methodologies depend a lot on human heuristics.", "Ontology mapping seeks to find semantic correspondences between similar elements of different ontologies. It is a key challenge to achieve semantic interoperability in building the Semantic Web. This paper proposes a new generic and adaptive ontology mapping approach, called the PRIOR+, based on propagation theory, information retrieval techniques and artificial intelligence. The approach consists of three major modules, i.e., the IR-based similarity generator, the adaptive similarity filter and weighted similarity aggregator, and the neural network based constraint satisfaction solver. The approach first measures both linguistic and structural similarity of ontologies in a vector space model, and then aggregates them using an adaptive method based on their harmonies, which is defined as an estimator of performance of similarity. Finally to improve mapping accuracy the interactive activation and competition neural network is activated, if necessary, to search for a solution that can satisfy ontology constraints. The experimental results show that harmony is a good estimator of f-measure; the harmony based adaptive aggregation outperforms other aggregation methods; neural network approach significantly boosts the performance in most cases. Our approach is competitive with top-ranked systems on benchmark tests at OAEI campaign 2007, and performs the best on real cases in OAEI benchmark tests.", "The Semantic Web is based on technologies that make the content of the Web machine-understandable. In that framework, ontological knowledge representation has become an important tool for the analysis and understanding of multimedia information. Because of the distributed nature of the Semantic Web however, ontologies describing similar fields of knowledge are being developed and the data coming from similar but non-identical ontologies can be combined only if a semantic mapping between them is first established. This has lead to the development of several ontology alignment tools. 
We propose an automatic ontology alignment method based on the recursive neural network model that uses ontology instances to learn similarities between ontology concepts. Recursive neural networks are an extension of common neural networks, designed to process efficiently structured data. Since ontologies are a structured data representation, the model is inherently suitable for use with ontologies.", "Background Single cell RNA sequencing (scRNA-seq) is applied to assay the individual transcriptomes of large numbers of cells. The gene expression at single-cell level provides an opportunity for better understanding of cell function and new discoveries in biomedical areas. To ensure that the single-cell based gene expression data are interpreted appropriately, it is crucial to develop new computational methods.", "Ontology matching, the task of determining relations that hold among terms of two different ontologies, is a key issue in the Semantic Web and other related fields. In order to compare the behaviour of different ontology matching systems, the Ontology Alignment Evaluation Initiative (OAEI) has established a periodical controlled evaluation that comes in a yearly event. We present here our participation in the 2008 initiative. Our schema-based alignment algorithm compares each pair of ontology terms by, firstly, extracting their ontological contexts up to a certain depth (enriched by using transitive entailment) and, secondly, combining different elementary ontology matching techniques (e.g., lexical distances and vector space modelling). Benchmark results show a very good behaviour in terms of precision, while preserving an acceptable recall. Based on our experience, we have also included some remarks about the nature of benchmark test cases that, in our opinion, could help improving the OAEI tests in the future.", "" ] }
1907.11569
2965957874
Research on neural networks has gained significant momentum over the past few years. A plethora of neural networks is currently being trained on available data in research as well as in industry. Because training is a resource-intensive process and training data cannot always be made available to everyone, there has been a recent trend to attempt to re-use already-trained neural networks. As such, neural networks themselves have become research data. In this paper, we present the Neural Network Ontology, an ontology to make neural networks findable, accessible, interoperable and reusable as suggested by the well-established FAIR guiding principles for scientific data management and stewardship. We created the new FAIRnets Dataset that comprises about 2,000 neural networks openly accessible on the internet and uses the Neural Network Ontology to semantically annotate and represent the neural networks. For each of the neural networks in the FAIRnets Dataset, the relevant properties according to the Neural Network Ontology such as the description and the architecture are stored. Ultimately, the FAIRnets Dataset can be queried with a set of desired properties and responds with a set of neural networks that have these properties. We provide the service FAIRnets Search which is implemented on top of a SPARQL endpoint and allows for querying, searching and finding trained neural networks annotated with the Neural Network Ontology. The service is demonstrated by a browser-based frontend to the SPARQL endpoint.
The paper 'Model Cards for Model Reporting' @cite_20 suggests relevant information about neural networks that should be considered when documenting them. Information such as a description, the date of the last modification, links to papers or other resources for further information, as well as the intended purpose of a neural network, is taken into account. Storing this information makes neural networks more transparent.
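A minimal sketch of a model-card-style metadata record; the field names are illustrative assumptions rather than the schema of @cite_20 or of the Neural Network Ontology:

```python
import json

# Hypothetical model-card-style record; all field names and values are
# illustrative only, not the schema proposed by the cited paper.
model_card = {
    "name": "smile-detector-v1",
    "description": "CNN trained to detect smiling faces in images.",
    "last_modified": "2019-07-01",
    "references": ["https://example.org/paper"],
    "intended_use": "Research demos; not for surveillance or hiring decisions.",
    "evaluation": {"dataset": "held-out test split", "accuracy": None},
}

print(json.dumps(model_card, indent=2))
```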
{ "cite_N": [ "@cite_20" ], "mid": [ "2897042519" ], "abstract": [ "Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related artificial intelligence technology, increasing transparency into how well artificial intelligence technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation." ] }
1907.11440
2965160406
Pooling is one of the main elements in convolutional neural networks. The pooling reduces the size of the feature map, enabling training and testing with a limited amount of computation. This paper proposes a new pooling method named universal pooling. Unlike the existing pooling methods such as average pooling, max pooling, and stride pooling with fixed pooling function, universal pooling generates any pooling function, depending on a given problem and dataset. Universal pooling was inspired by attention methods and can be considered as a channel-wise form of local spatial attention. Universal pooling is trained jointly with the main network and it is shown that it includes the existing pooling methods. Finally, when applied to two benchmark problems, the proposed method outperformed the existing pooling methods and performed with the expected diversity, adapting to the given problem.
Max pooling divides the feature map into blocks and collects the maximum feature value of each block into a smaller output matrix. Max pooling is commonly used between convolution layers and is employed in AlexNet @cite_4 and VGG @cite_7 . Average pooling operates similarly to max pooling but outputs the average of each block in the feature map. Global average pooling (GAP), which applies average pooling over the entire feature map, is commonly used in convolutional networks such as ResNet @cite_13 and DenseNet @cite_1 . In DenseNet, average pooling is applied between the convolution layers. Meanwhile, stride pooling is equivalent to sampling values at fixed positions after convolving over the entire area; this approach is adopted in ResNet.
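A minimal NumPy sketch of the fixed pooling functions discussed above, on a single-channel feature map with non-overlapping 2x2 blocks:

```python
import numpy as np

def block_pool(x: np.ndarray, k: int, reduce_fn) -> np.ndarray:
    """Pool non-overlapping k x k blocks of a 2-D feature map."""
    h, w = x.shape
    blocks = x[: h // k * k, : w // k * k].reshape(h // k, k, w // k, k)
    return reduce_fn(blocks, axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
max_pooled = block_pool(x, 2, np.max)   # max pooling
avg_pooled = block_pool(x, 2, np.mean)  # average pooling
gap = x.mean()                          # global average pooling (GAP)
strided = x[::2, ::2]                   # stride "pooling": fixed positions
print(max_pooled, avg_pooled, gap, strided, sep="\n")
```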
{ "cite_N": [ "@cite_13", "@cite_1", "@cite_4", "@cite_7" ], "mid": [ "2194775991", "2963446712", "", "2962835968" ], "abstract": [ "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https: github.com liuzhuang13 DenseNet.", "", "Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. 
We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision." ] }
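To make the differences concrete, here is a minimal sketch of these standard pooling variants, assuming PyTorch (the tensor shapes and layer hyperparameters below are illustrative choices, not taken from the cited papers):

import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)           # (batch, channels, height, width)

max_pool = nn.MaxPool2d(kernel_size=2)    # max of each 2x2 block (AlexNet, VGG)
avg_pool = nn.AvgPool2d(kernel_size=2)    # average of each 2x2 block (DenseNet)
gap = nn.AdaptiveAvgPool2d(1)             # global average pooling (ResNet, DenseNet)
strided = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1)  # stride "pooling" (ResNet)

print(max_pool(x).shape)  # torch.Size([1, 64, 16, 16])
print(avg_pool(x).shape)  # torch.Size([1, 64, 16, 16])
print(gap(x).shape)       # torch.Size([1, 64, 1, 1])
print(strided(x).shape)   # torch.Size([1, 64, 16, 16])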
1907.11440
2965160406
Pooling is one of the main elements in convolutional neural networks. The pooling reduces the size of the feature map, enabling training and testing with a limited amount of computation. This paper proposes a new pooling method named universal pooling. Unlike the existing pooling methods such as average pooling, max pooling, and stride pooling with fixed pooling function, universal pooling generates any pooling function, depending on a given problem and dataset. Universal pooling was inspired by attention methods and can be considered as a channel-wise form of local spatial attention. Universal pooling is trained jointly with the main network and it is shown that it includes the existing pooling methods. Finally, when applied to two benchmark problems, the proposed method outperformed the existing pooling methods and performed with the expected diversity, adapting to the given problem.
All of these pooling methods are efficient but simple, and there appears to be room to improve their performance. S3pool @cite_6 and stochastic pooling @cite_12 adopt a probability-based pooling approach. @math pooling over various coefficients' norms was proposed in @cite_0 and @cite_3 , and a fractional version of max pooling was proposed in @cite_9 . In @cite_2 and @cite_10 , the spectral space was down-sampled through a filter. In @cite_5 , the existing simple pooling methods were combined to improve the pooling performance. Detail-preserving pooling, proposed in @cite_18 , preserves feature details by applying existing down-sampling techniques from the image processing literature and by learning the parameters of the pooling function jointly with the network.
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_3", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "2964012402", "1995159036", "", "2559156603", "2949366180", "1839118408", "2963919294", "2901278454", "2963574257" ], "abstract": [ "Most convolutional neural networks use some method for gradually downscaling the size of the hidden layers. This is commonly referred to as pooling, and is applied to reduce the number of parameters, improve invariance to certain distortions, and increase the receptive field size. Since pooling by nature is a lossy process, it is crucial that each such layer maintains the portion of the activations that is most important for the network's discriminability. Yet, simple maximization or averaging over blocks, max or average pooling, or plain downsampling in the form of strided convolutions are the standard. In this paper, we aim to leverage recent results on image downscaling for the purposes of deep learning. Inspired by the human visual system, which focuses on local spatial changes, we propose detail-preserving pooling (DPP), an adaptive pooling method that magnifies spatial changes and preserves important structural detail. Importantly, its parameters can be learned jointly with the rest of the network. We analyze some of its theoretical properties and show its empirical benefits on several datasets and networks, where DPP consistently outperforms previous pooling approaches.", "OBJECT Bone allografts used for interbody spinal fusion are often preserved through either freeze drying or lowtemperature freezing, each having disadvantages related to graft preparation time and material properties. In response, a glycerol preservation treatment has been developed to maintain the biomechanical properties of allografts at ambient temperatures, requiring no thawing or rehydration and minimal rinsing prior to implantation. The authors conducted a prospective randomized study to compare the clinical results of glycerol-preserved Cloward dowels and those of freezedried Cloward dowels in anterior cervical discectomy and fusion. The primary outcome measures were evidence of fusion and graft subsidence, and the secondary outcome measures included adverse events, pain, and neck disability scores. METHODS Of 106 patients, 53 (113 levels of surgery) were randomly assigned to the glycerol-preserved graft group and 53 (114 levels of surgery) to the freeze-dried graft group. Subsidence was assessed a...", "", "Feature pooling layers (e.g., max pooling) in convolutional neural networks (CNNs) serve the dual purpose of providing increasingly abstract representations as well as yielding computational savings in subsequent convolutional layers. We view the pooling operation in CNNs as a two step procedure: first, a pooling window (e.g., 2× 2) slides over the feature map with stride one which leaves the spatial resolution intact, and second, downsampling is performed by selecting one pixel from each non-overlapping pooling window in an often uniform and deterministic (e.g., top-left) manner. Our starting point in this work is the observation that this regularly spaced downsampling arising from non-overlapping windows, although intuitive from a signal processing perspective (which has the goal of signal reconstruction), is not necessarily optimal for learning (where the goal is to generalize). 
We study this aspect and propose a novel pooling strategy with stochastic spatial sampling (S3Pool), where the regular downsampling is replaced by a more general stochastic version. We observe that this general stochasticity acts as a strong regularizer, and can also be seen as doing implicit data augmentation by introducing distortions in the feature maps. We further introduce a mechanism to control the amount of distortion to suit different datasets and architectures. To demonstrate the effectiveness of the proposed approach, we perform extensive experiments on several popular image classification benchmarks, observing excellent improvements over baseline models.", "In this work we compute lower Lipschitz bounds of @math pooling operators for @math as well as @math pooling operators preceded by half-rectification layers. These give sufficient conditions for the design of invertible neural network layers. Numerical experiments on MNIST and image patches confirm that pooling layers can be inverted with phase recovery algorithms. Moreover, the regularity of the inverse pooling, controlled by the lower Lipschitz constant, is empirically verified with a nearest neighbor regression.", "Discrete Fourier transforms provide a significant speedup in the computation of convolutions in deep learning. In this work, we demonstrate that, beyond its advantages for efficient computation, the spectral domain also provides a powerful representation in which to model and train convolutional neural networks (CNNs). We employ spectral representations to introduce a number of innovations to CNN design. First, we propose spectral pooling, which performs dimensionality reduction by truncating the representation in the frequency domain. This approach preserves considerably more information per parameter than other pooling strategies and enables flexibility in the choice of pooling output dimensionality. This representation also enables a new form of stochastic regularization by randomized modification of resolution. We show that these methods achieve competitive results on classification and approximation tasks, without using any dropout or max-pooling. Finally, we demonstrate the effectiveness of complex-coefficient spectral parameterization of convolutional filters. While this leaves the underlying model unchanged, it results in a representation that greatly facilitates optimization. We observe on a variety of popular CNN configurations that this leads to significantly faster convergence during training.", "We seek to improve deep neural networks by generalizing the pooling operations that play a central role in current architectures. We pursue a careful exploration of approaches to allow pooling to learn and to adapt to complex and variable patterns. The two primary directions lie in (1) learning a pooling function via (two strategies of) combining of max and average pooling, and (2) learning a pooling function in the form of a tree-structured fusion of pooling filters that are themselves learned. In our experiments every generalized pooling operation we explore improves performance when used in place of average or max pooling. We experimentally demonstrate that the proposed pooling operations provide a boost in invariance properties relative to conventional pooling and set the state of the art on several widely adopted benchmark datasets; they are also easy to implement, and can be applied within various deep neural network architectures. 
These benefits come with only a light increase in computational overhead during training and a very modest increase in the number of model parameters.", "This paper presents a novel Inter Catchment Wastewater Transfer (ICWT) method for mitigating sewer overflow. The ICWT aims at balancing the spatial mismatch of sewer flow and treatment capacity of Wastewater Treatment Plant (WWTP), through collaborative operation of sewer system facilities. Using a hydraulic model, the effectiveness of ICWT is investigated in a sewer system in Drammen, Norway. Concerning the whole system performance, we found that the S ren Lemmich pump station plays a vital role in the ICWT framework. To enhance the operation of this pump station, it is imperative to construct a multi-step ahead water level prediction model. Hence, one of the most promising artificial intelligence techniques, Long Short Term Memory (LSTM), is employed to undertake this task. Experiments demonstrated that LSTM is superior to Gated Recurrent Unit (GRU), Recurrent Neural Network (RNN), Feed-forward Neural Network (FFNN) and Support Vector Regression (SVR).", "Abstract: We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation." ] }
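As an illustration of the probability-based approach mentioned above, here is a minimal NumPy sketch in the spirit of stochastic pooling @cite_12 , picking one activation per region with probability proportional to its value; the region size and input are hypothetical:

import numpy as np

def stochastic_pool(block, rng):
    # Sample one activation with probability proportional to its value,
    # assuming non-negative (e.g., post-ReLU) activations.
    p = block.ravel()
    total = p.sum()
    if total <= 0:            # degenerate all-zero block: fall back to max
        return block.max()
    return rng.choice(p, p=p / total)

rng = np.random.default_rng(0)
fmap = np.abs(rng.normal(size=(4, 4)))    # toy post-ReLU feature map
out = np.array([[stochastic_pool(fmap[i:i+2, j:j+2], rng)
                 for j in range(0, 4, 2)] for i in range(0, 4, 2)])
print(out.shape)  # (2, 2)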
1907.11307
2966138214
Optimization algorithms with momentum, e.g., Nesterov Accelerated Gradient and ADAM, have been widely used for building deep learning models because of their faster convergence rates compared to stochastic gradient descent (SGD). Momentum is a method that helps accelerate SGD in the relevant directions in variable updating, which can reduce the oscillations along the variable update route. Optimization algorithms with momentum usually allocate a fixed hyperparameter (e.g., ) as the weight of the momentum term. However, using a fixed weight is not applicable to some situations, and such a hyperparameter can be extremely hard to tune in applications. In this paper, we will introduce a new optimization algorithm, namely DEAM (Discriminative wEight on Accumulated Momentum). Instead of assigning the momentum term a fixed weight, DEAM proposes to compute the momentum weight automatically during the learning process. DEAM also involves a "backtrack" term, which can help accelerate the algorithm's convergence by restricting redundant updates. Extensive experiments have been done on several real-world datasets. The experimental results demonstrate that DEAM can achieve a faster convergence rate than existing optimization algorithms when training both classic machine learning models and recent deep learning models.
ADAM @cite_21 @cite_0 is proposed based on SGD and the momentum concept, and it also computes individual adaptive learning rates for different variables. In its variable updating rules, ADAM records the first-order momentum @math and the second-order momentum @math of the gradients using moving averages (controlled by the parameters @math and @math , respectively), and further computes the bias-corrected versions of them ( @math and @math ). Based on ADAM, @cite_17 proposes to switch from ADAM to SGD during the training process; in this way, it can combine the advantages of both SGD and ADAM.
{ "cite_N": [ "@cite_0", "@cite_21", "@cite_17" ], "mid": [ "", "2964121744", "2776855315" ], "abstract": [ "", "Abstract: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.", "Despite superior training outcomes, adaptive optimization methods such as Adam, Adagrad or RMSprop have been found to generalize poorly compared to Stochastic gradient descent (SGD). These methods tend to perform well in the initial portion of training but are outperformed by SGD at later stages of training. We investigate a hybrid strategy that begins training with an adaptive method and switches to SGD when appropriate. Concretely, we propose SWATS, a simple strategy which switches from Adam to SGD when a triggering condition is satisfied. The condition we propose relates to the projection of Adam steps on the gradient subspace. By design, the monitoring process for this condition adds very little overhead and does not increase the number of hyperparameters in the optimizer. We report experiments on several standard benchmarks such as: ResNet, SENet, DenseNet and PyramidNet for the CIFAR-10 and CIFAR-100 data sets, ResNet on the tiny-ImageNet data set and language modeling with recurrent networks on the PTB and WT2 data sets. The results show that our strategy is capable of closing the generalization gap between SGD and Adam on a majority of the tasks." ] }
1907.11307
2966138214
Optimization algorithms with momentum, e.g., Nesterov Accelerated Gradient and ADAM, have been widely used for building deep learning models because of their faster convergence rates compared to stochastic gradient descent (SGD). Momentum is a method that helps accelerate SGD in the relevant directions in variable updating, which can reduce the oscillations along the variable update route. Optimization algorithms with momentum usually allocate a fixed hyperparameter (e.g., ) as the weight of the momentum term. However, using a fixed weight is not applicable to some situations, and such a hyperparameter can be extremely hard to tune in applications. In this paper, we will introduce a new optimization algorithm, namely DEAM (Discriminative wEight on Accumulated Momentum). Instead of assigning the momentum term a fixed weight, DEAM proposes to compute the momentum weight automatically during the learning process. DEAM also involves a "backtrack" term, which can help accelerate the algorithm's convergence by restricting redundant updates. Extensive experiments have been done on several real-world datasets. The experimental results demonstrate that DEAM can achieve a faster convergence rate than existing optimization algorithms when training both classic machine learning models and recent deep learning models.
AMSGrad @cite_7 is a modified version of ADAM. It changes the definition of the second-order momentum to @math , while other settings are almost the same as in ADAM. What is more, it applies a varied learning rate @math compared to ADAM, but the definition of @math is not specified.
{ "cite_N": [ "@cite_7" ], "mid": [ "2785523195" ], "abstract": [ "Several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as RMSProp, Adam, Adadelta, Nadam, etc are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients. It has been empirically observed that sometimes these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where Adam does not converge to the optimal solution, and describe the precise problems with the previous analysis of Adam algorithm. Our analysis suggests that the convergence issues may be fixed by endowing such algorithms with \"long-term memory\" of past gradients, and propose new variants of the Adam algorithm which not only fix the convergence issues but often also lead to improved empirical performance." ] }
1907.11321
2966850631
In spite of the rapidly increasing number of applications of machine learning in various domains, a principled and systematic approach to the incorporation of domain knowledge in the engineering process is still lacking and ad hoc solutions that are difficult to validate are still the norm in practice, which is of growing concern not only in mission-critical applications. In this note, we introduce Probabilistic Approximate Logic (PALO) as a logic based on the notion of mean approximate probability to overcome conceptual and computational difficulties inherent to strictly probabilistic logics. The logic is approximate in several dimensions. Logical independence assumptions are used to obtain approximate probabilities, but by averaging over many instances of formulas a useful estimate of mean probability with known confidence can usually be obtained. To enable efficient computational inference, the logic has a continuous semantics that reflects only a subset of the structural properties of classical logic, but this imprecision can be partly compensated by richer theories obtained by classical inference or other means. Computational inference, which refers to the construction of models and validation of logical properties, is based on Stochastic Gradient Descent (SGD) and Markov Chain Monte Carlo (MCMC) techniques and hence another dimension where approximations are involved. We also present the Logical Imagination Engine (LIME), a prototypical implementation of PALO based on TensorFlow. Albeit not limited to the biological domain, we illustrate its operation in a quite substantial bioinformatics machine learning application concerned with network synthesis and analysis in a recent DARPA project.
It is noteworthy that our approach of combining selected operators from Hájek's Product logic, Łukasiewicz logic, and Gödel logic in a non-standard fashion needs to be differentiated from work in the area of fuzzy logics, which is not aiming at a probabilistic interpretation but an orthogonal notion of truthiness (see also @cite_11 for his population-based interpretation of Fuzzy Logic). For example, @cite_38 investigates a propositional fuzzy logic that contains Product logic, Łukasiewicz logic, and Gödel logic as sublogics and the focus is on identifying a suitable axiomatization and a class of models so that soundness and completeness can be established. In contrast, our approach with PALO is purely semantic and motivated by computational feasibility. We do not attempt to establish an axiomatic system for symbolic inference in soft logic, but rather maintain a connection to classical logic for which symbolic methods and technologies are well developed.
{ "cite_N": [ "@cite_38", "@cite_11" ], "mid": [ "2053035915", "2039204373" ], "abstract": [ "In this paper we investigate a propositional fuzzy logical system L? which contains the well-known Lukasiewicz, Product and Godel fuzzy logics as sublogics. We define the corresponding algebraic structures, called L?-algebras and prove the following completeness result: a formula f is provable in the L? logic iff it is a tautology for all linear L?-algebras. Moreover, linear L?-algebras are shown to be embeddable in linearly ordered abelian rings with a strong unit and cancellation law.", "Probability theory and fuzzy logic have been presented as quite distinct theoretical foundations for reasoning and decision making in situations of uncertainty. This paper establishes a common basis for both forms of logic of uncertainty in which a basic uncertainty logic is defined in terms of a valuation on a lattice of propositions. The (non-truth-functional) connectives for conjunction, disjunction, equivalence, implication, and negation are defined in terms which closely resemble those of probability theory. Addition of the axiom of the excluded middle to the basic logic gives a standard probability logic. Alternatively, addition of a requirement for strong truth-functionality (truth-value of connective determined by truth-value of constituents) gives a fuzzy logic with connectives, including implication, as in Lukasiewicz' infinitely valued logic. A common semantics for all such variants is given in terms of binary responses from a population. The type of population, e.g., physical events, people, or neurons, determines whether the model is of physical probability, subjective belief, or human decision-making. The formal theory and the semantics together illustrate clearly the precise similarities and differences between fuzzy and probability logics." ] }
1907.11321
2966850631
In spite of the rapidly increasing number of applications of machine learning in various domains, a principled and systematic approach to the incorporation of domain knowledge in the engineering process is still lacking and ad hoc solutions that are difficult to validate are still the norm in practice, which is of growing concern not only in mission-critical applications. In this note, we introduce Probabilistic Approximate Logic (PALO) as a logic based on the notion of mean approximate probability to overcome conceptual and computational difficulties inherent to strictly probabilistic logics. The logic is approximate in several dimensions. Logical independence assumptions are used to obtain approximate probabilities, but by averaging over many instances of formulas a useful estimate of mean probability with known confidence can usually be obtained. To enable efficient computational inference, the logic has a continuous semantics that reflects only a subset of the structural properties of classical logic, but this imprecision can be partly compensated by richer theories obtained by classical inference or other means. Computational inference, which refers to the construction of models and validation of logical properties, is based on Stochastic Gradient Descent (SGD) and Markov Chain Monte Carlo (MCMC) techniques and hence another dimension where approximations are involved. We also present the Logical Imagination Engine (LIME), a prototypical implementation of PALO based on TensorFlow. Albeit not limited to the biological domain, we illustrate its operation in a quite substantial bioinformatics machine learning application concerned with network synthesis and analysis in a recent DARPA project.
@cite_14 is another approach to overcoming the fact that a probabilistic interpretation of formulas is not truth-functional, by using a less abstract semantics that interprets each formula as the set of assignments for which it holds, so that conjunction becomes a simple intersection. Although this is an elegant solution, with our mean probability semantics that includes lower and upper bounds, it turns out that the bounds are sufficiently tight that replacing our approximate interpretation by a strict probabilistic one is unnecessary for the data-rich applications we are targeting. Two other practical difficulties with an exact probabilistic semantics are that dependencies between subformulas referring to external data are often unknown, and that even if all known dependencies were taken into account, it would lead to an unacceptably high computational complexity in the context of model generation and learning.
{ "cite_N": [ "@cite_14" ], "mid": [ "1554205565" ], "abstract": [ "Mechanisms for the automation of uncertainty are required for expert systems. Sometimes these mechanisms need to obey the properties of probabilistic reasoning. We argue that a purely numeric mechanism, like those proposed so far, cannot provide a probabilistic logic with truth functional connectives. We propose an alternative mechanism, Incidence Calculus, which is based on a representation of uncertainty using sets of points, which might represent situations models or possible worlds. Incidence Calculus does provide a probabilistic logic with truth functional connectives." ] }
1907.11565
2965034157
When describing images with natural language, the descriptions can be made more informative if tuned using downstream tasks. This is often achieved by training two networks: a "speaker network" that generates sentences given an image, and a "listener network" that uses them to perform a task. Unfortunately, training multiple networks jointly to communicate to achieve a joint task faces two major challenges. First, the descriptions generated by a speaker network are discrete and stochastic, making optimization very hard and inefficient. Second, joint training usually causes the vocabulary used during communication to drift and diverge from natural language. We describe an approach that addresses both challenges. We first develop a new effective optimization based on partial-sampling from a multinomial distribution combined with straight-through gradient updates, which we name PSST for Partial-Sampling Straight-Through. Second, we show that the generated descriptions can be kept close to natural by constraining them to be similar to human descriptions. Together, this approach creates descriptions that are both more discriminative and more natural than previous approaches. Evaluations on the standard COCO benchmark show that PSST Multinomial dramatically improves the recall@10 from 60% to 86% while maintaining comparable language naturalness, and human evaluations show that it also increases naturalness while keeping the discriminative power of generated captions.
Image captioning has been studied intensively since encoder-decoder models were introduced. Large efforts have been invested in making captions more natural and diverse. For example, @cite_11 used conditional GANs to train a caption generator to improve fidelity, naturalness, and diversity. Using GANs avoids the hard challenge of defining an explicit language naturalness loss. Instead, the discriminator can receive fake or incorrect captions or images as negatives. @cite_7 used a conditional GAN with two discriminators, a CNN and an RNN. @cite_6 further used a hierarchical compositional model over captions to increase diversity and naturalness. More related to the optimization techniques discussed in this paper, @cite_3 trained an adversarial network using a straight-through Gumbel approach. As we discuss below, training cooperative agents allows using more effective optimization techniques compared to training GANs, because the generator is allowed to provide any useful information to the (cooperative) discriminator. Specifically, during training, the speaker can represent generated captions differently than human captions.
{ "cite_N": [ "@cite_3", "@cite_6", "@cite_7", "@cite_11" ], "mid": [ "2604178507", "2963299217", "2963747812", "2962968835" ], "abstract": [ "While strong progress has been made in image captioning recently, machine and human captions are still quite distinct. This is primarily due to the deficiencies in the generated word distribution, vocabulary size, and strong bias in the generators towards frequent captions. Furthermore, humans – rightfully so – generate multiple, diverse captions, due to the inherent ambiguity in the captioning task which is not explicitly considered in today's systems. To address these challenges, we change the training objective of the caption generator from reproducing ground-truth captions to generating a set of captions that is indistinguishable from human written captions. Instead of handcrafting such a learning target, we employ adversarial training in combination with an approximate Gumbel sampler to implicitly match the generated distribution to the human one. While our method achieves comparable performance to the state-of-the-art in terms of the correctness of the captions, we generate a set of diverse captions that are significantly less biased and better match the global uni-, bi- and tri-gram distributions of the human captions.", "Mainstream captioning models often follow a sequential structure to generate cap- tions, leading to issues such as introduction of irrelevant semantics, lack of diversity in the generated captions, and inadequate generalization performance. In this paper, we present an alternative paradigm for image captioning, which factorizes the captioning procedure into two stages: (1) extracting an explicit semantic representation from the given image; and (2) constructing the caption based on a recursive compositional procedure in a bottom-up manner. Compared to conventional ones, our paradigm better preserves the semantic content through an explicit factorization of semantics and syntax. By using the compositional generation procedure, caption construction follows a recursive structure, which naturally fits the properties of human language. Moreover, the proposed compositional procedure requires less data to train, generalizes better, and yields more diverse captions.", "", "Despite the substantial progress in recent years, the image captioning techniques are still far from being perfect. Sentences produced by existing methods, e.g. those based on RNNs, are often overly rigid and lacking in variability. This issue is related to a learning principle widely used in practice, that is, to maximize the likelihood of training samples. This principle encourages high resemblance to the “ground-truth” captions, while suppressing other reasonable descriptions. Conventional evaluation metrics, e.g. BLEU and METEOR, also favor such restrictive methods. In this paper, we explore an alternative approach, with the aim to improve the naturalness and diversity – two essential properties of human expression. Specifically, we propose a new framework based on Conditional Generative Adversarial Networks (CGAN), which jointly learns a generator to produce descriptions conditioned on images and an evaluator to assess how well a description fits the visual content. It is noteworthy that training a sequence generator is nontrivial. We overcome the difficulty by Policy Gradient, a strategy stemming from Reinforcement Learning, which allows the generator to receive early feedback along the way. 
We tested our method on two large datasets, where it performed competitively against real people in our user study and outperformed other methods on various tasks." ] }
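The straight-through Gumbel approach mentioned above can be sketched in a few lines of PyTorch: sample a discrete one-hot token in the forward pass, but let gradients flow through the soft relaxation (PyTorch's built-in torch.nn.functional.gumbel_softmax implements the same idea):

import torch
import torch.nn.functional as F

def gumbel_softmax_st(logits, tau=1.0):
    # Forward: discrete one-hot sample; backward: gradient of the soft sample.
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y_soft = F.softmax((logits + gumbel) / tau, dim=-1)
    y_hard = F.one_hot(y_soft.argmax(dim=-1), logits.size(-1)).float()
    return y_hard + y_soft - y_soft.detach()  # straight-through estimator

logits = torch.randn(2, 5, requires_grad=True)  # toy vocabulary of 5 tokens
sample = gumbel_softmax_st(logits)              # one-hot forward, differentiable backward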
1907.11565
2965034157
When describing images with natural language, the descriptions can be made more informative if tuned using downstream tasks. This is often achieved by training two networks: a "speaker network" that generates sentences given an image, and a "listener network" that uses them to perform a task. Unfortunately, training multiple networks jointly to communicate to achieve a joint task, faces two major challenges. First, the descriptions generated by a speaker network are discrete and stochastic, making optimization very hard and inefficient. Second, joint training usually causes the vocabulary used during communication to drift and diverge from natural language. We describe an approach that addresses both challenges. We first develop a new effective optimization based on partial-sampling from a multinomial distribution combined with straight-through gradient updates, which we name PSST for Partial-Sampling Straight-Through. Second, we show that the generated descriptions can be kept close to natural by constraining them to be similar to human descriptions. Together, this approach creates descriptions that are both more discriminative and more natural than previous approaches. Evaluations on the standard COCO benchmark show that PSST Multinomial dramatically improve the recall@10 from 60 to 86 maintaining comparable language naturalness, and human evaluations show that it also increases naturalness while keeping the discriminative power of generated captions.
Beyond the naturalness of communication, several studies looked into the problem of generating captions that allow discriminating an image from other similar images. @cite_18 showed how captions can take into account a distractor image at inference time and create a caption that discriminates a target image from a distractor image. A similar approach was taken earlier by @cite_0 . @cite_16 recently described a dataset that contains pairs of closely similar images, that can be used as hard-negatives for evaluating image retrieval tasks.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_16" ], "mid": [ "2964183327", "1861492603", "2914543347" ], "abstract": [ "", "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.", "Providing systems the ability to relate linguistic and visual content is one of the hallmarks of computer vision. Tasks such as image captioning and retrieval were designed to test this ability, but come with complex evaluation measures that gauge various other abilities and biases simultaneously. This paper presents an alternative evaluation task for visual-grounding systems: given a caption the system is asked to select the image that best matches the caption from a pair of semantically similar images. The system's accuracy on this Binary Image SelectiON (BISON) task is not only interpretable, but also measures the ability to relate fine-grained text content in the caption to visual content in the images. We gathered a BISON dataset that complements the COCO Captions dataset and used this dataset in auxiliary evaluations of captioning and caption-based retrieval systems. While captioning measures suggest visual grounding systems outperform humans, BISON shows that these systems are still far away from human performance." ] }
1907.11565
2965034157
When describing images with natural language, the descriptions can be made more informative if tuned using downstream tasks. This is often achieved by training two networks: a "speaker network" that generates sentences given an image, and a "listener network" that uses them to perform a task. Unfortunately, training multiple networks jointly to communicate to achieve a joint task faces two major challenges. First, the descriptions generated by a speaker network are discrete and stochastic, making optimization very hard and inefficient. Second, joint training usually causes the vocabulary used during communication to drift and diverge from natural language. We describe an approach that addresses both challenges. We first develop a new effective optimization based on partial-sampling from a multinomial distribution combined with straight-through gradient updates, which we name PSST for Partial-Sampling Straight-Through. Second, we show that the generated descriptions can be kept close to natural by constraining them to be similar to human descriptions. Together, this approach creates descriptions that are both more discriminative and more natural than previous approaches. Evaluations on the standard COCO benchmark show that PSST Multinomial dramatically improves the recall@10 from 60% to 86% while maintaining comparable language naturalness, and human evaluations show that it also increases naturalness while keeping the discriminative power of generated captions.
Several authors studied the properties of the languages that are learned when agents communicate in visual tasks @cite_4 @cite_13 @cite_2 @cite_15 . The current paper purposefully focuses on keeping the language close to natural, rather than on studying the properties of emergent languages.
{ "cite_N": [ "@cite_13", "@cite_15", "@cite_4", "@cite_2" ], "mid": [ "2953189990", "2950472486", "2730230212", "2963166531" ], "abstract": [ "There is growing interest in the language developed by agents interacting in emergent-communication settings. Earlier studies have focused on the agents' symbol usage, rather than on their representation of visual input. In this paper, we consider the referential games of (2017) and investigate the representations the agents develop during their evolving interaction. We find that the agents establish successful communication by inducing visual representations that almost perfectly align with each other, but, surprisingly, do not capture the conceptual properties of the objects depicted in the input images. We conclude that, if we are interested in developing language-like communication systems, we must pay more attention to the visual semantics agents associate to the symbols they use.", "The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are interested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communication. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message from a fixed, arbitrary vocabulary to the receiver. The receiver must rely on this message to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore how to make changes to the game environment to cause the \"word meanings\" induced in the game to better reflect intuitive semantic properties of the images. In addition, we present a simple strategy for grounding the agents' code into natural language. Both of these are necessary steps towards developing machines that are able to communicate with humans productively.", "A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, all learned without any human supervision! In this paper, using a Task and Tell reference game between two agents as a testbed, we present a sequence of 'negative' results culminating in a 'positive' one -- showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional. In essence, we find that natural language does not emerge 'naturally', despite the semblance of ease of natural-language-emergence that one may gather from recent literature. We discuss how it is possible to coax the invented languages to become more and more human-like and compositional by increasing restrictions on how two agents may communicate.", "While most machine translation systems to date are trained on large parallel corpora, humans learn language in a different way: by being grounded in an environment and interacting with other humans. In this work, we propose a communication game where two agents, native speakers of their own respective languages, jointly learn to solve a visual referential task. 
We find that the ability to understand and translate a foreign language emerges as a means to achieve shared goals. The emergent translation is interactive and multimodal, and crucially does not require parallel corpora, but only monolingual, independent text and corresponding images. Our proposed translation model achieves this by grounding the source and target languages into a shared visual modality, and outperforms several baselines on both word-level and sentence-level translation tasks. Furthermore, we show that agents in a multilingual community learn to translate better and faster than in a bilingual communication setting." ] }
1907.11065
2963744496
Various dropout methods have been designed for the fully-connected layer, convolutional layer, and recurrent layer in neural networks, and have been shown to be effective at avoiding overfitting. As an appealing alternative to recurrent and convolutional layers, the fully-connected self-attention layer surprisingly lacks a specific dropout method. This paper explores the possibility of regularizing the attention weights in Transformers to prevent different contextualized feature vectors from co-adaptation. Experiments on a wide range of tasks show that DropAttention can improve performance and reduce overfitting.
We present a summary of existing models by highlighting the differences among them, as shown in the table. The original idea of Dropout was proposed by @cite_26 for fully-connected networks and is regarded as an effective regularization method. After that, many dropout techniques for specific network architectures, such as CNNs and RNNs, have been proposed. For CNNs, most successful methods require the noise to be structured @cite_18 @cite_15 @cite_22 @cite_17 @cite_25 @cite_24 @cite_12 . For example, SpatialDropout @cite_7 is used to address the spatial correlation problem. DropConnect @cite_5 sets a randomly selected subset of weights within the network to zero. For RNNs, Variational Dropout @cite_28 and ZoneOut @cite_23 are the most widely used methods. In Variational Dropout, the dropout rate is learned and the same neurons are dropped at every timestep. In ZoneOut, some hidden units are stochastically forced to maintain their previous values instead of being dropped. Different from these methods, in this paper we explore how to drop information on self-attention layers.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_22", "@cite_7", "@cite_28", "@cite_24", "@cite_23", "@cite_5", "@cite_15", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "1936750108", "2095705004", "2408279554", "2120615054", "2963266340", "", "2409027918", "4919037", "2331143823", "2890166761", "2903105043", "" ], "abstract": [ "Recent state-of-the-art performance on human-body pose estimation has been achieved with Deep Convolutional Networks (ConvNets). Traditional ConvNet architectures include pooling and sub-sampling layers which reduce computational requirements, introduce invariance and prevent over-training. These benefits of pooling come at the cost of reduced localization accuracy. We introduce a novel architecture which includes an efficient ‘position refinement’ model that is trained to estimate the joint offset location within a small region of the image. This refinement model is jointly trained in cascade with a state-of-the-art ConvNet model [21] to achieve improved accuracy in human joint location estimation. We show that the variance of our detector approaches the variance of human annotations on the FLIC [20] dataset and outperforms all existing approaches on the MPII-human-pose dataset [1].", "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. 
Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.", "The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25 error reduction in the last task with respect to the strongest baseline.", "Recurrent neural networks (RNNs) stand at the forefront of many recent developments in deep learning. Yet a major difficulty with these models is their tendency to overfit, with dropout shown to fail when applied to recurrent layers. Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout. This grounding of dropout in approximate Bayesian inference suggests an extension of the theoretical results, offering insights into the use of dropout with RNN models. We apply this new variational inference based dropout technique in LSTM and GRU models, assessing it on language modelling and sentiment analysis tasks. The new approach outperforms existing techniques, and to the best of our knowledge improves on the single model state-of-the-art in language modelling with the Penn Treebank (73.4 test perplexity). This extends our arsenal of variational tools in deep learning.", "", "We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. We perform an empirical investigation of various RNN regularizers, and find that zoneout gives significant performance improvements across tasks. We achieve competitive results with relatively simple models in character- and word-level language modelling on the Penn Treebank and Text8 datasets, and combining with recurrent batch normalization yields state-of-the-art results on permuted sequential MNIST.", "We introduce DropConnect, a generalization of Dropout (, 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. 
We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models.", "Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91 on CIFAR-10).", "Deep neural networks often work well when they are over-parameterized and trained with a massive amount of noise and regularization, such as weight decay and dropout. Although dropout is widely used as a regularization technique for fully connected layers, it is often less effective for convolutional layers. This lack of success of dropout for convolutional layers is perhaps due to the fact that neurons in a contiguous region in convolutional layers are strongly correlated so information can still flow through convolutional networks despite dropout. Thus a structured form of dropout is needed to regularize convolutional networks. In this paper, we introduce DropBlock, a form of structured dropout, where neurons in a contiguous region of a feature map are dropped together. Extensive experiments show that DropBlock works much better than dropout in regularizing convolutional networks. On ImageNet, DropBlock with ResNet-50 architecture achieves 77.65 accuracy, which is more than 1 improvement on the previous result of this architecture.", "Overfitting is a crucial problem in deep neural networks, even in the latest network architectures. In this paper, so as to relieve the overfitting effect of ResNet and its improvements (i.e., Wide ResNet, PyramidNet and ResNeXt), we propose a new regularization method, named ShakeDrop regularization. ShakeDrop is inspired by Shake-Shake, which is an effective regularization method but can be applied to only ResNeXt. ShakeDrop is even more effective than Shake-Shake and can be successfully applied to not only ResNeXt but also ResNet, Wide ResNet and PyramidNet. The important key to realize ShakeDrop is stability of training. Since effective regularization often causes unstable training, we introduce a stabilizer of training which is an unusual usage of an existing regularizer. Experiments reveals that ShakeDrop achieves comparable or superior generalization performance to conventional methods.", "" ] }
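Of the RNN regularizers listed above, ZoneOut has a particularly compact formulation: with some probability each hidden unit keeps its previous value instead of being updated. A minimal NumPy sketch (the shapes and zoneout rate are illustrative, not from the cited paper):

import numpy as np

def zoneout(h_prev, h_new, rate, rng, training=True):
    if not training:
        return rate * h_prev + (1 - rate) * h_new   # expectation at test time
    mask = rng.random(h_new.shape) < rate           # units that "zone out"
    return np.where(mask, h_prev, h_new)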
1907.11035
2963146525
Robotic grasping in cluttered environments is often infeasible due to obstacles preventing possible grasps. Then, pre-grasping manipulation like shifting or pushing an object becomes necessary. We developed an algorithm that can learn, in addition to grasping, to shift objects in such a way that their grasp probability increases. Our research contribution is threefold: First, we present an algorithm for learning the optimal pose of manipulation primitives like clamping or shifting. Second, we learn non-prehensible actions that explicitly increase the grasping probability. Making one skill (shifting) directly dependent on another (grasping) removes the need of sparse rewards, leading to more data-efficient learning. Third, we apply a real-world solution to the industrial task of bin picking, resulting in the ability to empty bins completely. The system is trained in a self-supervised manner with around 25000 grasp and 2500 shift actions. Our robot is able to grasp and file objects with 274 picks per hour. Furthermore, we demonstrate the system's ability to generalize to novel objects.
Object manipulation and in particular grasping are well-researched fields within robotics. @cite_4 differentiate between analytical and data-driven approaches to grasping. Historically, grasp synthesis was based on analytical constructions of force-closure grasps @cite_8 . In comparison, data-driven approaches are defined by sampling and ranking possible grasps. Popular ranking functions include classical mechanics and model-based grasp metrics @cite_15 @cite_12 . As modeling grasps is itself challenging, even more complex interactions like motion planning of pre-grasping actions have been studied less frequently. Within this scope, Dogar and Srinivasa @cite_1 combined pushing and grasping into a single action, enabling them to grasp objects from a more cluttered table. @cite_2 presented a method for rotating objects to find more robust grasps for transport tasks.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_1", "@cite_2", "@cite_15", "@cite_12" ], "mid": [ "2950303304", "1794703952", "2128082316", "1977497512", "1510186039", "" ], "abstract": [ "", "The authors address the problem of planning optimal grasps. Two general optimality criteria that consider the total finger force and the maximum finger force are introduced and discussed. Their formalization using various metrics on a space of generalized forces is detailed. The geometric interpretation of the two criteria leads to an efficient planning algorithm. An example of its use in a robotic environment equipped with two-jaw and three-jaw is described. >", "We add to a manipulator's capabilities a new primitive motion which we term a push-grasp. While significant progress has been made in robotic grasping of objects and geometric path planning for manipulation, such work treats the world and the object being grasped as immovable, often declaring failure when simple motions of the object could produce success. We analyze the mechanics of push-grasping and present a quasi-static tool that can be used both for analysis and simulation. We utilize this analysis to derive a fast, feasible motion planning algorithm that produces stable pushgrasp plans for dexterous hands in the presence of object pose uncertainty and high clutter. We demonstrate our algorithm extensively in simulation and on HERB, a personal robotics platform developed at Intel Labs Pittsburgh.", "Studies of human manipulation strategies suggest that pre-grasp object manipulation, such as rotation or sliding of the object to be grasped, can improve task performance by increasing both the task success rate and the quality of load-supporting postures. In previous demonstrations, pre-grasp object rotation by a robot manipulator was limited to manually-programmed actions. We present a method for automating the planning of pre-grasp rotation for object transport tasks. Our technique optimizes the grasp acquisition point by selecting a target object pose that can be grasped by high-payload manipulator configurations. Careful selection of the transition states leads to successful transport plans for tasks that are otherwise infeasible. In addition, optimization of the grasp acquisition posture also indirectly improves the transport plan quality, as measured by the safety margin of the manipulator payload limits.", "A robotic grasping simulator, called Graspit!, is presented as versatile tool for the grasping community. The focus of the grasp analysis has been on force-closure grasps, which are useful for pick-and-place type tasks. This work discusses the different types of world elements and the general robot definition, and presented the robot library. The paper also describes the user interface of Graspit! and present the collision detection and contact determination system. The grasp analysis and visualization method were also presented that allow a user to evaluate a grasp and compute optimal grasping forces. A brief overview of the dynamic simulation system was provided.", "" ] }
1907.11078
2920771172
Zwick's @math -approximation algorithm for the All Pairs Shortest Path (APSP) problem runs in time @math , where @math is the exponent of matrix multiplication and @math denotes the largest weight. This can be used to approximate several graph characteristics including the diameter, radius, median, minimum-weight triangle, and minimum-weight cycle in the same time bound. Since Zwick's algorithm uses the scaling technique, it has a factor @math in the running time. In this paper, we study whether APSP and related problems admit approximation schemes avoiding the scaling technique. That is, the number of arithmetic operations should be independent of @math ; this is called strongly polynomial. Our main results are as follows. - We design approximation schemes in strongly polynomial time @math for APSP on undirected graphs as well as for the graph characteristics diameter, radius, median, minimum-weight triangle, and minimum-weight cycle on directed or undirected graphs. - For APSP on directed graphs we design an approximation scheme in strongly polynomial time @math . This is significantly faster than the best exact algorithm. - We explain why our approximation scheme for APSP on directed graphs has a worse exponent than @math : Any improvement over our exponent @math would improve the best known algorithm for Min-Max Product. In fact, we prove that approximating directed APSP and exactly computing the Min-Max Product are equivalent.
It is known that, in general, not every scaling-based algorithm can be made strongly polynomial; see, e.g., Hochbaum's work on the allocation problem @cite_2 .
{ "cite_N": [ "@cite_2" ], "mid": [ "1989262026" ], "abstract": [ "We demonstrate the impossibility of strongly polynomial algorithms for the allocation problem, in the comparison model and in the algebraic tree computation model, that follow from lower bound results. Consequently, there are no strongly polynomial algorithms for nonlinear (concave) separable optimization over a totally unimodular constraint matrix. This is in contrast to the case when the objective is linear. We present scaling-based algorithms that use a greedy algorithm as a subroutine. The algorithms are polynomial for the allocation problem and its extensions and are also optimal for the sample allocation problem and the generalized upper bounds allocation problem, in that the complexity meets the lower bound derived from the comparison model. For other extensions of the allocation problem the scaling-based algorithms presented here are the fastest known. These algorithms are also polynomial time algorithms for solving with e accuracy the allocation problem and its extension in continuous variables." ] }
For undirected graphs with weights in @math , APSP can be solved exactly in time @math @cite_18 @cite_33 @cite_1 @cite_35 , where @math is the matrix multiplication exponent @cite_47 . For directed graphs with weights in @math , Zwick presented an @math -time algorithm that also uses fast matrix multiplication (in fact, recent advances for rectangular matrix multiplication yield slightly stronger bounds @cite_49 @cite_39 ).
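The link between APSP and matrix multiplication goes through the distance (min-plus) product: with zero diagonals, O(log n) squarings yield all-pairs distances. The sketch below uses a naive cubic min-plus product to show the structure; the cited subcubic algorithms instead encode small integer weights into ordinary matrix products, which this demo does not attempt.

```python
import numpy as np

INF = float("inf")

def min_plus(A, B):
    # Distance product: C[i, j] = min over k of A[i, k] + B[k, j].
    # Naive O(n^3) via broadcasting; cell [i, k, j] holds A[i, k] + B[k, j].
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

def apsp(W):
    # All-pairs distances by repeated squaring: O(log n) distance products.
    D = W.copy()
    np.fill_diagonal(D, 0)
    steps = 1
    while steps < len(D) - 1:
        D = min_plus(D, D)
        steps *= 2
    return D

W = np.array([[0, 3, INF],
              [INF, 0, 1],
              [2, INF, 0]], dtype=float)
print(apsp(W))  # dist(0, 2) = 3 + 1 = 4 via vertex 1
```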
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_33", "@cite_1", "@cite_39", "@cite_49", "@cite_47" ], "mid": [ "1823654214", "2066582699", "1970052762", "2618199688", "", "2114177725", "2120248756" ], "abstract": [ "We show that the all pairs shortest paths (APSP) problem for undirected graphs with integer edge weights taken from the range 1, 2, ..., M can be solved using only a logarithmic number of distance products of matrices with elements in the range (1, 2, ..., M). As a result, we get an algorithm for the APSP problem in such graphs that runs in O (Mn sup spl omega ) time, where n is the number of vertices in the input graph, M is the largest edge weight in the graph, and spl omega <2.376 is the exponent of matrix multiplication. This improves, and also simplifies, an O (M sup ( spl omega +1) 2 n sup spl omega ) time algorithm of Galil and Margalit (1997).", "", "The authors have solved the all pairs shortest distances (APSD) problem for graphs with integer edge lengths. Our algorithm is subcubic for edge lengths of small (?M) absolute value. In this paper we show how to transform these algorithms to solve the all pairs shortest paths (APSP), in the same time complexity, up to a polylogarithmic factor. Forn=|V| the number of vertices,Mthe bound on edge length, and?the exponent of matrix multiplication, we get the following results: 1. A directed nonnegative APSP(n, M) algorithm which runs inO(T(n, M)) time, where T(n, m)= 2. An undirected APSP(n, M) algorithm which runs inO(M(?+1) 2n?log(Mn)) time. 3. A general APSP(n, M) algorithm which runs inO((Mn)(3+?) 2).", "The upper bound on the exponent,?, of matrix multiplication over a ring that was three in 1968 has decreased several times and since 1986 it has been 2.376. On the other hand, the exponent of the algorithms known for the all pairs shortest path problem has stayed at three all these years even for the very special case of directed graphs with uniform edge lengths. In this paper we give an algorithm of timeO(n?log3n),?=(3+?) 2, for the case of edge lengths in ?1, 0, 1 . Thus, for the current known bound on?, we get a bound on the exponent,?<2.688. In case of integer edge lengths with absolute value bounded above byM, the time bound isO((Mn)?log3n) and the exponent is less than 3 forM=O(n?), for?<0.116 and the current bound on?.", "", "Let @math be the maximal value such that the product of an @math matrix by an @math matrix can be computed with @math arithmetic operations. In this paper we show that @math , which improves the previous record @math by Coppersmith (Journal of Complexity, 1997). More generally, we construct a new algorithm for multiplying an @math matrix by an @math matrix, for any value @math . The complexity of this algorithm is better than all known algorithms for rectangular matrix multiplication. In the case of square matrix multiplication (i.e., for @math ), we recover exactly the complexity of the algorithm by Coppersmith and Wino grad (Journal of Symbolic Computation, 1990). These new upper bounds can be used to improve the time complexity of several known algorithms that rely on rectangular matrix multiplication. 
For example, we directly obtain a @math -time algorithm for the all-pairs shortest paths problem over directed graphs with small integer weights, where @math denotes the number of vertices, and also improve the time complexity of sparse square matrix multiplication.", "This paper presents a method to analyze the powers of a given trilinear form (a special kind of algebraic construction also called a tensor) and obtain upper bounds on the asymptotic complexity of matrix multiplication. Compared with existing approaches, this method is based on convex optimization, and thus has polynomial-time complexity. As an application, we use this method to study powers of the construction given by Coppersmith and Winograd [Journal of Symbolic Computation, 1990] and obtain the upper bound ω" ] }
For approximate APSP on real-valued graphs with weights in @math , an additive @math -approximation in time @math has been presented. More recently, among other results, an algorithm was given that computes every distance @math up to an additive error of @math in time @math . For very small @math , this interpolates between Zwick's fastest exact algorithm and his approximation algorithm @cite_15 .
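The usual device behind such multiplicative approximation schemes is to round every weight up to the nearest power of (1+ε): each edge, and hence each path, is overestimated by a factor of at most (1+ε), but only about log_{1+ε} W distinct weight classes remain, which is exactly where the log W factors of scaling-based methods come from. A toy illustration of the rounding step (not any specific cited algorithm):

```python
import math

def round_weight(w, eps):
    # Round w up to the nearest power of (1 + eps); the relative error of
    # any path length under this rounding is at most (1 + eps).
    if w <= 0:
        return w
    k = math.ceil(math.log(w, 1 + eps))
    return (1 + eps) ** k

eps = 0.1
for w in [1, 7, 100, 12345]:
    r = round_weight(w, eps)
    assert w <= r + 1e-9 and r <= (1 + eps) * w + 1e-9
    print(w, "->", round(r, 3))
```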
{ "cite_N": [ "@cite_15" ], "mid": [ "2049500052" ], "abstract": [ "We present two new algorithms for solving the All Pairs Shortest Paths (APSP) problem for weighted directed graphs. Both algorithms use fast matrix multiplication algorithms.The first algorithm solves the APSP problem for weighted directed graphs in which the edge weights are integers of small absolute value in O(n2+μ) time, where μ satisfies the equation ω(1, μ, 1) = 1 + 2μ and ω(1, μ, 1) is the exponent of the multiplication of an n × nμ matrix by an nμ × n matrix. Currently, the best available bounds on ω(1, μ, 1), obtained by Coppersmith, imply that μ 0 is an error parameter and W is the largest edge weight in the graph, after the edge weights are scaled so that the smallest non-zero edge weight in the graph is 1. It returns estimates of all the distances in the graph with a stretch of at most 1 + e. Corresponding paths can also be found efficiently." ] }
In this paper we will focus on the problem of @math -approximating APSP when @math is close to @math . For @math the problem is at least as hard as Boolean matrix multiplication @cite_7 and thus requires time @math . However, in the regime @math there are more efficient algorithms for undirected graphs, using, e.g., the distance oracles of @cite_42 .
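The hardness statement rests on a folklore gadget: a layered graph in which a distance of 2 encodes a 1-entry of a Boolean product and a distance of at least 4 encodes a 0-entry, so any non-underestimating distance oracle with stretch below 2 recovers Boolean matrix multiplication. A small self-contained sketch, with exact distances standing in for the approximate oracle:

```python
import numpy as np

def bmm_from_approx_apsp(A, B, dist_oracle):
    # Layered graph I-K-J with i~k iff A[i, k] = 1 and k~j iff B[k, j] = 1.
    # The graph is bipartite between I+J and K, so dist(i, j) = 2 exactly
    # when (A*B)[i, j] = 1 and otherwise >= 4; thresholding at 4 suffices
    # for any non-underestimating oracle with stretch < 2.
    n = A.shape[0]
    adj = np.zeros((3 * n, 3 * n), dtype=bool)
    adj[:n, n:2 * n] = A.astype(bool)      # I - K edges
    adj[n:2 * n, 2 * n:] = B.astype(bool)  # K - J edges
    adj = adj | adj.T                      # make the graph undirected
    D = dist_oracle(adj)
    return D[:n, 2 * n:] < 4

def exact_dist(adj):
    # Exact distances stand in for the oracle in this demo; two min-plus
    # squarings cover all paths of length <= 4, which is enough here.
    D = np.where(adj, 1.0, np.inf)
    np.fill_diagonal(D, 0.0)
    for _ in range(2):
        D = np.minimum(D, np.min(D[:, :, None] + D[None, :, :], axis=1))
    return D

rng = np.random.default_rng(1)
A, B = rng.integers(0, 2, (4, 4)), rng.integers(0, 2, (4, 4))
assert np.array_equal(bmm_from_approx_apsp(A, B, exact_dist), (A @ B) > 0)
print("Boolean product recovered from distance estimates")
```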
{ "cite_N": [ "@cite_42", "@cite_7" ], "mid": [ "2045446569", "2083534148" ], "abstract": [ "Let G = (V,E) be an undirected weighted graph with vVv = n and vEv = m. Let k ≥ 1 be an integer. We show that G = (V,E) can be preprocessed in O(kmn1 k) expected time, constructing a data structure of size O(kn1p1 k), such that any subsequent distance query can be answered, approximately, in O(k) time. The approximate distance returned is of stretch at most 2k−1, that is, the quotient obtained by dividing the estimated distance by the actual distance lies between 1 and 2k−1. A 1963 girth conjecture of Erdos, implies that Ω(n1p1 k) space is needed in the worst case for any real stretch strictly smaller than 2kp1. The space requirement of our algorithm is, therefore, essentially optimal. The most impressive feature of our data structure is its constant query time, hence the name \"oracle\". Previously, data structures that used only O(n1p1 k) space had a query time of Ω(n1 k).Our algorithms are extremely simple and easy to implement efficiently. They also provide faster constructions of sparse spanners of weighted graphs, and improved tree covers and distance labelings of weighted or unweighted graphs.", "Let G=(V,E) be a weighted undirected graph. A path between u,v?V is said to be of stretch t if its length is at most t times the distance between u and v in the graph. We consider the problem of finding small-stretch paths between all pairs of vertices in the graph G.It is easy to see that finding paths of stretch less than 2 between all pairs of vertices in an undirected graph with n vertices is at least as hard as the Boolean multiplication of two n×n matrices. We describe three algorithms for finding small-stretch paths between all pairs of vertices in a weighted graph with n vertices and m edges. The first algorithm, STRETCH2, runs in O(n3 2m1 2) time and finds stretch 2 paths. The second algorithm, STRETCH7 3, runs in O(n7 3) time and finds stretch 7 3 paths. Finally, the third algorithm, STRETCH3, runs in O(n2) and finds stretch 3 paths.Our algorithms are simpler, more efficient and more accurate than the previously best algorithms for finding small-stretch paths. Unlike all previous algorithms, our algorithms are not based on the construction of sparse spanners or sparse neighborhood covers." ] }
APSP and APBP (All Pairs Bottleneck Paths) can be easily computed in time @math on quantum computers @cite_40 . Later work designed the first quantum algorithm running in time @math , and noted that every problem equivalent to APBP admits a nontrivial @math -time algorithm in the quantum realm.
{ "cite_N": [ "@cite_40" ], "mid": [ "1554473710" ], "abstract": [ "We consider the quantum time complexity of the all pairs shortest paths (APSP) problem and some of its variants. The trivial classical algorithm for APSP and most all pairs path problems runs in @math time, while the trivial algorithm in the quantum setting runs in @math time, using Grover search. A major open problem in classical algorithms is to obtain a truly subcubic time algorithm for APSP, i.e. an algorithm running in @math time for constant @math . To approach this problem, many truly subcubic time classical algorithms have been devised for APSP and its variants for structured inputs. Some examples of such problems are APSP in geometrically weighted graphs, graphs with small integer edge weights or a small number of weights incident to each vertex, and the all pairs earliest arrivals problem. In this paper we revisit these problems in the quantum setting and obtain the first nontrivial (i.e. @math time) quantum algorithms for the problems." ] }
It is also worth mentioning that there are efficient algorithms for matrix products over other algebraic structures, such as the dominance product, the @math -product, and the @math -product (see, e.g., @cite_31 ).
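For reference, the Min-Max Product from the abstract has a simple cubic-time definition, sketched below; the equivalence result says that any polynomially faster algorithm for it would transfer to (1+ε)-approximate directed APSP. Iterating the product with zero diagonals to a fixed point yields all-pairs bottleneck-style path values.

```python
import numpy as np

def min_max_product(A, B):
    # C[i, j] = min over k of max(A[i, k], B[k, j]); naive O(n^3) via
    # broadcasting, analogous to the min-plus product but with max
    # taking the role of addition.
    return np.min(np.maximum(A[:, :, None], B[None, :, :]), axis=1)

A = np.array([[0., 5., np.inf],
              [np.inf, 0., 3.],
              [2., np.inf, 0.]])
C = A.copy()
for _ in range(2):  # repeated squaring to a fixed point for n = 3
    C = min_max_product(C, C)
print(C)  # C[0, 2] = max(5, 3) = 5: minimax route 0 -> 1 -> 2
```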
{ "cite_N": [ "@cite_31" ], "mid": [ "2268320119" ], "abstract": [ "Problems related to computing optimal paths have been abundant in computer science since its emergence as a field. Yet for a large number of such problems we still do not know whether the state-of-the-art algorithms are the best possible. A notable example of this phenomenon is the all pairs shortest paths problem in a directed graph with real edge weights. The best algorithm (modulo small polylogarithmic improvements) for this problem runs in cubic time, a running time known since the 1960s (by Floyd and Warshall). Our grasp of many such fundamental algorithmic questions is far from optimal, and the major goal of this thesis is to bring some new insights into efficiently solving path problems in graphs. We focus on several path problems optimizing different measures: shortest paths, maximum bottleneck paths, minimum nondecreasing paths, and various extensions. For the all-pairs versions of these path problems we use an algebraic approach. We obtain improved algorithms using reductions to fast matrix multiplication. For maximum bottleneck paths and minimum nondecreasing paths we are the first to break the cubic barrier, obtaining truly subcubic strongly polynomial algorithms. We also consider a nonalgebraic, combinatorial approach, which is considered more efficient in practice compared to methods based on fast matrix multiplication. We present a combinatorial data structure that maintains a matrix so that products with given sparse vectors can be computed efficiently. This allows us to obtain good running times for path problems in unweighted sparse graphs. This thesis also gives algorithms for some single source path problems. We obtain the first linear time algorithm for the single source minimum nondecreasing paths problem. We give some extensions to this, including an algorithm to find cheapest minimum nondecreasing paths. Besides finding optimal paths, we consider the related problem of finding optimal cycles. In particular, we focus on the problem of finding in a weighted graph a triangle of maximum weight sum. We obtain the first truly subcubic algorithm for finding a maximum weight triangle in a node-weighted graph. We also present algorithms for the edge-weighted case. These algorithms immediately imply good algorithms for finding maximum weight k-cliques, or arbitrary maximum weight pattern subgraphs of fixed size." ] }
1907.10931
2963645312
Nonlinear image registration continues to be a fundamentally important tool in medical image analysis. Diagnostic tasks, image-guided surgery and radiotherapy as well as motion analysis all rely heavily on accurate intra-patient alignment. Furthermore, inter-patient registration enables atlas-based segmentation or landmark localisation and shape analysis. When labelled scans are scarce and anatomical differences large, conventional registration has often remained superior to deep learning methods that have so far mainly dealt with relatively small or low-complexity deformations. We address this shortcoming by leveraging ideas from probabilistic dense displacement optimisation that has excelled in many registration tasks with large deformations. We propose to design a network with approximate min-convolutions and mean field inference for differentiable displacement regularisation within a discrete weakly-supervised registration setting. By employing these meaningful and theoretically proven constraints, our learnable registration algorithm contains very few trainable weights (primarily for feature extraction) and is easier to train with few labelled scans. It is very fast in training and inference and achieves state-of-the-art accuracies for the challenging inter-patient registration of abdominal CT, outperforming previous deep learning approaches by 15% Dice overlap.
* Contributions We propose a new learning model for DLIR that better leverages the advantages of probabilistic dense displacement sampling by introducing strong regularisation with differentiable constraints that explicitly consider the 6D nature of the problem. We hence decouple convolutional feature learning from the fitting of a spatial transformation, using mean-field inference for regularisation @cite_16 @cite_6 and approximate min-convolutions @cite_17 for computing inter-label compatibilities. Our feature extractor uses 3D deformable convolutions @cite_13 and is very lightweight. To our knowledge, this is the first approach that combines discrete DLIR with the differentiable use of mean-field regularisation. In contrast to previous work, our model requires fewer trainable weights, captures larger deformations and can be trained from few labelled scans to high accuracy. We also introduce a new non-local label loss for improved guidance instead of the more widely used spatial-transformer-based loss.
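To make the regularisation machinery concrete, here is a sketch of the distance-transform idea behind fast min-convolutions in message passing, in the spirit of @cite_17 . It assumes a linear (L1) displacement penalty, for which two sequential passes are exact; the quadratic costs often used in practice require a lower-envelope pass instead. Running such a 1D pass separably along each displacement axis is what keeps dense 3D label spaces tractable.

```python
import numpy as np

def min_convolution_l1(costs, slope):
    # out[d] = min over d' of costs[d'] + slope * |d - d'|.
    # Two sweeps replace the quadratic-time minimisation with O(L):
    # each pass propagates the best value seen so far plus the penalty.
    out = costs.astype(float)
    for d in range(1, len(out)):             # forward pass
        out[d] = min(out[d], out[d - 1] + slope)
    for d in range(len(out) - 2, -1, -1):    # backward pass
        out[d] = min(out[d], out[d + 1] + slope)
    return out

c = np.array([5., 0., 4., 4., 9.])  # unary costs over displacement labels
print(min_convolution_l1(c, slope=1.0))  # -> [1. 0. 1. 2. 3.]
```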
{ "cite_N": [ "@cite_16", "@cite_13", "@cite_6", "@cite_17" ], "mid": [ "2161236525", "2913629396", "2124592697", "2165949176" ], "abstract": [ "Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.", "Abstract Deep networks have set the state-of-the-art in most image analysis tasks by replacing handcrafted features with learned convolution filters within end-to-end trainable architectures. Still, the specifications of a convolutional network are subject to much manual design – the shape and size of the receptive field for convolutional operations is a very sensitive part that has to be tuned for different image analysis applications. 3D fully-convolutional multi-scale architectures with skip-connection that excel at semantic segmentation and landmark localisation have huge memory requirements and rely on large annotated datasets - an important limitation for wider adaptation in medical image analysis. We propose a novel and effective method based on trainable 3D convolution kernels that learns both filter coefficients and spatial filter offsets in a continuous space based on the principle of differentiable image interpolation first introduced for spatial transformer network. A deep network that incorporates this one binary extremely large and inflecting sparse kernel (OBELISK) filter requires fewer trainable parameters and less memory while achieving high quality results compared to fully-convolutional U-Net architectures on two challenging 3D CT multi-organ segmentation tasks. Extensive validation experiments indicate that the performance of sparse deformable convolutions is due to their ability to capture large spatial context with few expressive filter parameters and that network depth is not always necessary to learn complex shape and appearance features. A combination with conventional CNNs further improves the delineation of small organs with large shape variations and the fast inference time using flexible image sampling may offer new potential use cases for deep networks in computer-assisted, image-guided interventions.", "Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. 
To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.", "Markov random field models provide a robust and unified framework for early vision problems such as stereo and image restoration. Inference algorithms based on graph cuts and belief propagation have been found to yield accurate results, but despite recent advances are often too slow for practical use. In this paper we present some algorithmic techniques that substantially improve the running time of the loopy belief propagation approach. One of the techniques reduces the complexity of the inference algorithm to be linear rather than quadratic in the number of possible labels for each pixel, which is important for problems such as image restoration that have a large label set. Another technique speeds up and reduces the memory requirements of belief propagation on grid graphs. A third technique is a multi-grid method that makes it possible to obtain good results with a small fixed number of message passing iterations, independent of the size of the input images. Taken together these techniques speed up the standard algorithm by several orders of magnitude. In practice we obtain results that are as accurate as those of other global methods (e.g., using the Middlebury stereo benchmark) while being nearly as fast as purely local methods." ] }
1907.11117
2963229777
This work introduces verb-only representations for both recognition and retrieval of visual actions, in video. Current methods neglect legitimate semantic ambiguities between verbs, instead choosing unambiguous subsets of verbs along with objects to disambiguate the actions. We instead propose multiple verb-only labels, which we learn through hard or soft assignment as a regression. This enables learning a much larger vocabulary of verbs, including contextual overlaps of these verbs. We collect multi-verb annotations for three action video datasets and evaluate the verb-only labelling representations for action recognition and cross-modal retrieval (video-to-text and text-to-video). We demonstrate that multi-label verb-only representations outperform conventional single verb labels. We also explore other benefits of a multi-verb representation including cross-dataset retrieval and verb type (manner and result verb types) retrieval.
Action Recognition in Videos. Video Action Recognition datasets are commonly annotated with a reduced set of semantically distinct verb labels @cite_26 @cite_18 @cite_54 @cite_25 @cite_24 @cite_29 @cite_31 @cite_43 . Only in EPIC-Kitchens @cite_26 are verb labels collected from narrations with an open vocabulary, leading to overlapping labels that are then manually clustered into unambiguous classes. Ambiguity and overlaps in verbs have been noted in @cite_51 @cite_48 . Our prior work @cite_51 uses the verb hierarchy in WordNet @cite_1 synsets to reduce ambiguity. We note how annotators were confused and often could not distinguish between the different verb meanings. Khamis and Davis @cite_48 used multi-verb labels in action recognition, on a small set of ten verbs. They jointly learn multi-label classification and label correlation, using a bi-linear approach, allowing an actor to be in a state of performing multiple actions such as walking and talking. This work is the closest to ours in motivation; however, their approach uses hard assignment of verbs and does not address single-verb ambiguity, assuming each verb to be non-ambiguous. To our knowledge, no other work has explored multi-label verb-only representations of actions in video.
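A minimal sketch contrasting the two labelling schemes discussed here: hard multi-verb assignment trained with binary cross-entropy versus soft assignment treated as a regression onto annotator agreement scores. The verb vocabulary, the scores, and the particular losses are illustrative assumptions, not the exact formulation of any cited work.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_assignment_loss(logits, soft_labels):
    # Regression onto soft verb relevance in [0, 1], e.g. the fraction
    # of annotators choosing each verb; plain MSE on sigmoid outputs.
    return float(np.mean((sigmoid(logits) - soft_labels) ** 2))

def hard_assignment_loss(logits, multi_hot):
    # Binary cross-entropy against a thresholded multi-hot verb vector.
    p = np.clip(sigmoid(logits), 1e-7, 1 - 1e-7)
    return float(-np.mean(multi_hot * np.log(p)
                          + (1 - multi_hot) * np.log(1 - p)))

# One clip, vocabulary of 5 verbs; 'open' and 'pull' overlap for a door.
soft = np.array([0.9, 0.7, 0.1, 0.0, 0.0])  # annotator agreement scores
hard = (soft >= 0.5).astype(float)          # hard multi-verb assignment
logits = np.array([2.0, 1.0, -2.0, -3.0, -3.0])
print(soft_assignment_loss(logits, soft), hard_assignment_loss(logits, hard))
```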
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_48", "@cite_54", "@cite_29", "@cite_1", "@cite_24", "@cite_43", "@cite_31", "@cite_51", "@cite_25" ], "mid": [ "2198667788", "2964242760", "1946832903", "105287674", "2949827582", "2121678312", "2625366777", "2337252826", "", "2496009737", "2212494831" ], "abstract": [ "We present a fully unsupervised approach for the discovery of i) task relevant objects and ii) how these objects have been used. A Task Relevant Object (TRO) is an object, or part of an object, with which a person interacts during task performance. Given egocentric video from multiple operators, the approach can discover objects with which the users interact, both static objects such as a coffee machine as well as movable ones such as a cup. Importantly, we also introduce the term Mode of Interaction (MOI) to refer to the different ways in which TROs are used. Say, a cup can be lifted, washed, or poured into. When harvesting interactions with the same object from multiple operators, common MOIs can be found. Setup and Dataset: Using a wearable camera and gaze tracker (Mobile Eye-XG from ASL), egocentric video is collected of users performing tasks, along with their gaze in pixel coordinates. Six locations were chosen: kitchen, workspace, laser printer, corridor with a locked door, cardiac gym and weight-lifting machine. The Bristol Egocentric Object Interactions Dataset is publically available .", "", "Action recognition is a fundamental problem in computer vision. However, all the current approaches pose the problem in a multi-class setting, where each actor is modeled as performing a single action at a time. In this work we pose the action recognition as a multi-label problem, i.e., an actor can be performing any plausible subset of actions. Determining which subsets of labels can co-occur is typically treated as a separate problem, typically modeled sparsely or fixed apriori to label correlation coefficients. In contrast, we formulate multi-label training and label correlation estimation as a joint max-margin bilinear classification problem. Our joint approach effectively trains discriminative bilinear classifiers that leverage label correlations. To evaluate our approach we relabeled the UCLA Courtyard dataset for the multi-label setting. We demonstrate that our joint model outperforms baselines on the same task and report state-of-the-art per-label accuracies on the dataset.", "", "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 430 15-minute video clips, where actions are localized in space and time, resulting in 1.58M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. We will release the dataset publicly. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. 
To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.6 mAP, underscoring the need for developing new approaches for video understanding.", "The goal of this project is to provide lexical resources for natural language research. The primary emphases are on the further development and dissemination of the on-line lexical database, WordNet. A secondary goal is to learn how to develop contextual representations for different senses of a polysemous word, where a contextual representation is comprised of topical and local context for each sense.", "Neural networks trained on datasets such as ImageNet have led to major advances in visual object classification. One obstacle that prevents networks from reasoning more deeply about complex scenes and situations, and from integrating visual knowledge with natural language, like humans do, is their lack of common sense knowledge about the physical world. Videos, unlike still images, contain a wealth of detailed information about the physical world. However, most labelled video datasets represent high-level concepts rather than detailed physical aspects about actions and scenes. In this work, we describe our ongoing collection of the “something-something” database of video prediction tasks whose solutions require a common sense understanding of the depicted situation. The database currently contains more than 100,000 videos across 174 classes, which are defined as caption-templates. We also describe the challenges in crowd-sourcing this data at scale.", "Computer vision has a great potential to help our daily lives by searching for lost keys, watering flowers or reminding us to take a pill. To succeed with such tasks, computer vision methods need to be trained from real and diverse examples of our daily dynamic scenes. While most of such scenes are not particularly exciting, they typically do not appear on YouTube, in movies or TV broadcasts. So how do we collect sufficiently many diverse but boring samples representing our lives? We propose a novel Hollywood in Homes approach to collect such data. Instead of shooting videos in the lab, we ensure diversity by distributing and crowdsourcing the whole process of video creation from script writing to video recording and annotation. Following this procedure we collect a new dataset, Charades, with hundreds of people recording videos in their own homes, acting out casual everyday activities. The dataset is composed of 9,848 annotated videos with an average length of 30 s, showing activities of 267 people from three continents. Each video is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacted objects. In total, Charades provides 27,847 video descriptions, 66,500 temporally localized intervals for 157 action classes and 41,104 labels for 46 object classes. Using this rich data, we evaluate and provide baseline results for several tasks including action recognition and automatic description generation. 
We believe that the realism, diversity, and casual nature of this dataset will present unique challenges and new opportunities for computer vision community.", "", "We present SEMBED, an approach for embedding an egocentric object interaction video in a semantic-visual graph to estimate the probability distribution over its potential semantic labels. When object interactions are annotated using unbounded choice of verbs, we embrace the wealth and ambiguity of these labels by capturing the semantic relationships as well as the visual similarities over motion and appearance features. We show how SEMBED can interpret a challenging dataset of 1225 freely annotated egocentric videos, outperforming SVM classification by more than 5 .", "We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze." ] }
Action Retrieval. Distinct from recognition, cross-modal retrieval approaches have been proposed for visual actions both in images @cite_6 @cite_21 @cite_30 and in videos @cite_49 @cite_15 @cite_27 . These works focus on instance retrieval: given a caption, can the corresponding video or image be retrieved, and vice versa. This differs from our attempt to retrieve similar actions rather than only the corresponding video or caption. Only Hahn et al. @cite_44 train an embedding space for videos and verbs only, using word2vec as the target space. They use verbs from UCF101 @cite_52 and HMDB51 @cite_39 in addition to verb-noun classes from Kinetics @cite_40 . These are coarser actions (diving vs. running) and as such have little overlap, allowing the target space to perform well on unseen actions.
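At test time, cross-modal retrieval of this kind reduces to nearest-neighbour search in a shared embedding space. The sketch below assumes both modalities have already been mapped into a common space (say, word2vec-sized) by learned encoders; the embeddings here are random stand-ins, and the same routine serves both text-to-video and video-to-text directions.

```python
import numpy as np

def retrieve(queries, gallery, top_k=3):
    # Rank gallery items by cosine similarity to each query embedding.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = q @ g.T                      # (num_queries, num_gallery)
    return np.argsort(-sims, axis=1)[:, :top_k]

rng = np.random.default_rng(0)
verb_embeddings = rng.normal(size=(10, 300))   # stand-in word2vec vectors
video_embeddings = rng.normal(size=(50, 300))  # stand-in video encodings
print(retrieve(verb_embeddings, video_embeddings))  # text-to-video ranks
```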
{ "cite_N": [ "@cite_30", "@cite_21", "@cite_52", "@cite_6", "@cite_39", "@cite_44", "@cite_27", "@cite_40", "@cite_49", "@cite_15" ], "mid": [ "2405223529", "2963125676", "24089286", "2744926832", "2126579184", "2908138876", "877909479", "2619947201", "2890443664", "2142900973" ], "abstract": [ "Zero-shot recognition (ZSR) deals with the problem of predicting class labels for target domain instances based on source domain side information (e.g. attributes) of unseen classes. We formulate ZSR as a binary prediction problem. Our resulting classifier is class-independent. It takes an arbitrary pair of source and target domain instances as input and predicts whether or not they come from the same class, i.e. whether there is a match. We model the posterior probability of a match since it is a sufficient statistic and propose a latent probabilistic model in this context. We develop a joint discriminative learning framework based on dictionary learning to jointly learn the parameters of our model for both domains, which ultimately leads to our class-independent classifier. Many of the existing embedding methods can be viewed as special cases of our probabilistic model. On ZSR our method shows 4.90 improvement over the state-of-the-art in accuracy averaged across four benchmark datasets. We also adapt ZSR method for zero-shot retrieval and show 22.45 improvement accordingly in mean average precision (mAP).", "Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes.", "We introduce UCF101 which is currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips and 27 hours of video data. The database consists of realistic user uploaded videos containing camera motion and cluttered background. Additionally, we provide baseline action recognition results on this new dataset using standard bag of words approach with overall performance of 44.5 . To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips and also unconstrained nature of such clips.", "Querying with an example image is a simple and intuitive interface to retrieve information from a visual database. Most of the research in image retrieval has focused on the task of instance-level image retrieval, where the goal is to retrieve images that contain the same object instance as the query image. In this work we move beyond instance-level retrieval and consider the task of semantic image retrieval in complex scenes, where the goal is to retrieve images that share the same semantics as the query image. We show that, despite its subjective nature, the task of semantically ranking visual scenes is consistently implemented across a pool of human annotators. 
We also show that a similarity based on human-annotated region-level captions is highly correlated with the human ranking and constitutes a good computable surrogate. Following this observation, we learn a visual embedding of the images where the similarity in the visual space is correlated with their semantic similarity surrogate. We further extend our model to learn a joint embedding of visual and textual cues that allows one to query the database using a text modifier in addition to the query image, adapting the results to the modifier. Finally, our model can ground the ranking decisions by showing regions that contributed the most to the similarity between pairs of images, providing a visual explanation of the similarity.", "With nearly one billion online videos viewed everyday, an emerging new frontier in computer vision research is recognition and search in video. While much effort has been devoted to the collection and annotation of large scalable static image datasets containing thousands of image categories, human action datasets lag far behind. Current action recognition databases contain on the order of ten different action categories collected under fairly controlled conditions. State-of-the-art performance on these datasets is now near ceiling and thus there is a need for the design and creation of new benchmarks. To address this issue we collected the largest action video database to-date with 51 action categories, which in total contain around 7,000 manually annotated clips extracted from a variety of sources ranging from digitized movies to YouTube. We use this database to evaluate the performance of two representative computer vision systems for action recognition and explore the robustness of these methods under various conditions such as camera motion, viewpoint, video quality and occlusion.", "We describe a novel cross-modal embedding space for actions, named Action2Vec, which combines linguistic cues from class labels with spatio-temporal features derived from video clips. Our approach uses a hierarchical recurrent network to capture the temporal structure of video features. We train our embedding using a joint loss that combines classification accuracy with similarity to Word2Vec semantics. We evaluate Action2Vec by performing zero shot action recognition and obtain state of the art results on three standard datasets. In addition, we present two novel analogy tests which quantify the extent to which our joint embedding captures distributional semantics. This is the first joint embedding space to combine verbs and action videos, and the first to be thoroughly evaluated with respect to its distributional semantics.", "Recently, joint video-language modeling has been attracting more and more attention. However, most existing approaches focus on exploring the language model upon on a fixed visual model. In this paper, we propose a unified framework that jointly models video and the corresponding text sentences. The framework consists of three parts: a compositional semantics language model, a deep video model and a joint embedding model. In our language model, we propose a dependency-tree structure model that embeds sentence into a continuous vector space, which preserves visually grounded meanings and word order. In the visual model, we leverage deep neural networks to capture essential semantic information from videos. 
In the joint embedding model, we minimize the distance of the outputs of the deep video model and compositional language model in the joint space, and update these two models jointly. Based on these three parts, our system is able to accomplish three tasks: 1) natural language generation, and 2) video retrieval and 3) language retrieval. In the experiments, the results show our approach outperforms SVM, CRF and CCA baselines in predicting Subject-Verb-Object triplet and natural sentence generation, and is better than CCA in video retrieval and language retrieval tasks.", "We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.", "", "Despite a recent push towards large-scale object recognition, activity recognition remains limited to narrow domains and small vocabularies of actions. In this paper, we tackle the challenge of recognizing and describing activities in-the-wild''. We present a solution that takes a short video clip and outputs a brief sentence that sums up the main activity in the video, such as the actor, the action and its object. Unlike previous work, our approach works on out-of-domain actions: it does not require training videos of the exact activity. If it cannot find an accurate prediction for a pre-trained model, it finds a less specific answer that is also plausible from a pragmatic standpoint. We use semantic hierarchies learned from the data to help to choose an appropriate level of generalization, and priors learned from Web-scale natural language corpora to penalize unlikely combinations of actors actions objects, we also use a Web-scale language model to fill in'' novel verbs, i.e. when the verb does not appear in the training set. We evaluate our method on a large YouTube corpus and demonstrate it is able to generate short sentence descriptions of video clips better than baseline approaches." ] }
1907.10992
2963178965
This paper addresses the problem of enhancing underexposed photos. Existing methods have tackled this problem from many different perspectives and achieved remarkable progress. However, they may fail to produce satisfactory results due to the presence of visual artifacts such as color distortion, loss of details and uneven exposure, etc. To obtain high-quality results free of these artifacts, we present a novel underexposed photo enhancement approach in this paper. Our main observation is that the reason existing methods induce the artifacts is that they break a perceptual consistency between the input and the enhanced output. Based on this observation, an effective criterion, called perceptually bidirectional similarity (PBS), is proposed for preserving the perceptual consistency during enhancement. Particularly, we cast the underexposed photo enhancement as PBS-constrained illumination estimation optimization, where the PBS is defined as three constraints for estimating the illumination that can recover the enhancement results with normal exposure, distinct contrast, clear details and vivid color. To make our method more efficient and scalable to high-resolution images, we introduce a sampling-based strategy for accelerating the illumination estimation. Moreover, we extend our method to handle underexposed videos. Qualitative and quantitative comparisons as well as the user study demonstrate the superiority of our method over the state-of-the-art methods.
Mapping pixel intensities with sigmoid functions is another commonly used way to enhance photos. A well-known representative is gamma correction, which expands the dynamic range via a power-law function. As globally applying a sigmoid mapping may generate visually distorted results, existing methods usually perform locally adaptive mapping. For instance, Bennett and McMillan @cite_5 decomposed the input image into base and detail layers, and applied different mappings to the two layers to preserve the image details, while Yuan and Sun @cite_20 segmented the image into subregions and computed a luminance-aware, detail-preserving mapping for each subregion. Zhang et al. @cite_15 created multiple tone-mapped versions of the input image and fused them into a well-exposed image. Since finding locally optimal sigmoid mappings and ensuring globally smooth transitions are difficult, these methods often fail for complex images.
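A toy sketch of the base/detail idea described above: brighten only a smoothed base layer with a power-law (gamma) curve and add the detail layer back unchanged, so fine structure is preserved. A box blur stands in for the edge-preserving filters used in practice, and all parameter values are arbitrary.

```python
import numpy as np

def gamma_correct(x, gamma=2.2):
    # Global power-law curve: expands dark tones for x in [0, 1].
    return np.clip(x, 0.0, 1.0) ** (1.0 / gamma)

def enhance_base_detail(img, gamma=2.2, radius=2):
    # Split into base (box-blurred) and detail (residual) layers, remap
    # only the base layer, then recombine; a crude stand-in for the
    # edge-preserving decompositions used by the cited methods.
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    base = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            base += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    base /= k * k
    detail = img - base
    return np.clip(gamma_correct(base, gamma) + detail, 0.0, 1.0)

img = np.random.default_rng(0).random((32, 32)) * 0.3  # underexposed input
print(round(img.mean(), 3), "->", round(enhance_base_detail(img).mean(), 3))
```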
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_20" ], "mid": [ "", "2343431701", "2165107586" ], "abstract": [ "", "Underexposed video enhancement aims at revealing hidden details that are barely noticeable in LDR video frames with noise. Previous work typically relies on a single heuristic tone mapping curve to expand the dynamic range, which inevitably leads to uneven exposure and visual artifacts. In this paper, we present a novel approach for underexposed video enhancement using an efficient perception-driven progressive fusion. For an input underexposed video, we first remap each video frame using a series of tentative tone mapping curves to generate an multi-exposure image sequence that contains different exposed versions of the original video frame. Guided by some visual perception quality measures encoding the desirable exposed appearance, we locate all the best exposed regions from multi-exposure image sequences and then integrate them into a well-exposed video in a temporally consistent manner. Finally, we further perform an effective texture-preserving spatio-temporal filtering on this well-exposed video to obtain a high-quality noise-free result. Experimental results have shown that the enhanced video exhibits uniform exposure, brings out noticeable details, preserves temporal coherence, and avoids visual artifacts. Besides, we demonstrate applications of our approach to a set of problems including video dehazing, video denoising and HDR video reconstruction.", "We study the problem of automatically correcting the exposure of an input image. Generic auto-exposure correction methods usually fail in individual over- under-exposed regions. Interactive corrections may fix this issue, but adjusting every photograph requires skill and time. This paper will automate the interactive correction technique by estimating the image specific S-shaped non-linear tone curve that best fits the input image. Our first contribution is a new Zone-based region-level optimal exposure evaluation, which would consider both the visibility of individual regions and relative contrast between regions. Then a detail-preserving S-curve adjustment is applied based on the optimal exposure to obtain the final output. We show that our approach enables better corrections comparing with popular image editing tools and other automatic methods." ] }
1907.10992
2963178965
This paper addresses the problem of enhancing underexposed photos. Existing methods have tackled this problem from many different perspectives and achieved remarkable progress. However, they may fail to produce satisfactory results due to visual artifacts such as color distortion, loss of details, and uneven exposure. To obtain high-quality results free of these artifacts, we present a novel underexposed photo enhancement approach in this paper. Our main observation is that existing methods induce these artifacts because they break a perceptual consistency between the input and the enhanced output. Based on this observation, an effective criterion, called perceptually bidirectional similarity (PBS), is proposed for preserving the perceptual consistency during enhancement. In particular, we cast underexposed photo enhancement as PBS-constrained illumination estimation optimization, where PBS is defined as three constraints for estimating the illumination that can recover enhancement results with normal exposure, distinct contrast, clear details, and vivid color. To make our method more efficient and scalable to high-resolution images, we introduce a sampling-based strategy for accelerating the illumination estimation. Moreover, we extend our method to handle underexposed videos. Qualitative and quantitative comparisons, as well as a user study, demonstrate the superiority of our method over state-of-the-art methods.
This kind of method is built on the assumption that an underexposed image is the pixel-wise product of the expected enhancement result and a single-channel illumination map. In this fashion, the enhancement problem can be treated as an illumination estimation problem. Jobson et al. @cite_3 made an early attempt at this problem, but their results often look unnatural due to frequently appearing artifacts such as loss of details, color distortion, and uneven exposure. Subsequent methods in this category focus on improving the results @cite_12 @cite_21 @cite_28 @cite_45 @cite_8 . However, they may also fail, especially for non-uniformly illuminated underexposed images. Our method also belongs to this category. However, by maintaining the proposed PBS, our method is able to robustly generate visually pleasing results free of the visual artifacts encountered by previous methods (see Fig. , Fig. and ).
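For reference, the image formation model behind this category can be written as I = R * L, with I the observed image, R the expected enhancement result, and L the single-channel illumination map. Below is a minimal NumPy sketch of the idea, assuming a simple max-RGB initialization of L; actual methods additionally refine L with smoothness or structure priors before the division.

import numpy as np

def enhance_via_illumination(img, eps=1e-3):
    # Model: img = R * L (pixel-wise). Initialize the illumination map L as
    # the per-pixel maximum over the RGB channels (a common heuristic).
    L = img.max(axis=2, keepdims=True)  # H x W x 1
    # Recover the enhanced result R = img / L, guarding against division by zero.
    return np.clip(img / np.maximum(L, eps), 0.0, 1.0)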
{ "cite_N": [ "@cite_8", "@cite_28", "@cite_21", "@cite_3", "@cite_45", "@cite_12" ], "mid": [ "2566376500", "", "2412926690", "2150721269", "", "2054814429" ], "abstract": [ "When one captures images in low-light conditions, the images often suffer from low visibility. Besides degrading the visual aesthetics of images, this poor quality may also significantly degenerate the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a simple yet effective low-light image enhancement (LIME) method. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in R, G, and B channels. Furthermore, we refine the initial illumination map by imposing a structure prior on it, as the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging low-light images are present to reveal the efficacy of our LIME and show its superiority over several state-of-the-arts in terms of enhancement quality and efficiency.", "", "We propose a straightforward and efficient fusion-based method for enhancing weakly illumination images that uses several mature image processing techniques. First, we employ an illumination estimating algorithm based on morphological closing to decompose an observed image into a reflectance image and an illumination image. We then derive two inputs that represent luminance-improved and contrast-enhanced versions of the first decomposed illumination using the sigmoid function and adaptive histogram equalization. Designing two weights based on these inputs, we produce an adjusted illumination by fusing the derived inputs with the corresponding weights in a multi-scale fashion. Through a proper weighting and fusion strategy, we blend the advantages of different techniques to produce the adjusted illumination. The final enhanced image is obtained by compensating the adjusted illumination back to the reflectance. Through this synthesis, the enhanced image represents a trade-off among detail enhancement, local contrast improvement and preserving the natural feel of the image. In the proposed fusion-based framework, images under different weak illumination conditions such as backlighting, non-uniform illumination and nighttime can be enhanced. HighlightsA fusion-based method for enhancing various weakly illuminated images is proposed.The proposed method requires only one input to obtain the enhanced image.Different mature image processing techniques can be blended in our framework.Our method has an efficient computation time for practical applications.", "Direct observation and recorded color images of the same scenes are often strikingly different because human visual perception computes the conscious representation with vivid color and detail in shadows, and with resistance to spectral shifts in the scene illuminant. A computation for color images that approaches fidelity to scene observation must combine dynamic range compression, color consistency-a computational analog for human vision color constancy-and color and lightness tonal rendition. In this paper, we extend a previously designed single-scale center surround retinex to a multiscale version that achieves simultaneous dynamic range compression color consistency lightness rendition. 
This extension fails to produce good color rendition for a class of images that contain violations of the gray-world assumption implicit to the theoretical foundation of the retinex. Therefore, we define a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency. Extensive testing of the multiscale retinex with color restoration on several test scenes and over a hundred images did not reveal any pathological behaviour.", "", "Image enhancement plays an important role in image processing and analysis. Among various enhancement algorithms, Retinex-based algorithms can efficiently enhance details and have been widely adopted. Since Retinex-based algorithms regard illumination removal as a default preference and fail to limit the range of reflectance, the naturalness of non-uniform illumination images cannot be effectively preserved. However, naturalness is essential for image enhancement to achieve pleasing perceptual quality. In order to preserve naturalness while enhancing details, we propose an enhancement algorithm for non-uniform illumination images. In general, this paper makes the following three major contributions. First, a lightness-order-error measure is proposed to access naturalness preservation objectively. Second, a bright-pass filter is proposed to decompose an image into reflectance and illumination, which, respectively, determine the details and the naturalness of the image. Third, we propose a bi-log transformation, which is utilized to map the illumination to make a balance between details and naturalness. Experimental results demonstrate that the proposed algorithm can not only enhance the details but also preserve the naturalness for non-uniform illumination images." ] }
1907.10992
2963178965
This paper addresses the problem of enhancing underexposed photos. Existing methods have tackled this problem from many different perspectives and achieved remarkable progress. However, they may fail to produce satisfactory results due to visual artifacts such as color distortion, loss of details, and uneven exposure. To obtain high-quality results free of these artifacts, we present a novel underexposed photo enhancement approach in this paper. Our main observation is that existing methods induce these artifacts because they break a perceptual consistency between the input and the enhanced output. Based on this observation, an effective criterion, called perceptually bidirectional similarity (PBS), is proposed for preserving the perceptual consistency during enhancement. In particular, we cast underexposed photo enhancement as PBS-constrained illumination estimation optimization, where PBS is defined as three constraints for estimating the illumination that can recover enhancement results with normal exposure, distinct contrast, clear details, and vivid color. To make our method more efficient and scalable to high-resolution images, we introduce a sampling-based strategy for accelerating the illumination estimation. Moreover, we extend our method to handle underexposed videos. Qualitative and quantitative comparisons, as well as a user study, demonstrate the superiority of our method over state-of-the-art methods.
An increasing number of efforts have focused on learning-based enhancement methods since the pioneering work of Bychkovsky et al. @cite_43 , which provides MIT-Adobe FiveK, the first and largest dataset of input/output image pairs for tone adjustment. Yan et al. @cite_23 achieved automatic color enhancement by tackling a learning-to-rank problem, while Yan et al. @cite_14 enabled semantic-aware image enhancement. More recently, Lore et al. @cite_36 presented a deep autoencoder-based approach for enhancing low-light images. Gharbi et al. @cite_4 proposed bilateral learning to enable real-time image enhancement, while Chen et al. @cite_39 designed an unpaired learning model for image enhancement based on two-way generative adversarial networks (GANs). The main limitation of learning-based methods is that they typically do not generalize well to images that differ from those in the training datasets.
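As a schematic illustration of the paired, supervised setting that such datasets enable, one training step might look as follows; the model, loss, and optimizer here are placeholders of our choosing, not any particular cited architecture.

import torch
import torch.nn.functional as F

def paired_training_step(model, optimizer, underexposed, retouched):
    # One gradient step on (input, expert-retouched) image pairs,
    # e.g. as provided by MIT-Adobe FiveK; purely illustrative.
    optimizer.zero_grad()
    loss = F.l1_loss(model(underexposed), retouched)
    loss.backward()
    optimizer.step()
    return loss.item()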
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_36", "@cite_39", "@cite_43", "@cite_23" ], "mid": [ "1920280450", "2735974062", "2254039850", "2798844427", "2025328853", "2113636985" ], "abstract": [ "Photo retouching enables photographers to invoke dramatic visual impressions by artistically enhancing their photos through stylistic color and tone adjustments. However, it is also a time-consuming and challenging task that requires advanced skills beyond the abilities of casual photographers. Using an automated algorithm is an appealing alternative to manual work, but such an algorithm faces many hurdles. Many photographic styles rely on subtle adjustments that depend on the image content and even its semantics. Further, these adjustments are often spatially varying. Existing automatic algorithms are still limited and cover only a subset of these challenges. Recently, deep learning has shown unique abilities to address hard problems. This motivated us to explore the use of deep neural networks (DNNs) in the context of photo editing. In this article, we formulate automatic photo adjustment in a manner suitable for this approach. We also introduce an image descriptor accounting for the local semantics of an image. Our experiments demonstrate that training DNNs using these descriptors successfully capture sophisticated photographic styles. In particular and unlike previous techniques, it can model local adjustments that depend on image semantics. We show that this yields results that are qualitatively and quantitatively better than previous work.", "Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher.", "Abstract In surveillance, monitoring and tactical reconnaissance, gathering visual information from a dynamic environment and accurately processing such data are essential to making informed decisions and ensuring the success of a mission. Camera sensors are often cost-limited to capture clear images or videos taken in a poorly-lit environment. Many applications aim to enhance brightness, contrast and reduce noise content from the images in an on-board real-time manner. 
We propose a deep autoencoder-based approach to identify signal features from low-light images and adaptively brighten images without over-amplifying saturating the lighter parts in images with a high dynamic range. We show that a variant of the stacked-sparse denoising autoencoder can learn from synthetically darkened and noise-added training examples to adaptively enhance images taken from natural low-light environment and or are hardware-degraded. Results show significant credibility of the approach both visually and by quantitative comparison with various techniques.", "This paper proposes an unpaired learning method for image enhancement. Given a set of photographs with the desired characteristics, the proposed method learns a photo enhancer which transforms an input image into an enhanced image with those characteristics. The method is based on the framework of two-way generative adversarial networks (GANs) with several improvements. First, we augment the U-Net with global features and show that it is more effective. The global U-Net acts as the generator in our GAN model. Second, we improve Wasserstein GAN (WGAN) with an adaptive weighting scheme. With this scheme, training converges faster and better, and is less sensitive to parameters than WGAN-GP. Finally, we propose to use individual batch normalization layers for generators in two-way GANs. It helps generators better adapt to their own input distributions. All together, they significantly improve the stability of GAN training for our application. Both quantitative and visual results show that the proposed method is effective for enhancing images.", "Adjusting photographs to obtain compelling renditions requires skill and time. Even contrast and brightness adjustments are challenging because they require taking into account the image content. Photographers are also known for having different retouching preferences. As the result of this complexity, rule-based, one-size-fits-all automatic techniques often fail. This problem can greatly benefit from supervised machine learning but the lack of training data has impeded work in this area. Our first contribution is the creation of a high-quality reference dataset. We collected 5,000 photos, manually annotated them, and hired 5 trained photographers to retouch each picture. The result is a collection of 5 sets of 5,000 example input-output pairs that enable supervised learning. We first use this dataset to predict a user's adjustment from a large training set. We then show that our dataset and features enable the accurate adjustment personalization using a carefully chosen set of training photos. Finally, we introduce difference learning: this method models and predicts difference between users. It frees the user from using predetermined photos for training. We show that difference learning enables accurate prediction using only a handful of examples.", "We present a machine-learned ranking approach for automatically enhancing the color of a photograph. Unlike previous techniques that train on pairs of images before and after adjustment by a human user, our method takes into account the intermediate steps taken in the enhancement process, which provide detailed information on the person's color preferences. To make use of this data, we formulate the color enhancement task as a learning-to-rank problem in which ordered pairs of images are used for training, and then various color enhancements of a novel input image can be evaluated from their corresponding rank values. 
From the parallels between the decision tree structures we use for ranking and the decisions made by a human during the editing process, we posit that breaking a full enhancement sequence into individual steps can facilitate training. Our experiments show that this approach compares well to existing methods for automatic color enhancement." ] }
1907.10827
2963935048
Deep reinforcement learning has achieved great successes in recent years, but open challenges remain, such as convergence to locally optimal policies and sample inefficiency. In this paper, we contribute a novel self-supervised auxiliary task, Terminal Prediction (TP), which estimates temporal closeness to terminal states in episodic tasks. The intuition is to aid representation learning by letting the agent predict how close it is to a terminal state while learning its control policy. Although TP could be integrated with multiple algorithms, this paper focuses on Asynchronous Advantage Actor-Critic (A3C) and demonstrates the advantages of A3C-TP. Our extensive evaluation includes a set of Atari games, the BipedalWalker domain, and a mini version of the recently proposed multi-agent Pommerman game. Our results on Atari games and the BipedalWalker domain suggest that A3C-TP outperforms standard A3C in most of the tested domains and matches its performance in the others. In Pommerman, our proposed method provides significant improvements both in learning efficiency and in converging to better policies against different opponents.
Reinforcement learning approaches mainly fall into three categories: value-based methods such as Q-learning @cite_1 or the Deep Q-Network @cite_6 ; policy-based methods such as REINFORCE @cite_26 ; and actor-critic methods @cite_25 , which combine value- and policy-based techniques. In the last category in particular, several distributed actor-critic based DRL algorithms have recently been proposed @cite_22 . One notable example is A3C (Asynchronous Advantage Actor-Critic) @cite_16 , an algorithm that employs an asynchronous training scheme (using multiple CPU cores) for efficiency.
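As a rough sketch of the objective that actor-critic methods such as A3C optimize, here is a simplified, synchronous PyTorch version of the per-batch loss; the names, coefficients, and batch layout are our assumptions, and the asynchronous multi-worker machinery of A3C is not shown.

import torch
import torch.nn.functional as F

def actor_critic_loss(logits, values, actions, returns, entropy_beta=0.01):
    # Advantage: empirical return minus the critic's value baseline.
    advantages = returns - values
    log_probs = F.log_softmax(logits, dim=-1)
    taken = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(taken * advantages.detach()).mean()  # actor (policy) term
    value_loss = advantages.pow(2).mean()                # critic term
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
    return policy_loss + 0.5 * value_loss - entropy_beta * entropy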
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_1", "@cite_6", "@cite_16", "@cite_25" ], "mid": [ "2119717200", "2950872548", "", "2145339207", "2260756217", "" ], "abstract": [ "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.", "Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880 expert human performance, and a challenging suite of first-person, three-dimensional tasks leading to a mean speedup in learning of 10 @math and averaging 87 expert human performance on Labyrinth.", "", "An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.", "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.", "" ] }
1907.10827
2963935048
Deep reinforcement learning has achieved great successes in recent years, but open challenges remain, such as convergence to locally optimal policies and sample inefficiency. In this paper, we contribute a novel self-supervised auxiliary task, Terminal Prediction (TP), which estimates temporal closeness to terminal states in episodic tasks. The intuition is to aid representation learning by letting the agent predict how close it is to a terminal state while learning its control policy. Although TP could be integrated with multiple algorithms, this paper focuses on Asynchronous Advantage Actor-Critic (A3C) and demonstrates the advantages of A3C-TP. Our extensive evaluation includes a set of Atari games, the BipedalWalker domain, and a mini version of the recently proposed multi-agent Pommerman game. Our results on Atari games and the BipedalWalker domain suggest that A3C-TP outperforms standard A3C in most of the tested domains and matches its performance in the others. In Pommerman, our proposed method provides significant improvements both in learning efficiency and in converging to better policies against different opponents.
Another work closely related to ours is the UNREAL framework @cite_22 , which is built on top of A3C with several refinements and integrations. In particular, UNREAL proposes to learn a reward-prediction task in addition to a pixel-control task to speed up learning by improving representation learning. In contrast to on-policy A3C, UNREAL uses an experience replay buffer that is sampled with higher priority given to positively rewarded interactions to improve the critic network. Our method, A3C-TP, differs from UNREAL in several ways: (i) we do not introduce the additional critic improvement step, to better isolate the gain of our auxiliary task over vanilla A3C; (ii) even though we also integrate an auxiliary task, we keep the resulting method on-policy with minimal refinements and without an experience replay buffer, which might require correction for stale experience data; (iii) UNREAL's reward prediction requires class balancing of observed rewards in an off-policy fashion depending on the game's reward sparsity and distribution, whereas TP is balanced automatically, can be applied within on-policy DRL methods, and generalizes better for episodic tasks independently of the domain-specific reward distribution.
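Going by the description above and in the abstract, namely that TP estimates temporal closeness to the terminal state, a plausible form of the auxiliary objective is a regression of the network's TP head against the normalized step index. The sketch below is our reading, not necessarily the paper's exact formulation; lambda_tp is a hypothetical weighting coefficient.

import torch
import torch.nn.functional as F

def terminal_prediction_loss(tp_head_output, step_indices, episode_length):
    # Target y_t = t / T: 0 at the start of the episode, 1 at the terminal
    # state, so the target is balanced by construction (no reward-dependent
    # class balancing as in UNREAL's reward prediction).
    targets = step_indices.float() / float(episode_length)
    return F.mse_loss(tp_head_output, targets)

# total_loss = a3c_loss + lambda_tp * terminal_prediction_loss(...)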
{ "cite_N": [ "@cite_22" ], "mid": [ "2950872548" ], "abstract": [ "Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880 expert human performance, and a challenging suite of first-person, three-dimensional tasks leading to a mean speedup in learning of 10 @math and averaging 87 expert human performance on Labyrinth." ] }
1907.10628
2963606129
Domain adaptation is essential to enable wide usage of deep learning based networks trained using large labeled datasets. Adversarial learning based techniques have shown their utility towards solving this problem using a discriminator that ensures source and target distributions are close. However, here we suggest that rather than using a point estimate, it would be useful if a distribution-based discriminator could be used to bridge this gap. This could be achieved using multiple classifiers or traditional ensemble methods. In contrast, we suggest that a Monte Carlo dropout based ensemble discriminator suffices to obtain the distribution-based discriminator. Specifically, we propose a curriculum-based dropout discriminator that gradually increases the variance of the sample-based distribution, and the corresponding reverse gradients are used to align the source and target feature representations. Detailed results and a thorough ablation analysis show that our model outperforms state-of-the-art methods.
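As an aside on the mechanism, a Monte Carlo dropout ensemble can be drawn by keeping dropout active and running several stochastic forward passes; the spread of the outputs stands in for a distribution over the discriminator's verdict rather than a point estimate. A minimal sketch under our own naming, with the paper's curriculum over the dropout-induced variance omitted:

import torch

def mc_dropout_discriminator_samples(discriminator, features, n_samples=10):
    # train() keeps dropout stochastic; no_grad() because this sketch only
    # illustrates estimating the output distribution, not the adversarial
    # update itself.
    discriminator.train()
    with torch.no_grad():
        outs = torch.stack([discriminator(features) for _ in range(n_samples)])
    return outs.mean(dim=0), outs.var(dim=0)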
A large number of methods have been proposed to tackle the domain adaptation problem. The common basic structure is a Siamese architecture @cite_35 with two streams representing the source and target models, trained with a classification loss combined with either a discrepancy loss or an adversarial loss. The classification loss depends on the labeled source data, while the discrepancy loss reduces the shift between the two domains. A discrepancy-based deep learning method is deep domain confusion (DDC) @cite_43 , where a loss on a single fully connected (FC) layer of the source and target feature extractor networks is used to minimize the maximum mean discrepancy (MMD) between source and target. This approach is further extended by the deep adaptation network (DAN) @cite_40 . Recently, a number of other methods that use domain discrepancy have been proposed @cite_42 @cite_54 @cite_23 @cite_52 @cite_48 @cite_45 @cite_22 @cite_14 . Similar techniques have also been applied in vision-and-language work @cite_55 @cite_17 .
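For reference, here is a minimal (biased) estimate of the RBF-kernel MMD between batches of source and target features, the quantity such discrepancy-based methods minimize; the bandwidth sigma is an illustrative choice (DAN, for instance, uses multiple kernels).

import torch

def rbf_kernel(x, y, sigma=1.0):
    # k(a, b) = exp(-||a - b||^2 / (2 * sigma^2)) for all row pairs of x and y.
    sq_dists = torch.cdist(x, y).pow(2)
    return torch.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd(source_feats, target_feats, sigma=1.0):
    k_ss = rbf_kernel(source_feats, source_feats, sigma).mean()
    k_tt = rbf_kernel(target_feats, target_feats, sigma).mean()
    k_st = rbf_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st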
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_22", "@cite_48", "@cite_54", "@cite_42", "@cite_55", "@cite_52", "@cite_43", "@cite_40", "@cite_45", "@cite_23", "@cite_17" ], "mid": [ "2127589108", "2312004824", "2964278684", "2963275094", "", "2962687275", "2963466731", "2584886900", "1565327149", "2159291411", "2963777311", "2964288524", "2887991371" ], "abstract": [ "This paper describes an algorithm for verification of signatures written on a pen-input tablet. The algorithm is based on a novel, artificial neural network, called a \"Siamese\" neural network. This network consists of two identical sub-networks joined at their outputs. During training the two sub-networks extract features from two signatures, while the joining neuron measures the distance between the two feature vectors. Verification consists of comparing an extracted feature vector with a stored feature vector for the signer. Signatures closer to this stored representation than a chosen threshold are accepted, all other signatures are rejected as forgeries.", "The performance of a classifier trained on data coming from a specific domain typically degrades when applied to a related but different one. While annotating many samples from the new domain would address this issue, it is often too expensive or impractical. Domain Adaptation has therefore emerged as a solution to this problem; It leverages annotated data from a source domain, in which it is abundant, to train a classifier to operate in a target domain, in which it is either sparse or even lacking altogether. In this context, the recent trend consists of learning deep architectures whose weights are shared for both domains, which essentially amounts to learning domain invariant features. Here, we show that it is more effective to explicitly model the shift from one domain to the other. To this end, we introduce a two-stream architecture, where one operates in the source domain and the other in the target domain. In contrast to other approaches, the weights in corresponding layers are related but not shared . We demonstrate that this both yields higher accuracy than state-of-the-art methods on several object recognition and detection tasks and consistently outperforms networks with shared weights in both supervised and unsupervised settings.", "Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state of the art results on standard datasets.", "Unlike human learning, machine learning often fails to handle changes between training (source) and test (target) input distributions. Such domain shifts, common in practical scenarios, severely damage the performance of conventional machine learning methods. Supervised domain adaptation methods have been proposed for the case when the target data have labels, including some that perform very well despite being \"frustratingly easy\" to implement. 
However, in practice, the target domain is often unlabeled, requiring unsupervised adaptation. We propose a simple, effective, and efficient method for unsupervised domain adaptation called CORrelation ALignment (CORAL). CORAL minimizes domain shift by aligning the second-order statistics of source and target distributions, without requiring any target labels. Even though it is extraordinarily simple–it can be implemented in four lines of Matlab code–CORAL performs remarkably well in extensive evaluations on standard benchmark datasets.", "", "In this work, we present a method for unsupervised domain adaptation. Many adversarial learning methods train domain classifier networks to distinguish the features as either a source or target and train a feature generator network to mimic the discriminator. Two problems exist with these methods. First, the domain classifier only tries to distinguish the features as a source or target and thus does not consider task-specific decision boundaries between classes. Therefore, a trained generator can generate ambiguous features near class boundaries. Second, these methods aim to completely match the feature distributions between different domains, which is difficult because of each domain's characteristics. To solve these problems, we introduce a new approach that attempts to align distributions of source and target by utilizing the task-specific decision boundaries. We propose to maximize the discrepancy between two classifiers' outputs to detect target samples that are far from the support of the source. A feature generator learns to generate target features near the support to minimize the discrepancy. Our method outperforms other methods on several datasets of image classification and semantic segmentation. The codes are available at https: github.com mil-tokyo MCD_DA", "In this paper we aim to answer questions based on images when provided with a dataset of question-answer pairs for a number of images during training. A number of methods have focused on solving this problem by using image based attention. This is done by focusing on a specific part of the image while answering the question. Humans also do so when solving this problem. However, the regions that the previous systems focus on are not correlated with the regions that humans focus on. The accuracy is limited due to this drawback. In this paper, we propose to solve this problem by using an exemplar based method. We obtain one or more supporting and opposing exemplars to obtain a differential attention region. This differential attention is closer to human attention than other image based attention methods. It also helps in obtaining improved accuracy when answering questions. The method is evaluated on challenging benchmark datasets. We perform better than other image based attention methods and are competitive with other state of the art methods that focus on both image and questions.", "In this chapter, we present CORrelation ALignment (CORAL), a simple yet effective method for unsupervised domain adaptation. CORAL minimizes domain shift by aligning the second-order statistics of source and target distributions, without requiring any target labels. In contrast to subspace manifold methods, it aligns the original feature distributions of the source and target domains, rather than the bases of lower-dimensional subspaces. It is also much simpler than other distribution matching methods. CORAL performs remarkably well in extensive evaluations on standard benchmark datasets. 
We first describe a solution that applies a linear transformation to source features to align them with target features before classifier training. For linear classifiers, we propose to equivalently apply CORAL to the classifier weights, leading to added efficiency when the number of classifiers is small but the number and dimensionality of target examples are very high. The resulting CORAL Linear Discriminant Analysis (CORAL-LDA) outperforms LDA by a large margin on standard domain adaptation benchmarks. Finally, we extend CORAL to learn a nonlinear transformation that aligns correlations of layer activations in deep neural networks (DNNs). The resulting Deep CORAL approach works seamlessly with DNNs and achieves state-of-the-art performance on standard benchmark datasets. Our code is available at: https: github.com VisionLearningGroup CORAL.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.", "Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multikernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.", "", "Deep neural networks are able to learn powerful representations from large quantities of labeled input data, however they cannot always generalize well across changes in input distributions. Domain adaptation algorithms have been proposed to compensate for the degradation in performance due to domain shift. In this paper, we address the case when the target domain is unlabeled, requiring unsupervised adaptation. CORAL [18] is a simple unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation. 
Here, we extend CORAL to learn a nonlinear transformation that aligns correlations of layer activations in deep neural networks (Deep CORAL). Experiments on standard benchmark datasets show state-of-the-art performance. Our code is available at: https: github.com VisionLearningGroup CORAL.", "" ] }
1907.10628
2963606129
Domain adaptation is essential to enable wide usage of deep learning based networks trained using large labeled datasets. Adversarial learning based techniques have shown their utility towards solving this problem using a discriminator that ensures source and target distributions are close. However, here we suggest that rather than using a point estimate, it would be useful if a distribution-based discriminator could be used to bridge this gap. This could be achieved using multiple classifiers or traditional ensemble methods. In contrast, we suggest that a Monte Carlo dropout based ensemble discriminator suffices to obtain the distribution-based discriminator. Specifically, we propose a curriculum-based dropout discriminator that gradually increases the variance of the sample-based distribution, and the corresponding reverse gradients are used to align the source and target feature representations. Detailed results and a thorough ablation analysis show that our model outperforms state-of-the-art methods.
In the domain adaptation setting, an adversarial network provides domain-invariant representations by making the source and target domains indistinguishable to the discriminator. Adversarial Discriminative Domain Adaptation @cite_26 uses an inverted-label GAN loss to split the optimization into two independent objectives. Another such method is the domain confusion based model proposed in @cite_25 , which optimizes a domain confusion objective. Domain-Adversarial Neural Networks (DANN) @cite_2 integrates a gradient reversal layer into the standard architecture to promote the emergence of learned representations that are discriminative for the main learning task on the source domain yet non-discriminative with respect to the shift between the domains. Recently, several works have adopted an adversarial discriminative approach to the domain adaptation problem @cite_15 @cite_12 @cite_0 @cite_46 @cite_3 @cite_13 @cite_51 @cite_18 . Similarly, the models proposed in @cite_44 @cite_49 exploit GANs with the aim of generating source-domain images that appear as if drawn from the target-domain distribution. The work most closely related to our approach is that of @cite_10 , which extends the gradient reversal method with a class-specific discriminator.
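The gradient reversal layer at the heart of DANN is compact enough to sketch. Below is a standard PyTorch rendering of the idea (the scheduling of the scaling factor lambd used during training is omitted).

import torch
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)  # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the feature
        # extractor, pushing it toward domain-indistinguishable features.
        return -ctx.lambd * grad_output, None

def grad_reverse(features, lambd=1.0):
    return GradReverse.apply(features, lambd)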
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_46", "@cite_10", "@cite_3", "@cite_0", "@cite_44", "@cite_49", "@cite_2", "@cite_51", "@cite_15", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "2815449720", "2593768305", "2962986791", "2788768841", "2798377719", "2584009249", "", "", "2963826681", "2948959975", "2767382337", "2798658180", "2214409633", "2767657961" ], "abstract": [ "", "Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.", "This paper proposes an importance weighted adversarial nets-based method for unsupervised domain adaptation, specific for partial domain adaptation where the target domain has less number of classes compared to the source domain. Previous domain adaptation methods generally assume the identical label spaces, such that reducing the distribution divergence leads to feasible knowledge transfer. However, such an assumption is no longer valid in a more realistic scenario that requires adaptation from a larger and more diverse source domain to a smaller target domain with less number of classes. This paper extends the adversarial nets-based domain adaptation and proposes a novel adversarial nets-based partial domain adaptation method to identify the source samples that are potentially from the outlier classes and, at the same time, reduce the shift of shared classes between domains.", "", "Unsupervised Domain Adaptation (UDA) aims to transfer domain knowledge from existing well-defined tasks to new ones where labels are unavailable. In the real-world applications, as the domain (task) discrepancies are usually uncontrollable, it is significantly motivated to match the feature distributions even if the domain discrepancies are disparate. Additionally, as no label is available in the target domain, how to successfully adapt the classifier from the source to the target domain still remains an open question. In this paper, we propose the Re-weighted Adversarial Adaptation Network (RAAN) to reduce the feature distribution divergence and adapt the classifier when domain discrepancies are disparate. 
Specifically, to alleviate the need of common supports in matching the feature distribution, we choose to minimize optimal transport (OT) based Earth-Mover (EM) distance and reformulate it to a minimax objective function. Utilizing this, RAAN can be trained in an end-to-end and adversarial manner. To further adapt the classifier, we propose to match the label distribution and embed it into the adversarial training. Finally, after extensive evaluation of our method using UDA datasets of varying difficulty, RAAN achieved the state-of-the-art results and outperformed other methods by a large margin when the domain shifts are disparate.", "Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains, or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.", "", "", "Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard back propagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.", "", "We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. 
This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.", "In this paper, we tackle the problem of domain generalization: how to learn a generalized feature representation for an \"unseen\" target domain by taking the advantage of multiple seen source-domain data. We present a novel framework based on adversarial autoencoders to learn a generalized latent feature representation across domains for domain generalization. To be specific, we extend adversarial autoencoders by imposing the Maximum Mean Discrepancy (MMD) measure to align the distributions among different domains, and matching the aligned distribution to an arbitrary prior distribution via adversarial feature learning. In this way, the learned feature representation is supposed to be universal to the seen source domains because of the MMD regularization, and is expected to generalize well on the target domain because of the introduction of the prior distribution. We proposed an algorithm to jointly train different components of our proposed framework. Extensive experiments on various vision tasks demonstrate that our proposed framework can learn better generalized features for the unseen target domain compared with state-of-the-art domain generalization methods.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.", "Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. 
Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains." ] }
1907.10695
2963916533
We present a method for recovering the dense 3D surface of the hand by regressing the vertex coordinates of a mesh model from a single depth map. To this end, we use a two-stage 2D fully convolutional network architecture. In the first stage, the network estimates a dense correspondence field, for every pixel on the depth map or image grid, to the mesh grid. In the second stage, we design a differentiable operator to map features learned in the previous stage and regress a 3D coordinate map on the mesh grid. Finally, we sample from the mesh grid to recover the mesh vertices and fit an articulated template mesh to them in closed form. During inference, the network can predict all the mesh vertices, transformation matrices for every joint, and the joint coordinates in a single forward pass. When given supervision on the sparse key-point coordinates, our method achieves state-of-the-art accuracy on the NYU dataset for key-point localization while recovering mesh vertices and a dense correspondence map. Our framework can also be learned through self-supervision by minimizing a set of data-fitting and kinematic prior terms. With a multi-camera rig during training to resolve self-occlusion, it can perform competitively with strongly supervised methods without any human annotation.
Deep learning has significantly advanced the state of the art for hand pose estimation. The general trend has been the development of ever deeper and more sophisticated neural network architectures @cite_13 @cite_37 @cite_54 @cite_62 @cite_49 @cite_5 @cite_55 . However, such progress has also hinged on the availability of large amounts of annotated data @cite_51 @cite_57 @cite_50 . Obtaining accurate annotations, even for simple 3D joint coordinates, is extremely difficult and time-consuming. Annotations generated by manually initializing trackers @cite_51 @cite_52 require carefully designed interfaces for 3D annotation on a 2D screen, and there is often little consensus among human annotators @cite_58 . Motion-capture rigs @cite_50 and auxiliary sensors @cite_57 are fully automatic but are limited in the scenes in which they can be deployed. To mitigate these annotation limitations, semi-supervised approaches @cite_7 @cite_56 @cite_10 and approaches coupling synthetic with real data @cite_0 @cite_59 @cite_31 have also been proposed.
{ "cite_N": [ "@cite_37", "@cite_62", "@cite_31", "@cite_7", "@cite_10", "@cite_54", "@cite_55", "@cite_52", "@cite_56", "@cite_57", "@cite_0", "@cite_49", "@cite_50", "@cite_51", "@cite_5", "@cite_59", "@cite_58", "@cite_13" ], "mid": [ "2963119249", "2896229066", "2962956488", "2963377353", "2964210253", "", "2963950354", "2963577185", "2892644985", "2606965392", "2963709863", "", "2963488642", "2075156252", "2799191197", "2901908207", "2214145768", "2750326862" ], "abstract": [ "DeepPrior [18] is a simple approach based on Deep Learning that predicts the joint 3D locations of a hand given a depth map. Since its publication early 2015, it has been outperformed by several impressive works. Here we show that with simple improvements: adding ResNet layers, data augmentation, and better initial hand localization, we achieve better or similar performance than more sophisticated recent methods on the three main benchmarks (NYU, ICVL, MSRA) while keeping the simplicity of the original method. Our new implementation is available at https: github.com moberweger deep-prior-pp.", "Convolutional Neural Networks (CNNs)-based methods for 3D hand pose estimation with depth cameras usually take 2D depth images as input and directly regress holistic 3D hand pose. Different from these methods, our proposed Point-to-Point Regression PointNet directly takes the 3D point cloud as input and outputs point-wise estimations, i.e., heat-maps and unit vector fields on the point cloud, representing the closeness and direction from every point in the point cloud to the hand joint. The point-wise estimations are used to infer 3D joint locations with weighted fusion. To better capture 3D spatial information in the point cloud, we apply a stacked network architecture for PointNet with intermediate supervision, which is trained end-to-end. Experiments show that our method can achieve outstanding results when compared with state-of-the-art methods on three challenging hand pose datasets.", "We propose a simple and efficient method for exploiting synthetic images when training a Deep Network to predict a 3D pose from an image. The ability of using synthetic images for training a Deep Network is extremely valuable as it is easy to create a virtually infinite training set made of such images, while capturing and annotating real images can be very cumbersome. However, synthetic images do not resemble real images exactly, and using them for training can result in suboptimal performance. It was recently shown that for exemplar-based approaches, it is possible to learn a mapping from the exemplar representations of real images to the exemplar representations of synthetic images. In this paper, we show that this approach is more general, and that a network can also be applied after the mapping to infer a 3D pose: At run-time, given a real image of the target object, we first compute the features for the image, map them to the feature space of synthetic images, and finally use the resulting features as input to another network which predicts the 3D pose. Since this network can be trained very effectively by using synthetic images, it performs very well in practice, and inference is faster and more accurate than with an exemplar-based approach. We demonstrate our approach on the LINEMOD dataset for 3D object pose estimation from color images, and the NYU dataset for 3D hand pose estimation from depth maps. 
We show that it allows us to outperform the state-of-the-art on both datasets.", "State-of-the-art methods for 3D hand pose estimation from depth images require large amounts of annotated training data. We propose modelling the statistical relationship of 3D hand poses and corresponding depth images using two deep generative models with a shared latent space. By design, our architecture allows for learning from unlabeled image data in a semi-supervised manner. Assuming a one-to-one mapping between a pose and a depth map, any given point in the shared latent space can be projected into both a hand pose and a corresponding depth map. Regressing the hand pose can then be done by learning a discriminator to estimate the posterior of the latent pose given some depth map. To prevent over-fitting and to better exploit unlabeled depth maps, the generator and discriminator are trained jointly. At each iteration, the generator is updated with the back-propagated gradient from the discriminator to synthesize realistic depth maps of the articulated hand, while the discriminator benefits from an augmented training set of synthesized samples and unlabeled depth maps. The proposed discriminator network architecture is highly efficient and runs at 90fps on the CPU with accuracies comparable to or better than the state of the art on 3 publicly available benchmarks.", "The labeled data required to learn pose estimation for articulated objects is difficult to provide in the desired quantity, realism, density, and accuracy. To address this issue, we develop a method to learn representations that are very specific to articulated poses, without the need for labeled training data. We exploit the observation that the pose of a known object is predictive of the appearance in any known view. That is, given only the pose and shape parameters of a hand, the hand's appearance from any viewpoint can be approximated. To exploit this observation, we train a model that - given input from one view - estimates a latent representation, which is trained to be predictive of the appearance of the object when captured from another viewpoint. Thus, the only necessary supervision is the second view. The training process of this model reveals an implicit pose representation in the latent space. Importantly, at test time the pose representation can be inferred using only a single view. In qualitative and quantitative experiments we show that the learned representations capture detailed pose information. Moreover, when training the proposed method jointly with labeled and unlabeled data, it consistently surpasses the performance of its fully supervised counterpart, while reducing the amount of needed labeled samples by at least one order of magnitude.", "", "We present a simple and effective method for 3D hand pose estimation from a single depth frame. As opposed to previous state-of-the-art methods based on holistic 3D regression, our method works on dense pixel-wise estimation. This is achieved by careful design choices in pose parameterization, which leverages both 2D and 3D properties of the depth map. Specifically, we decompose the pose parameters into a set of per-pixel estimations, i.e., 2D heat maps, 3D heat maps and unit 3D directional vector fields. The 2D/3D joint heat maps and 3D joint offsets are estimated via multitask network cascades, which are trained end-to-end. The pixel-wise estimations can be directly translated into a vote casting scheme.
A variant of mean shift is then used to aggregate local votes while enforcing consensus between the estimated 3D pose and the pixel-wise 2D and 3D estimations by design. Our method is efficient and highly accurate. On the MSRA and NYU hand datasets, our method outperforms all previous state-of-the-art approaches by a large margin. On the ICVL hand dataset, our method achieves similar accuracy compared to the nearly saturated result obtained by [5] and outperforms various other proposed methods. Code is available online.", "While many recent hand pose estimation methods critically rely on a training set of labelled frames, the creation of such a dataset is a challenging task that has been overlooked so far. As a result, existing datasets are limited to a few sequences and individuals, with limited accuracy, and this prevents these methods from delivering their full potential. We propose a semi-automated method for efficiently and accurately labeling each frame of a hand depth video with the corresponding 3D locations of the joints: The user is asked to provide only an estimate of the 2D reprojections of the visible joints in some reference frames, which are automatically selected to minimize the labeling work by efficiently optimizing a sub-modular loss function. We then exploit spatial, temporal, and appearance constraints to retrieve the full 3D poses of the hand over the complete sequence. We show that this data can be used to train a recent state-of-the-art hand pose estimation method, leading to increased accuracy.", "Compared with depth-based 3D hand pose estimation, it is more challenging to infer 3D hand pose from monocular RGB images, due to substantial depth ambiguity and the difficulty of obtaining fully-annotated training data. Different from existing learning-based monocular RGB-input approaches that require accurate 3D annotations for training, we propose to leverage the depth images that can be easily obtained from commodity RGB-D cameras during training, while during testing we take only RGB inputs for 3D joint predictions. In this way, we alleviate the burden of the costly 3D annotations in real-world datasets. Particularly, we propose a weakly-supervised method, adapting from a fully-annotated synthetic dataset to a weakly-labeled real-world dataset with the aid of a depth regularizer, which generates depth maps from predicted 3D pose and serves as weak supervision for 3D pose regression. Extensive experiments on benchmark datasets validate the effectiveness of the proposed depth regularizer in both weakly-supervised and fully-supervised settings.", "In this paper we introduce a large-scale hand pose dataset, collected using a novel capture method. Existing datasets are either generated synthetically or captured using depth sensors: synthetic datasets exhibit a certain level of appearance difference from real depth images, and real datasets are limited in quantity and coverage, mainly due to the difficulty of annotating them. We propose a tracking system with six 6D magnetic sensors and inverse kinematics to automatically obtain 21-joint hand pose annotations of depth maps captured with minimal restriction on the range of motion. The capture protocol aims to fully cover the natural hand pose space. As shown in embedding plots, the new dataset exhibits a significantly wider and denser range of hand poses compared to existing benchmarks. Current state-of-the-art methods are evaluated on the dataset, and we demonstrate significant improvements in cross-benchmark performance.
We also show significant improvements in egocentric hand pose estimation with a CNN trained on the new dataset.", "With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a self-regularization term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.", "", "We present an approach that uses a multi-camera system to train fine-grained detectors for keypoints that are prone to occlusion, such as the joints of a hand. We call this procedure multiview bootstrapping: first, an initial keypoint detector is used to produce noisy labels in multiple views of the hand. The noisy detections are then triangulated in 3D using multiview geometry or marked as outliers. Finally, the reprojected triangulations are used as new labeled training data to improve the detector. We repeat this process, generating more labeled data in each iteration. We derive a result analytically relating the minimum number of views to achieve target true and false positive rates for a given detector. The method is used to train a hand keypoint detector for single images. The resulting keypoint detector runs in realtime on RGB images and has accuracy comparable to methods that use depth sensors. The single view detector, triangulated over multiple views, enables 3D markerless hand motion capture with complex object interactions.", "We present a novel method for real-time continuous pose recovery of markerless complex articulable objects from a single depth image. Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction, and finally an inverse kinematics stage for stable real-time pose recovery. As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model.", "Convolutional Neural Networks (CNNs) have shown promising results for 3D hand pose estimation in depth images. Different from existing CNN-based hand pose estimation methods that take either 2D images or 3D volumes as the input, our proposed Hand PointNet directly processes the 3D point cloud that models the visible surface of the hand for pose regression.
Taking the normalized point cloud as the input, our proposed hand pose regression network is able to capture complex hand structures and accurately regress a low-dimensional representation of the 3D hand pose. In order to further improve the accuracy of fingertips, we design a fingertip refinement network that directly takes the neighboring points of the estimated fingertip location as input to refine the fingertip location. Experiments on three challenging hand pose datasets show that our proposed method outperforms state-of-the-art methods.", "Data labeling for learning 3D hand pose estimation models is a huge effort. Readily available, accurately labeled synthetic data has the potential to reduce the effort. However, to successfully exploit synthetic data, current state-of-the-art methods still require a large amount of labeled real data. In this work, we remove this requirement by learning to map from the features of real data to the features of synthetic data mainly using a large amount of synthetic and unlabeled real data. We exploit unlabeled data using two auxiliary objectives, which enforce that (i) the mapped representation is pose specific and (ii) at the same time, the distributions of real and synthetic data are aligned. While pose specificity is enforced by a self-supervisory signal requiring that the representation is predictive of the appearance from different views, distributions are aligned by an adversarial term. In this way, we can significantly improve the results of the baseline system, which does not use unlabeled data, and outperform many recent approaches already with about 1% of the labeled real data. This presents a step towards faster deployment of learning-based hand pose estimation, making it accessible for a larger range of applications.", "Hand pose estimation has matured rapidly in recent years. The introduction of commodity depth sensors and a multitude of practical applications have spurred new advances. We provide an extensive analysis of the state-of-the-art, focusing on hand pose estimation from a single depth frame. To do so, we have implemented a considerable number of systems, and will release all software and evaluation code. We summarize important conclusions here: (1) Pose estimation appears roughly solved for scenes with isolated hands. However, methods still struggle to analyze cluttered scenes where hands may be interacting with nearby objects and surfaces. To spur further progress we introduce a challenging new dataset with diverse, cluttered scenes. (2) Many methods evaluate themselves with disparate criteria, making comparisons difficult. We define a consistent evaluation criterion, rigorously motivated by human experiments. (3) We introduce a simple nearest-neighbor baseline that outperforms most existing systems. This implies that most systems do not generalize beyond their training sets. This also reinforces the under-appreciated point that training data is as important as the model itself. We conclude with directions for future progress.", "Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named pose guided structured region ensemble network (Pose-REN) to boost the performance of hand pose estimation.
Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of the convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connected layers to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms." ] }
1907.10695
2963916533
We present a method for recovering the dense 3D surface of the hand by regressing the vertex coordinates of a mesh model from a single depth map. To this end, we use a two-stage 2D fully convolutional network architecture. In the first stage, the network estimates a dense correspondence field for every pixel on the depth map or image grid to the mesh grid. In the second stage, we design a differentiable operator to map features learned from the previous stage and regress a 3D coordinate map on the mesh grid. Finally, we sample from the mesh grid to recover the mesh vertices, and fit it to an articulated template mesh in closed form. During inference, the network can predict all the mesh vertices, transformation matrices for every joint and the joint coordinates in a single forward pass. When given supervision on the sparse key-point coordinates, our method achieves state-of-the-art accuracy on the NYU dataset for key-point localization while recovering mesh vertices and a dense correspondence map. Our framework can also be learned through self-supervision by minimizing a set of data-fitting and kinematic prior terms. With a multi-camera rig during training to resolve self-occlusion, it can perform competitively with strongly supervised methods without any human annotation.
An alternative line of work @cite_29 @cite_53 @cite_60 @cite_63 @cite_9 @cite_64 @cite_45 @cite_14 tackles hand pose estimation by minimizing a model-fitting error. Model fitting requires little to no human labeling, but the accuracy depends heavily on the careful design of the energy function. A recent trend tries to bridge the gap between data-driven and model-fitting approaches @cite_26 @cite_15 @cite_41 by using a differentiable renderer and incorporating the model-fitting error as a part of the training loss; a sketch of such a loss follows. Our work resembles these methods, though with two key differences. First, we re-parameterize the mesh with a 2D embedding, which allows us to use a 2D fully convolutional network architecture. Second, we can apply self-supervision on both the image grid and the mesh grid, leading to efficient gradient flow during back-propagation.
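To make the model-fitting style of supervision concrete, here is a minimal PyTorch sketch of a self-supervised objective with a data-fitting term and a kinematic prior, in the spirit of the loss terms described above. The one-sided Chamfer term, the joint-angle limits, and all tensor sizes (e.g., a 778-vertex mesh, as in MANO) are illustrative assumptions rather than the exact energy used in any cited work.

import torch

def data_fitting_term(vertices, points):
    # One-sided Chamfer distance: each observed depth point is pulled
    # toward its nearest predicted mesh vertex.
    d = torch.cdist(points, vertices)       # (P, V) pairwise distances
    return d.min(dim=1).values.mean()

def kinematic_prior(joint_angles, lo, hi):
    # Quadratic penalty on joint angles outside plausible limits.
    violation = torch.relu(lo - joint_angles) + torch.relu(joint_angles - hi)
    return violation.pow(2).mean()

vertices = torch.rand(778, 3, requires_grad=True)   # predicted mesh vertices
points = torch.rand(2048, 3)                        # observed depth points
angles = torch.zeros(20, requires_grad=True)        # predicted joint angles
lo, hi = -0.5 * torch.ones(20), 1.5 * torch.ones(20)

loss = data_fitting_term(vertices, points) + 0.01 * kinematic_prior(angles, lo, hi)
loss.backward()   # gradients flow back to whatever network predicted them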
{ "cite_N": [ "@cite_64", "@cite_14", "@cite_26", "@cite_60", "@cite_41", "@cite_29", "@cite_9", "@cite_53", "@cite_45", "@cite_63", "@cite_15" ], "mid": [ "2423984454", "2768466711", "", "", "2964093990", "1995713470", "2218414108", "1990947293", "2520346623", "", "" ], "abstract": [ "We present a fast, practical method for personalizing a hand shape basis to an individual user's detailed hand shape using only a small set of depth images. To achieve this, we minimize an energy based on a sum of render-and-compare cost functions called the golden energy. However, this energy is only piecewise continuous, due to pixels crossing occlusion boundaries, and is therefore not obviously amenable to efficient gradient-based optimization. A key insight is that the energy is the combination of a smooth low-frequency function with a high-frequency, low-amplitude, piecewisecontinuous function. A central finite difference approximation with a suitable step size can therefore jump over the discontinuities to obtain a good approximation to the energy's low-frequency behavior, allowing efficient gradient-based optimization. Experimental results quantitatively demonstrate for the first time that detailed personalized models improve the accuracy of hand tracking and achieve competitive results in both tracking and model registration.", "The state of the art in articulated hand tracking has been greatly advanced by hybrid methods that fit a generative hand model to depth data, leveraging both temporally and discriminatively predicted starting poses. In this paradigm, the generative model is used to define an energy function and a local iterative optimization is performed from these starting poses in order to find a \"good local minimum\" (i.e. a local minimum close to the true pose). Performing this optimization quickly is key to exploring more starting poses, performing more iterations and, crucially, exploiting high frame rates that ensure that temporally predicted starting poses are in the basin of convergence of a good local minimum. At the same time, a detailed and accurate generative model tends to deepen the good local minima and widen their basins of convergence. Recent work, however, has largely had to trade-off such a detailed hand model with one that facilitates such rapid optimization. We present a new implicit model of hand geometry that mostly avoids this compromise and leverage it to build an ultra-fast hybrid hand tracking system. Specifically, we construct an articulated signed distance function that, for any pose, yields a closed form calculation of both the distance to the detailed surface geometry and the necessary derivatives to perform gradient based optimization. There is no need to introduce or update any explicit \"correspondences\" yielding a simple algorithm that maps well to parallel hardware such as GPUs. As a result, our system can run at extremely high frame rates (e.g. up to 1000fps). Furthermore, we demonstrate how to detect, segment and optimize for two strongly interacting hands, recovering complex interactions at extremely high framerates. In the absence of publicly available datasets of sufficiently high frame rate, we leverage a multiview capture system to create a new 180fps dataset of one and two hands interacting together or with objects.", "", "", "", "This paper presents a method for acquiring dense nonrigid shape and deformation from a single monocular depth sensor. We focus on modeling the human hand, and assume that a single rough template model is available. 
We combine and extend existing work on model-based tracking, subdivision surface fitting, and mesh deformation to acquire detailed hand models from as few as 15 frames of depth data. We propose an objective that measures the error of fit between each sampled data point and a continuous model surface defined by a rigged control mesh, and uses as-rigid-as-possible (ARAP) regularizers to cleanly separate the model and template geometries. A key contribution is our use of a smooth model based on subdivision surfaces that allows simultaneous optimization over both correspondences and model parameters. This avoids the use of iterated closest point (ICP) algorithms which often lead to slow convergence. Automatic initialization is obtained using a regression forest trained to infer approximate correspondences. Experiments show that the resulting meshes model the user's hand shape more accurately than just adapting the shape parameters of the skeleton, and that the retargeted skeleton accurately models the user's articulations. We investigate the effect of various modeling choices, and show the benefits of using subdivision surfaces and ARAP regularization.", "We address the problem of hand pose estimation, formulated as an inverse problem. Typical approaches optimize an energy function over pose parameters using a 'black box' image generation procedure. This procedure knows little about either the relationships between the parameters or the form of the energy function. In this paper, we show that we can significantly improve upon black box optimization by exploiting high-level knowledge of the structure of the parameters and using a local surrogate energy function. Our new framework, hierarchical sampling optimization, consists of a sequence of predictors organized into a kinematic hierarchy. Each predictor is conditioned on its ancestors, and generates a set of samples over a subset of the pose parameters. The highly-efficient surrogate energy is used to select among samples. Having evaluated the full hierarchy, the partial pose samples are concatenated to generate a full-pose hypothesis. Several hypotheses are generated using the same procedure, and finally the original full energy function selects the best result. Experimental evaluation on three publicly available datasets shows that our method is particularly impressive in low-compute scenarios where it significantly outperforms all other state-of-the-art methods.", "We present a realtime hand tracking system using a depth sensor. It tracks a fully articulated hand under large viewpoints in realtime (25 FPS on a desktop without using a GPU) and with high accuracy (error below 10 mm). To our knowledge, it is the first system that achieves such robustness, accuracy, and speed simultaneously, as verified on challenging real data. Our system is made of several novel techniques. We model a hand simply using a number of spheres and define a fast cost function. Those are critical for realtime performance. We propose a hybrid method that combines gradient based and stochastic optimization methods to achieve fast convergence and good accuracy. We present new finger detection and hand initialization methods that greatly enhance the robustness of tracking.", "Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing.
However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps which make real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness.", "", "" ] }
1907.10695
2963916533
We present a method for recovering the dense 3D surface of the hand by regressing the vertex coordinates of a mesh model from a single depth map. To this end, we use a two-stage 2D fully convolutional network architecture. In the first stage, the network estimates a dense correspondence field for every pixel on the depth map or image grid to the mesh grid. In the second stage, we design a differentiable operator to map features learned from the previous stage and regress a 3D coordinate map on the mesh grid. Finally, we sample from the mesh grid to recover the mesh vertices, and fit it to an articulated template mesh in closed form. During inference, the network can predict all the mesh vertices, transformation matrices for every joint and the joint coordinates in a single forward pass. When given supervision on the sparse key-point coordinates, our method achieves state-of-the-art accuracy on the NYU dataset for key-point localization while recovering mesh vertices and a dense correspondence map. Our framework can also be learned through self-supervision by minimizing a set of data-fitting and kinematic prior terms. With a multi-camera rig during training to resolve self-occlusion, it can perform competitively with strongly supervised methods without any human annotation.
It is highly intuitive to parameterize 3D inputs and/or outputs as an occupancy grid or distance field and use, for example, a 3D voxel network @cite_46 @cite_42 @cite_49 . However, such an architecture is parameter-heavy and severely limited in spatial resolution. PointNet @cite_18 is a light-weight alternative: while it can interpret 3D inputs as a set of unordered points, it also largely ignores spatial context, which may be important downstream. The sketch below contrasts the two representations.
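The contrast can be made concrete with a few lines of PyTorch: a voxelization step whose memory grows cubically with resolution, versus a PointNet-style shared MLP with symmetric max-pooling that is order-invariant but discards most local spatial context. Grid resolution and layer widths are illustrative choices.

import torch
import torch.nn as nn

def voxelize(points, res=32):
    # Points in [0, 1)^3 -> binary occupancy grid; memory grows as res**3,
    # which is the spatial-resolution limit noted above.
    grid = torch.zeros(res, res, res)
    idx = (points.clamp(0, 1 - 1e-6) * res).long()
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

class TinyPointNet(nn.Module):
    # Shared per-point MLP followed by a symmetric max-pool: invariant to
    # point ordering, but the pooling ignores local neighborhoods.
    def __init__(self, out_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, points):                       # points: (N, 3), unordered
        return self.mlp(points).max(dim=0).values    # (out_dim,) global feature

pts = torch.rand(1024, 3)
occ = voxelize(pts)                  # dense volumetric input
feat = TinyPointNet()(pts)           # light-weight set-based feature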
{ "cite_N": [ "@cite_46", "@cite_42", "@cite_18", "@cite_49" ], "mid": [ "2737305288", "2797515701", "2950642167", "" ], "abstract": [ "We propose a simple, yet effective approach for real-time hand pose estimation from single depth images using three-dimensional Convolutional Neural Networks (3D CNNs). Image based features extracted by 2D CNNs are not directly suitable for 3D hand pose estimation due to the lack of 3D spatial information. Our proposed 3D CNN taking a 3D volumetric representation of the hand depth image as input can capture the 3D spatial structure of the input and accurately regress full 3D hand pose in a single pass. In order to make the 3D CNN robust to variations in hand sizes and global orientations, we perform 3D data augmentation on the training data. Experiments show that our proposed 3D CNN based approach outperforms state-of-the-art methods on two challenging hand pose datasets, and is very efficient as our implementation runs at over 215 fps on a standard computer with a single GPU.", "Human shape estimation is an important task for video editing , animation and fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them results in performance improvement as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric body-part segmentation.", "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds and well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.", "" ] }
1907.10695
2963916533
We present a method for recovering the dense 3D surface of the hand by regressing the vertex coordinates of a mesh model from a single depth map. To this end, we use a two-stage 2D fully convolutional network architecture. In the first stage, the network estimates a dense correspondence field for every pixel on the depth map or image grid to the mesh grid. In the second stage, we design a differentiable operator to map features learned from the previous stage and regress a 3D coordinate map on the mesh grid. Finally, we sample from the mesh grid to recover the mesh vertices, and fit it to an articulated template mesh in closed form. During inference, the network can predict all the mesh vertices, transformation matrices for every joint and the joint coordinates in a single forward pass. When given supervision on the sparse key-point coordinates, our method achieves state-of-the-art accuracy on the NYU dataset for key-point localization while recovering mesh vertices and a dense correspondence map. Our framework can also be learned through self-supervision by minimizing a set of data-fitting and kinematic prior terms. With a multi-camera rig during training to resolve self-occlusion, it can perform competitively with strongly supervised methods without any human annotation.
Since captured 3D inputs are inherently object surfaces, it is natural to consider them as a 2D embedding in 3D Euclidean space. As such, several works @cite_12 @cite_19 @cite_24 have modeled mesh surfaces as a graph and have applied graph network architectures to capture intrinsic and extrinsic geometric properties of the mesh; a minimal example of such a graph convolution follows. Our method also works on the hand surface, but it uses a much simpler and more flexible network architecture that is easier to train and can handle different mesh topologies. Our method most resembles @cite_3 @cite_17 in mapping high-dimensional data to a 2D grid. However, instead of just working on points from the depth map, we propose a dual-grid network architecture, enabling the mapping of heterogeneous data from Euclidean space to mesh surfaces and vice versa.
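For illustration, a single mesh graph-convolution step of the kind such graph-based methods stack can be written in a few lines of PyTorch: each vertex feature is combined with the mean of its neighbors' features, with the adjacency taken from the mesh edges. The feature sizes and the toy four-vertex ring are assumptions for the example only.

import torch
import torch.nn as nn

class MeshGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (V, in_dim) vertex features; adj: (V, V) 0/1 mesh adjacency.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh_mean = (adj @ x) / deg        # average over neighboring vertices
        return torch.relu(self.w_self(x) + self.w_neigh(neigh_mean))

# Toy mesh: four vertices connected in a ring.
adj = torch.tensor([[0., 1., 0., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [1., 0., 1., 0.]])
x = torch.rand(4, 3)                        # e.g., vertex coordinates as features
out = MeshGraphConv(3, 16)(x, adj)          # (4, 16) updated vertex features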
{ "cite_N": [ "@cite_3", "@cite_24", "@cite_19", "@cite_12", "@cite_17" ], "mid": [ "2788158258", "2883221003", "", "2964321699", "2964253930" ], "abstract": [ "We present a network architecture for processing point clouds that directly operates on a collection of points represented as a sparse set of samples in a high-dimensional lattice. NaA¯vely applying convolutions on this lattice scales poorly, both in terms of memory and computational cost, as the size of the lattice increases. Instead, our network uses sparse bilateral convolutional layers as building blocks. These layers maintain efficiency by using indexing structures to apply convolutions only on occupied parts of the lattice, and allow flexible specifications of the lattice structure enabling hierarchical and spatially-aware feature learning, as well as joint 2D-3D reasoning. Both point-based and image-based representations can be easily incorporated in a network with such layers and the resulting model can be trained in an end-to-end manner. We present results on 3D segmentation tasks where our approach outperforms existing state-of-the-art techniques.", "Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they can not capture extreme deformations and non-linear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation that captures non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite limited training data, our trained model outperforms state-of-the-art face models with 50 lower reconstruction error, while using 75 fewer parameters. We show that, replacing the expression space of an existing state-of-the-art face model with our model, achieves a lower reconstruction error. Our data, model and code are available at http: coma.is.tue.mpg.de .", "", "In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.", "This paper presents Point Convolutional Neural Networks (PCNN): a novel framework for applying convolutional neural networks to point clouds." ] }
1907.10781
2963008717
Nowadays, we are surrounded by more and more online news articles. Tens or hundreds of news articles need to be read if we wish to explore a hot news event or topic. So it is of vital importance to automatically synthesize a batch of news articles related to the event or topic into a new synthesis article (or overview article) for readers' convenience. Making news synthesis fully automatic is so challenging that no successful solution exists to date. In this paper, we put forward a novel Interactive News Synthesis system (i.e., INS), which can help generate news overview articles automatically or by interacting with users. More importantly, INS can serve as a tool for editors to help them finish their jobs. In our experiments, INS performs well on both topic representation and synthesis article generation. A user study also demonstrates the usefulness of and users' satisfaction with the INS tool. A demo video is available at this https URL .
One of the related fields is document summarization. Existing methods can be divided into extractive methods @cite_0 @cite_6 @cite_7 @cite_2 @cite_5 @cite_17 @cite_19 @cite_16 and abstractive methods @cite_13 @cite_14 @cite_10 ; a minimal sketch of the extractive idea follows.
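As a concrete illustration of the extractive family, the following Python sketch performs greedy sentence selection under a word budget with a coverage-gain-per-cost criterion, echoing the budgeted submodular maximization view of summarization cited above (@cite_6). The unique-word coverage objective and the toy documents are simplifying assumptions chosen for brevity.

def greedy_extract(sentences, budget):
    # sentences: list of strings; budget: maximum total words in the summary.
    covered, chosen, length = set(), [], 0
    while True:
        best, best_gain = None, 0.0
        for s in sentences:
            if s in chosen:
                continue
            cost = len(s.split())
            if cost == 0 or length + cost > budget:
                continue
            gain = len(set(s.lower().split()) - covered) / cost
            if gain > best_gain:           # coverage gain per word spent
                best, best_gain = s, gain
        if best is None:
            break
        chosen.append(best)
        covered |= set(best.lower().split())
        length += len(best.split())
    return chosen

docs = ["News synthesis merges many articles.",
        "Many articles cover the same event.",
        "Readers want one overview article."]
print(greedy_extract(docs, budget=12))   # picks two non-redundant sentences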
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_10", "@cite_6", "@cite_0", "@cite_19", "@cite_2", "@cite_5", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2341401723", "2122311631", "2741375528", "1962684803", "2150869743", "2574535369", "2138952379", "1959120443", "2735674392", "1843891098", "2072050228" ], "abstract": [ "In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling key-words, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research.", "We learn a joint model of sentence extraction and compression for multi-document summarization. Our model scores candidate summaries according to a combined linear model whose features factor over (1) the n-gram types in the summary and (2) the compressions used. We train the model using a margin-based objective whose loss captures end summary quality. Because of the exponentially large set of candidate summaries, we use a cutting-plane algorithm to incrementally detect and add active constraints efficiently. Inference in our model can be cast as an ILP and thereby solved in reasonable time; we also present a fast approximation scheme which achieves similar performance. Our jointly extracted and compressed summaries outperform both unlearned baselines and our learned extraction-only system on both ROUGE and Pyramid, without a drop in judged linguistic quality. We achieve the highest published ROUGE results to date on the TAC 2008 data set.", "", "We treat the text summarization problem as maximizing a submodular function under a budget constraint. We show, both theoretically and empirically, a modified greedy algorithm can efficiently solve the budgeted submodular maximization problem near-optimally, and we derive new approximation bounds in doing so. Experiments on DUC'04 task show that our approach is superior to the best-performing method from the DUC'04 evaluation on ROUGE-1 scores.", "We present an Integer Linear Program for exact inference under a maximum coverage model for automatic summarization. We compare our model, which operates at the sub-sentence or \"concept-level, to a sentence-level model, previously solved with an ILP. Our model scales more efficiently to larger problems because it does not require a quadratic number of variables to address redundancy in pairs of selected sentences. We also show how to include sentence compression in the ILP formulation, which has the desirable property of performing compression and sentence selection simultaneously. The resulting system performs at least as well as the best systems participating in the recent Text Analysis Conference, as judged by a variety of automatic and manual content-based metrics.", "", "In this paper, we present a supervised learning approach to training submodular scoring functions for extractive multidocument summarization. By taking a structured prediction approach, we provide a large-margin method that directly optimizes a convex relaxation of the desired performance measure. 
The learning method applies to all submodular summarization methods, and we demonstrate its effectiveness for both pairwise and coverage-based scoring functions on multiple datasets. Compared to state-of-the-art functions that were tuned manually, our method significantly improves performance and enables high-fidelity models with numbers of parameters well beyond what could reasonably be tuned by hand.", "Multi-document summarization involves many aspects of content selection and surface realization. The summaries must be informative, succinct, grammatical, and obey stylistic writing conventions. We present a method where such individual aspects are learned separately from data (without any hand-engineering) but optimized jointly using an integer linear programme. The ILP framework allows us to combine the decisions of the expert learners and to select and rewrite source content through a mixture of objective setting, soft and hard constraints. Experimental results on the TAC-08 data set show that our model achieves state-of-the-art performance using ROUGE and significantly improves the informativeness of the summaries.", "As a framework for extractive summarization, sentence regression has achieved state-of-the-art performance in several widely-used practical systems. The most challenging task within the sentence regression framework is to identify discriminative features to encode a sentence into a feature vector. So far, sentence regression approaches have neglected to use features that capture contextual relations among sentences. We propose a neural network model, Contextual Relation-based Summarization (CRSum), to take advantage of contextual relations among sentences so as to improve the performance of sentence regression. Specifically, we first use sentence relations with a word-level attentive pooling convolutional neural network to construct sentence representations. Then, we use contextual relations with a sentence-level attentive pooling recurrent neural network to construct context representations. Finally, CRSum automatically learns useful contextual features by jointly learning representations of sentences and similarity scores between a sentence and sentences in its context. Using a two-level attention mechanism, CRSum is able to pay attention to important content, i.e., words and sentences, in the surrounding context of a given sentence. We carry out extensive experiments on six benchmark datasets. CRSum alone can achieve comparable performance with state-of-the-art approaches; when combined with a few basic surface features, it significantly outperforms the state-of-the-art in terms of multiple ROUGE metrics.", "Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.", "People often read summaries of news articles in order to get reliable information about an event or a topic. However, the information expressed in news articles is not always certain, and some sentences contain uncertain information about the event.
Existing summarization systems do not consider whether a sentence in news articles is certain or not. In this paper, we propose a novel system called CTSUM to incorporate the new factor of information certainty into the summarization task. We first analyze the sentences in news articles and automatically predict the certainty levels of sentences by using the support vector regression method with a few useful features. The predicted certainty scores are then incorporated into a summarization system with a graph-based ranking algorithm. Experimental results on a manually labeled dataset verify the effectiveness of the sentence certainty prediction technique, and experimental results on the DUC2007 dataset show that our new summarization system can not only produce summaries with better content quality, but also produce summaries with higher certainty." ] }
1907.10781
2963008717
Nowadays, we are surrounded by more and more online news articles. Tens or hundreds of news articles need to be read if we wish to explore a hot news event or topic. So it is of vital importance to automatically synthesize a batch of news articles related to the event or topic into a new synthesis article (or overview article) for readers' convenience. Making news synthesis fully automatic is so challenging that no successful solution exists to date. In this paper, we put forward a novel Interactive News Synthesis system (i.e., INS), which can help generate news overview articles automatically or by interacting with users. More importantly, INS can serve as a tool for editors to help them finish their jobs. In our experiments, INS performs well on both topic representation and synthesis article generation. A user study also demonstrates the usefulness of and users' satisfaction with the INS tool. A demo video is available at this https URL .
There are several pilot studies on producing long articles from a batch of news articles or web pages @cite_4 @cite_12 @cite_15 . However, the generated overview articles are not well structured, and these systems offer no interactive functions.
{ "cite_N": [ "@cite_15", "@cite_4", "@cite_12" ], "mid": [ "2787214294", "1995067232", "2771080244" ], "abstract": [ "We show that generating English Wikipedia articles can be approached as a multi- document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder- decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.", "This paper proposes a general framework, named Autopedia, to generate high-quality wikipedia articles for given concepts in any domains, by automatically selecting the best wikipedia template consisting the sub-topics to organize the article for the input concept. Experimental results on 4,526 concepts validate the effectiveness of Autopedia, and the wikipedia template selection approach which takes into account both the template quality and the semantic relatedness between the input concept and its sibling concepts, performs the best.", "" ] }
1907.10781
2963008717
Nowadays, we are surrounded by more and more online news articles. Tens or hundreds of news articles need to be read if we wish to explore a hot news event or topic. So it is of vital importance to automatically synthesize a batch of news articles related to the event or topic into a new synthesis article (or overview article) for readers' convenience. Making news synthesis fully automatic is so challenging that no successful solution exists to date. In this paper, we put forward a novel Interactive News Synthesis system (i.e., INS), which can help generate news overview articles automatically or by interacting with users. More importantly, INS can serve as a tool for editors to help them finish their jobs. In our experiments, INS performs well on both topic representation and synthesis article generation. A user study also demonstrates the usefulness of and users' satisfaction with the INS tool. A demo video is available at this https URL .
There have been some attempts to add interactive functions to traditional document summarization tasks @cite_3 @cite_11 . However, this work focuses on producing short summaries, and generating long news overview articles is more challenging. Moreover, in this work, the keyphrases used to represent salient information are extracted based on heuristic rules or simple clues, and they are usually not good subtopic representations; a sketch of such a simple-clue baseline follows.
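To show what such a 'simple clue' baseline looks like, here is a short Python sketch that ranks unigrams by TF-IDF across an article batch; its flat word list is exactly the kind of weak subtopic representation the paragraph criticizes. The scoring scheme and the toy documents are illustrative assumptions.

from collections import Counter
import math

def tfidf_keyphrases(docs, k=3):
    tokenized = [d.lower().split() for d in docs]
    df = Counter(w for doc in tokenized for w in set(doc))   # document frequency
    n = len(docs)
    scores = Counter()
    for doc in tokenized:
        tf = Counter(doc)
        for w, c in tf.items():
            # Words appearing in every document get log(1) = 0, i.e., filtered out.
            scores[w] = max(scores[w], c * math.log(n / df[w]))
    return [w for w, _ in scores.most_common(k)]

docs = ["the storm hit the coast",
        "rescue teams reached the coast",
        "the storm weakened overnight"]
print(tfidf_keyphrases(docs))   # frequent-but-distinctive unigrams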
{ "cite_N": [ "@cite_3", "@cite_11" ], "mid": [ "2120704803", "2110983154" ], "abstract": [ "This paper describes the Interactive Document Summariser (IDS). IDS provides dynamic control over document summary characteristics, such as length and topic focus, so that changes made by the user are instantly reflected in an on-screen summary. 'Summary-in-context' views allow users to move flexibly between summaries and their source documents. IDS adopts the technique of sentence extraction, exploiting keyphrases that are automatically extracted from document text as the primary attribute of a sentence extraction algorithm. We report an evaluation of IDS summaries, in which representative end-users of on-line documents identified relevant summary sentences in source documents. IDS summaries were then compared to the recommendations of the users and we report the efficacy of the summaries based on standard precision and recall measures. In addition, using established evaluation metrics we found that IDS summaries were better than baseline summaries based on within-document sentence ordering.", "We describe iNeATS -- an interactive multi-document summarization system that integrates a state-of-the-art summarization engine with an advanced user interface. Three main goals of the system are: (1) provide a user with control over the summarization process, (2) support exploration of the document set with the summary as the staring point, and (3) combine text summaries with alternative presentations such as a map-based visualization of documents." ] }
1907.10700
2963909142
We introduce a system and methods for the three-dimensional measurement of extended specular surfaces with high surface normal variations. Our system consists only of a mobile hand-held device and exploits the screen and front camera for Deflectometry-based surface measurements. We demonstrate high-quality measurements without the need for an offline calibration procedure. In addition, we develop a multi-view technique to compensate for the small screen of a mobile device so that large surfaces can be densely reconstructed in their entirety. This work is a first step towards developing a self-calibrating Deflectometry procedure capable of taking 3D surface measurements of specular objects in the wild, accessible to users with little to no technical imaging experience.
The authors of @cite_14 used the reflection of color-coded circles observed by multiple cameras (which also resolves the bas-relief ambiguity). In other works, the authors utilized self-illuminated screens with patterns such as stripes @cite_2 , multiple lines @cite_0 , or even a light field created from two stacked LED screens @cite_11 . 'Screenless' methods, such as @cite_7 @cite_24 , analyze environment illumination or track prominent features (e.g., straight lines) in the environment to obtain information about the slope of specular surfaces; the sketch below illustrates the underlying reflection geometry.
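The geometric idea these Deflectometry setups share can be sketched numerically: if a surface point reflects a known screen pixel (or environment feature) into the camera, the law of reflection forces the surface normal to bisect the directions toward the camera and toward that pixel, which is what yields per-pixel slope information. The positions below are made-up example values.

import numpy as np

def normal_from_reflection(surface_pt, camera_pt, screen_pt):
    v = camera_pt - surface_pt          # direction toward the camera
    s = screen_pt - surface_pt          # direction toward the known screen pixel
    v, s = v / np.linalg.norm(v), s / np.linalg.norm(s)
    n = v + s                           # half-vector between the two directions
    return n / np.linalg.norm(n)

n = normal_from_reflection(np.array([0.0, 0.0, 0.0]),
                           np.array([0.0, 0.2, 1.0]),
                           np.array([0.0, -0.2, 1.0]))
print(n)   # ~[0, 0, 1]: a flat mirror's normal in this symmetric geometry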
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_0", "@cite_24", "@cite_2", "@cite_11" ], "mid": [ "2124958603", "2179918789", "2170554555", "", "2010544530", "2438151519" ], "abstract": [ "We present an novel algorithm that reconstructs voxels of a general 3D specular surface from multiple images of a calibrated camera. A calibrated scene (i.e. points whose 3D coordinates are known) is reflected by the unknown specular surface onto the image plane of the camera. For every viewpoint, surface normals are associated to the voxels traversed by each projection ray formed by the reflection of a scene point. A decision process then discards voxels whose associated surface normals are not consistent with one another. The output of the algorithm is a collection of voxels and surface normals in 3D space, whose quality and size depend on user-set thresholds. The method has been tested on synthetic and real images. Visual and quantified experimental results are presented.", "Reconstructing the surface of highly specular objects is a challenging task. The shapes of diffuse and rough specular objects can be captured in an uncontrolled setting using consumer equipment. In contrast, highly specular objects have previously deterred capture in uncontrolled environments and have only been reconstructed using tailor-made hardware. We propose a method to reconstruct such objects in uncontrolled environments using only commodity hardware. As input, our method expects multi-view photographs of the specular object, its silhouettes and an environment map of its surroundings. We compare the reflected colors in the photographs with the ones in the environment to form probability distributions over the surface normals. As the effect of inter-reflections cannot be ignored for highly specular objects, we explicitly model them when forming the probability distributions. We recover the shape of the object in an iterative process where we alternate between estimating normals and updating the shape of the object to better explain these normals. We run experiments on both synthetic and real-world data, that show our method is robust and produces accurate reconstructions with as few as 25 input photographs.", "We present a new shape-from-distortion framework for recovering specular (reflective refractive) surfaces. While most existing approaches rely on accurate correspondences between 2D pixels and 3D points, we focus on analyzing the curved images of 3D lines which we call curved line images or CLIs. Our approach models CLIs of local reflections or refractions using the recently proposed general linear cameras (GLCs). We first characterize all possible CLIs in a GLC. We show that a 3D line will appear as a conic in any GLC. For a fixed GLC, the conic type is invariant to the position and orientation of the line and is determined by the GLC parameters. Furthermore, CLIs under single reflection refraction can only be lines or hyperbolas. Based on our new theory, we develop efficient algorithms to use multiple CLIs to recover the GLC camera parameters. We then apply the curvature-GLC theory to derive the Gaussian and mean curvatures from the GLC intrinsics. This leads to a complete distortion-based reconstruction framework. Unlike conventional correspondence-based approaches that are sensitive to image distortions, our approach benefits from the CLI distortions. 
Finally, we demonstrate applying our framework for recovering curvature fields on both synthetic and real specular surfaces.", "", "Objects with mirroring optical characteristics are left out of the scope of most 3D scanning methods. We present here a new automatic acquisition approach, shape-from-distortion, that focuses on that category of objects, requires only a still camera and a color monitor, and produces range scans (plus a normal and a reflectance map) of the target. Our technique consists of two steps: first, an improved environment matte is captured for the mirroring object, using the interference of patterns with different frequencies to obtain sub-pixel accuracy. Then, the matte is converted into a normal and a depth map by exploiting the self-coherence of a surface when integrating the normal map along different paths. The results show very high accuracy, capturing even the smallest surface details. The acquired depth maps can be further processed using standard techniques to produce a complete 3D mesh of the object.", "Mirror-type specular objects are difficult to reconstruct: they do not possess their own appearance and the reflections from the environment are view-dependent. In this paper, we present a novel computational imaging solution for reconstructing the mirror-type specular objects. Specifically, we adopt a two-layer liquid crystal display (LCD) setup to encode the illumination directions. We devise an efficient ray coding scheme by only considering the useful rays. To recover the mirror-type surface, we derive a normal integration scheme under the perspective camera model. Since the resulting surface is determined up to a scale, we develop a single view approach to resolve the scale ambiguity. To acquire the object surface as completely as possible, we further develop a multiple-surface fusion algorithm to combine the surfaces recovered from different viewpoints. Both synthetic and real experiments demonstrate that our approach is reliable on recovering small to medium scale mirror-type objects." ] }
1907.10843
2963402660
Person re-identification (re-ID) solves the task of matching images across cameras and is among the active research topics in the vision community. Since query images in real-world scenarios might suffer from resolution loss, solving the resolution mismatch problem during person re-ID becomes a practical challenge. Instead of applying separate image super-resolution models, we propose a novel network architecture, the Resolution Adaptation and re-Identification Network (RAIN), to solve cross-resolution person re-ID. Advancing the strategy of adversarial learning, we aim at extracting resolution-invariant representations for re-ID, while the proposed model is learned in an end-to-end training fashion. Our experiments confirm that the use of our model can recognize low-resolution query images, even if the resolution is not seen during training. Moreover, the extension of our model for semi-supervised re-ID further confirms the scalability of our proposed method for real-world scenarios and applications.
Person re-ID has been widely studied in the literature. Most of the existing methods @cite_17 @cite_14 @cite_21 @cite_10 @cite_12 @cite_4 @cite_16 @cite_15 @cite_8 @cite_9 @cite_5 focus on tackling the challenges of matching images with viewpoint and pose variations, or those exhibiting background clutter or occlusion. For example, Liu et al. @cite_16 develop a pose-transferable GAN-based @cite_13 framework to address image pose variations. Chen et al. @cite_9 integrate the conditional random field (CRF) with deep neural networks to learn more consistent multi-scale similarity metrics. DaRe @cite_19 combines the feature embeddings extracted from different convolutional layers into a single embedding to train the model in a supervised fashion. Several attention-based methods @cite_10 @cite_4 @cite_8 have further been proposed to focus on learning discriminative parts to mitigate the effect of background clutter. While promising results have been presented, the above approaches typically assume that all images (both query and gallery) are of the same (or similar) resolution, which might not be practical in real-world re-ID applications.
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_4", "@cite_8", "@cite_9", "@cite_21", "@cite_19", "@cite_5", "@cite_15", "@cite_16", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2099471712", "2604463754", "", "2798775284", "2798874329", "2796364723", "", "2798458055", "", "2798429327", "2795013471", "2962706983", "" ], "abstract": [ "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "Abstract Person re-identification (re-ID) and attribute recognition share a common target at learning pedestrian descriptions. Their difference consists in the granularity. Most existing re-ID methods only take identity labels of pedestrians into consideration. However, we find the attributes, containing detailed local descriptions, are beneficial in allowing the re-ID model to learn more discriminative feature representations. In this paper, based on the complementarity of attribute labels and ID labels, we propose an attribute-person recognition (APR) network, a multi-task network which learns a re-ID embedding and at the same time predicts pedestrian attributes. We manually annotate attribute labels for two large-scale re-ID datasets, and systematically investigate how person re-ID and attribute recognition benefit from each other. In addition, we re-weight the attribute predictions considering the dependencies and correlations among the attributes. The experimental results on two large-scale re-ID benchmarks demonstrate that by learning a more discriminative representation, APR achieves competitive re-ID performance compared with the state-of-the-art methods. We use APR to speed up the retrieval process by ten times with a minor accuracy drop of 2.92 on Market-1501. Besides, we also apply APR on the attribute recognition task and demonstrate improvement over the baselines.", "", "Person Re-identification (ReID) is an important yet challenging task in computer vision. Due to the diverse background clutters, variations on viewpoints and body poses, it is far from solved. How to extract discriminative and robust features invariant to background clutters is the core problem. In this paper, we first introduce the binary segmentation masks to construct synthetic RGB-Mask pairs as inputs, then we design a mask-guided contrastive attention model (MGCAM) to learn features separately from the body and background regions. Moreover, we propose a novel region-level triplet loss to restrain the features learnt from different regions, i.e., pulling the features from the full image and body region close, whereas pushing the features from backgrounds away. 
We may be the first one to successfully introduce the binary mask into person ReID task and the first one to propose region-level contrastive learning. We evaluate the proposed method on three public datasets, including MARS, Market-1501 and CUHK03. Extensive experimental results show that the proposed method is effective and achieves the state-of-the-art results. Mask and code will be released upon request.", "Person re-identification benefits greatly from deep neural networks (DNN) to learn accurate similarity metrics and robust feature embeddings. However, most of the current methods impose only local constraints for similarity learning. In this paper, we incorporate constraints on large image groups by combining the CRF with deep neural networks. The proposed method aims to learn the \"local similarity\" metrics for image pairs while taking into account the dependencies from all the images in a group, forming \"group similarities\". Our method involves multiple images to model the relationships among the local and global similarities in a unified CRF during training, while combines multi-scale local similarities as the predicted similarity in testing. We adopt an approximate inference scheme for estimating the group similarity, enabling end-to-end training. Extensive experiments demonstrate the effectiveness of our model that combines DNN and CRF for learning robust multi-scale local similarities. The overall results outperform those by state-of-the-arts with considerable margins on three widely-used benchmarks.", "Person re-identification is a challenging task mainly due to factors such as background clutter, pose, illumination and camera point of view variations. These elements hinder the process of extracting robust and discriminative representations, hence preventing different identities from being successfully distinguished. To improve the representation learning, usually, local features from human body parts are extracted. However, the common practice for such a process has been based on bounding box part detection. In this paper, we propose to adopt human semantic parsing which, due to its pixel-level accuracy and capability of modeling arbitrary contours, is naturally a better alternative. Our proposed SPReID integrates human semantic parsing in person re-identification and not only considerably outperforms its counter baseline, but achieves state-of-the-art performance. We also show that by employing a yet effective training strategy, standard popular deep convolutional architectures such as Inception-V3 and ResNet-152, with no modification, while operating solely on full image, can dramatically outperform current state-of-the-art. Our proposed methods improve state-of-the-art person re-identification on: Market-1501 by 17 in mAP and 6 in rank-1, CUHK03 by 4 in rank-1 and DukeMTMC-reID by 24 in mAP and 10 in rank-1.", "", "Person re-identification aims at finding a person of interest in an image gallery by comparing the probe image of this person with all the gallery images. It is generally treated as a retrieval problem, where the affinities between the probe image and gallery images (P2G affinities) are used to rank the retrieved gallery images. However, most existing methods only consider P2G affinities but ignore the affinities between all the gallery images (G2G affinity). Some frameworks incorporated G2G affinities into the testing process, which is not end-to-end trainable for deep neural networks. 
In this paper, we propose a novel group-shuffling random walk network for fully utilizing the affinity information between gallery images in both the training and testing processes. The proposed approach aims at end-to-end refining the P2G affinities based on G2G affinity information with a simple yet effective matrix operation, which can be integrated into deep neural networks. Feature grouping and group shuffle are also proposed to apply rich supervisions for learning better person features. The proposed approach outperforms state-of-the-art methods on the Market-1501, CUHK03, and DukeMTMC datasets by large margins, which demonstrate the effectiveness of our approach.", "", "Person re-identification (ReID) is an important task in the field of intelligent security. A key challenge is how to capture human pose variations, while existing benchmarks (i.e., Market1501, DukeMTMC-reID, CUHK03, etc.) do NOT provide sufficient pose coverage to train a robust ReID system. To address this issue, we propose a pose-transferrable person ReID framework which utilizes posetransferred sample augmentations (i.e., with ID supervision) to enhance ReID model training. On one hand, novel training samples with rich pose variations are generated via transferring pose instances from MARS dataset, and they are added into the target dataset to facilitate robust training. On the other hand, in addition to the conventional discriminator of GAN (i.e., to distinguish between REAL FAKE samples), we propose a novel guider sub-network which encourages the generated sample (i.e., with novel pose) towards better satisfying the ReID loss (i.e., cross-entropy ReID loss, triplet ReID loss). In the meantime, an alternative optimization procedure is proposed to train the proposed Generator-Guider-Discriminator network. Experimental results on Market-1501, DukeMTMC-reID and CUHK03 show that our method achieves great performance improvement, and outperforms most state-of-the-art methods without elaborate designing the ReID model.", "Typical person re-identification (ReID) methods usually describe each pedestrian with a single feature vector and match them in a task-specific metric space. However, the methods based on a single feature vector are not sufficient enough to overcome visual ambiguity, which frequently occurs in real scenario. In this paper, we propose a novel end-to-end trainable framework, called Dual ATtention Matching network (DuATM), to learn context-aware feature sequences and perform attentive sequence comparison simultaneously. The core component of our DuATM framework is a dual attention mechanism, in which both intra-sequence and inter-sequence attention strategies are used for feature refinement and feature-pair alignment, respectively. Thus, detailed visual cues contained in the intermediate feature sequences can be automatically exploited and properly compared. We train the proposed DuATM network as a siamese network via a triplet loss assisted with a de-correlation loss and a cross-entropy loss. We conduct extensive experiments on both image and video based ReID benchmark datasets. Experimental results demonstrate the significant advantages of our approach compared to the state-of-the-art methods.", "Key to effective person re-identification (Re-ID) is modelling discriminative and view-invariant factors of person appearance at both high and low semantic levels. 
Recently developed deep Re-ID models either learn a holistic single semantic level feature representation and/or require laborious human annotation of these factors as attributes. We propose Multi-Level Factorisation Net (MLFN), a novel network architecture that factorises the visual appearance of a person into latent discriminative factors at multiple semantic levels without manual annotation. MLFN is composed of multiple stacked blocks. Each block contains multiple factor modules to model latent factors at a specific level, and factor selection modules that dynamically select the factor modules to interpret the content of each input image. The outputs of the factor selection modules also provide a compact latent factor descriptor that is complementary to the conventional deeply learned features. MLFN achieves state-of-the-art results on three Re-ID datasets, as well as compelling results on the general object categorisation CIFAR-100 dataset.", "" ] }
1907.10843
2963402660
Person re-identification (re-ID) solves the task of matching images across cameras and is among the active research topics in the vision community. Since query images in real-world scenarios might suffer from resolution loss, solving the resolution mismatch problem during person re-ID becomes a practical challenge. Instead of applying separate image super-resolution models, we propose a novel network architecture, the Resolution Adaptation and re-Identification Network (RAIN), to solve cross-resolution person re-ID. Advancing the strategy of adversarial learning, we aim at extracting resolution-invariant representations for re-ID, while the proposed model is learned in an end-to-end training fashion. Our experiments confirm that our model can recognize low-resolution query images, even if the resolution is not seen during training. Moreover, the extension of our model to semi-supervised re-ID further confirms the scalability of our proposed method for real-world scenarios and applications.
To address the challenging resolution mismatch problem, several methods @cite_22 @cite_20 @cite_18 @cite_34 @cite_11 @cite_28 have recently been proposed. Li et al. @cite_22 present a joint learning framework that simultaneously optimizes cross-scale image domain alignment and discriminant distance metric modeling. SLD @math L @cite_20 learns a pair of HR and LR dictionaries together with the mapping between the feature representations of HR and LR images. Wang et al. @cite_18 explore the scale-distance function space by varying the image scale of LR images when matching them against HR ones. Nevertheless, the above methods employ hand-crafted descriptors, which might limit the generalization of their re-ID capability.
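The cross-resolution approach summarized in the abstract above relies on adversarial learning to make features resolution-invariant. Below is a rough, hypothetical sketch of that general idea, not the authors' actual RAIN architecture: the backbone, feature dimensions, and loss weighting are placeholders. A feature extractor is trained against a resolution discriminator through a gradient reversal layer, so the extractor is pushed toward features from which the input resolution cannot be guessed.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the
    backward pass, pushing the feature extractor toward resolution-invariance."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class ResolutionAdversary(nn.Module):
    """Toy discriminator guessing whether features came from an HR or LR image."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, feats, lamb=1.0):
        return self.net(GradReverse.apply(feats, lamb))

# One illustrative training step: `extractor` would be shared by the usual
# re-ID losses (omitted here) and this adversarial resolution loss.
extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 32, 256))  # placeholder backbone
adversary = ResolutionAdversary(256)
images = torch.randn(8, 3, 64, 32)       # mixed HR/LR batch (toy data)
res_labels = torch.randint(0, 2, (8,))   # 0 = LR, 1 = HR
logits = adversary(extractor(images))
loss_adv = nn.functional.cross_entropy(logits, res_labels)
loss_adv.backward()  # gradients reaching the extractor are reversed
```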
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_28", "@cite_34", "@cite_20", "@cite_11" ], "mid": [ "2573751786", "2213726222", "2967084945", "", "1897123318", "2807957650" ], "abstract": [ "Person re-identification, as an important task in video surveillance and forensics applications, has been widely studied. But most of previous approaches are based on the key assumption that images for comparison have the same resolution and a uniform scale. Some recent works investigate how to match low resolution query images against high resolution gallery images, but still assume that the low-resolution query images have the same scale. In real scenarios, person images may not only be with low-resolution but also have different scales. Through investigating the distance variation behavior by changing image scales, we observe that scale-distance functions, generated by image pairs under different scales from the same person or different persons, are distinguishable and can be classified as feasible (for a pair of images from the same person) or infeasible (for a pair of images from different persons). The scale-distance functions are further represented by parameter vectors in the scale-distance function space. On this basis, we propose to learn a discriminating surface separating these feasible and infeasible functions in the scale-distance function space, and use it for reidentifying persons. Experimental results on two simulated datasets and one public dataset demonstrate the effectiveness of the proposed framework.", "In real world person re-identification (re-id), images of people captured at very different resolutions from different locations need be matched. Existing re-id models typically normalise all person images to the same size. However, a low-resolution (LR) image contains much less information about a person, and direct image scaling and simple size normalisation as done in conventional re-id methods cannot compensate for the loss of information. To solve this LR person re-id problem, we propose a novel joint multi-scale learning framework, termed joint multi-scale discriminant component analysis (JUDEA). The key component of this framework is a heterogeneous class mean discrepancy (HCMD) criterion for cross-scale image domain alignment, which is optimised simultaneously with discriminant modelling across multiple scales in the joint learning framework. Our experiments show that the proposed JUDEA framework outperforms existing representative re-id methods as well as other related LR visual matching models applied for the LR person re-id problem.", "Person re-identification (re-ID) aims at matching images of the same identity across camera views. Due to varying distances between cameras and persons of interest, resolution mismatch can be expected, which would degrade person re-ID performance in real-world scenarios. To overcome this problem, we propose a novel generative adversarial network to address cross-resolution person re-ID, allowing query images with varying resolutions. By advancing adversarial learning techniques, our proposed model learns resolution-invariant image representations while being able to recover the missing details in low-resolution input images. The resulting features can be jointly applied for improving person re-ID performance due to preserving resolution invariance and recovering re-ID oriented discriminative details. 
Our experiments on five benchmark datasets confirm the effectiveness of our approach and its superiority over the state-of-the-art methods, especially when the input resolutions are unseen during training.", "", "Person re-identification has been widely studied due to its importance in surveillance and forensics applications. In practice, gallery images are high-resolution (HR) while probe images are usually low-resolution (LR) in the identification scenarios with large variation of illumination, weather or quality of cameras. Person re-identification in this kind of scenarios, which we call super-resolution (SR) person re-identification, has not been well studied. In this paper, we propose a semi-coupled low-rank discriminant dictionary learning (SLD2L) approach for SR person re-identification. For the given training image set which consists of HR gallery and LR probe images, we aim to convert the features of LR images into discriminating HR features. Specifically, our approach learns a pair of HR and LR dictionaries and a mapping from the features of HR gallery images and LR probe images. To ensure that the converted features using the learned dictionaries and mapping have favorable discriminative capability, we design a discriminant term which requires the converted HR features of LR probe images should be close to the features of HR gallery images from the same person, but far away from the features of HR gallery images from different persons. In addition, we apply low-rank regularization in dictionary learning procedure such that the learned dictionaries can well characterize intrinsic feature space of HR and LR images. Experimental results on public datasets demonstrate the effectiveness of SLD2L.", "" ] }
1907.10738
2962927471
Open book question answering is a type of natural language based QA (NLQA) where questions are expected to be answered with respect to a given set of open book facts and common knowledge about a topic. Recently a challenge involving such QA, OpenBookQA, has been proposed. Unlike most other NLQA tasks that focus on linguistic understanding, OpenBookQA requires deeper reasoning involving linguistic understanding as well as reasoning with common knowledge. In this paper we address QA with respect to the OpenBookQA dataset and combine state of the art language models with abductive information retrieval (IR), information gain based re-ranking, passage selection and weighted scoring to achieve 72.0% accuracy, an 11.6% improvement over the current state of the art.
Among these, the closest to our work are the work of @cite_22 , which performs QA using a fine-tuned language model, and the works of @cite_6 @cite_17 , which perform QA using external knowledge.
{ "cite_N": [ "@cite_22", "@cite_6", "@cite_17" ], "mid": [ "2896457183", "2964222271", "2805387248" ], "abstract": [ "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7 (4.6 absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).", "", "The AI2 Reasoning Challenge (ARC), a new benchmark dataset for question answering (QA) has been recently released. ARC only contains natural science questions authored for human exams, which are hard to answer and require advanced logic reasoning. On the ARC Challenge Set, existing state-of-the-art QA systems fail to significantly outperform random baseline, reflecting the difficult nature of this task. In this paper, we propose a novel framework for answering science exam questions, which mimics human solving process in an open-book exam. To address the reasoning challenge, we construct contextual knowledge graphs respectively for the question itself and supporting sentences. Our model learns to reason with neural embeddings of both knowledge graphs. Experiments on the ARC Challenge Set show that our model outperforms the previous state-of-the-art QA systems." ] }
1907.10738
2962927471
Open book question answering is a type of natural language based QA (NLQA) where questions are expected to be answered with respect to a given set of open book facts and common knowledge about a topic. Recently a challenge involving such QA, OpenBookQA, has been proposed. Unlike most other NLQA tasks that focus on linguistic understanding, OpenBookQA requires deeper reasoning involving linguistic understanding as well as reasoning with common knowledge. In this paper we address QA with respect to the OpenBookQA dataset and combine state of the art language models with abductive information retrieval (IR), information gain based re-ranking, passage selection and weighted scoring to achieve 72.0% accuracy, an 11.6% improvement over the current state of the art.
Related to our work on extracting missing knowledge are the works of @cite_1 @cite_9 @cite_15 , which generate a query either by extracting key terms from a question and an answer option, by classifying key terms, or by using Seq2Seq models to generate key terms. In comparison, we generate queries from the question, an answer option, and an extracted fact using natural language abduction. The task of natural language abduction for natural language understanding has been studied for a long time @cite_28 @cite_16 @cite_29 @cite_14 @cite_8 @cite_7 @cite_13 @cite_2 . However, such works transform the natural language text into a logical form and then use formal reasoning to perform the abduction. In contrast, our system performs abduction directly over natural language text without translating it into a logical form.
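To make the contrast concrete, here is a crude word-overlap caricature of abductive query generation over raw text (a hypothetical simplification written for illustration, not the authors' exact procedure; the function name, stopword list, and tokenization are ours): the query is formed from the hypothesis words that the retrieved fact does not yet cover.

```python
def abductive_query(question: str, option: str, fact: str,
                    stopwords=frozenset({"the", "a", "an", "of", "is", "are",
                                         "to", "and", "in", "which", "these"})) -> str:
    """Return a search query for the 'missing' knowledge: content words of
    the hypothesis (question + answer option) not explained by the fact."""
    hypothesis = {w.lower().strip(".,?") for w in (question + " " + option).split()}
    covered = {w.lower().strip(".,?") for w in fact.split()}
    missing = sorted((hypothesis - covered) - stopwords)
    return " ".join(missing)

# Toy usage: the query targets the gap between the fact and the hypothesis.
q = "Which of these would let the most heat travel through?"
opt = "a steel spoon in a cafeteria"
f = "Metal is a thermal conductor."
print(abductive_query(q, opt, f))  # words like 'steel', 'spoon', 'heat', ...
```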
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_28", "@cite_9", "@cite_29", "@cite_1", "@cite_2", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2158937425", "1550791699", "1569107316", "103216324", "2951922096", "1481897178", "2889472770", "140581939", "2741412196", "2166276234", "2142115151" ], "abstract": [ "Abstract Abduction is inference to the best explanation. In the TACITUS project at SRI we have developed an approach to abductive inference, called “weighted abduction”, that has resulted in a significant simplification of how the problem of interpreting texts is conceptualized. The interpretation of a text is the minimal explanation of why the text would be true. More precisely, to interpret a text, one must prove the logical form of the text from what is already mutually known, allowing for coercions, merging redundancies where possible, and making assumptions where necessary. It is shown how such “local pragmatics” problems as reference resolution, the interpretation of compound nominals, the resolution of syntactic ambiguity and metonymy, and schema recognition can be solved in this manner. Moreover, this approach of “interpretation as abduction” can be combined with the older view of “parsing as deduction” to produce an elegant and thorough integration of syntax, semantics, and pragmatics, one that spans the range of linguistic phenomena from phonology to discourse structure. Finally, we discuss means for making the abduction process efficient, possibilities for extending the approach to other pragmatics phenomena, and the semantics of the weights and costs in the abduction scheme.", "UC (UNIX Consultant) is an intelligent, natural language interface that allows naive users to learn about the UNIX2 operating system. UC was undertaken because the task was thought to be both a fertile domain for artificial intelligence (AI) research and a useful application of AI work in planning, reasoning, natural language processing, and knowledge representation.The current implementation of UC comprises the following components: a language analyzer, called ALANA, produces a representation of the content contained in an utterance; an inference component, called a concretion mechanism, that further refines this content; a goal analyzer, PAGAN, that hypothesizes the plans and goals under which the user is operating; an agent, called UCEgo, that decides on UC's goals and proposes plans for them; a domain planner, called KIP, that computes a plan to address the user's request; an expression mechanism, UCExpress, that determines the content to be communicated to the user, and a language production mechanism, UCGen, that expresses UC's response in English.UC also contains a component, called KNOME, that builds a model of the user's knowledge state with respect to UNIX. Another mechanism, UCTeacher, allows a user to add knowledge of both English vocabulary and facts about UNIX to UC's knowledge base. This is done by interacting with the user in natural language.All these aspects of UC make use of knowledge represented in a knowledge representation system called KODIAK. KODIAK is a relation-oriented system that is intended to have wide representational range and a clear semantics, while maintaining a cognitive appeal. 
All of UC's knowledge, ranging from its most general concepts to the content of a particular utterance, is represented in KODIAK.", "", "An effective story understander must be able to reason about characters in the story, their affects, actions, plans, and goals, as well as the settings and important points of the story. In many systems this has been done with separate inference mechanisms for each class of knowledge structure. This paper proposes a story understander with a unified frame-based inference component used on each class of knowledge structure.", "Open-domain question answering (QA) is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques. Much of the progress in open-domain QA systems has been realized through advances in information retrieval methods and corpus construction. In this paper, we focus on the recently introduced ARC Challenge dataset, which contains 2,590 multiple choice questions authored for grade-school science exams. These questions are selected to be the most challenging for current QA systems, and current state of the art performance is only slightly better than random chance. We present a system that rewrites a given question into queries that are used to retrieve supporting text from a large corpus of science-related text. Our rewriter is able to incorporate background knowledge from ConceptNet and -- in tandem with a generic textual entailment system trained on SciTail that identifies support in the retrieved results -- outperforms several strong baselines on the end-to-end QA task despite only being trained to identify essential terms in the original source question. We use a generalizable decision methodology over the retrieved evidence and answer candidates to select the best answer. By combining query rewriting, background knowledge, and textual entailment our system is able to outperform several strong baselines on the ARC dataset.", "", "Open-domain question answering remains a challenging task as it requires models that are capable of understanding questions and answers, collecting useful information, and reasoning over evidence. Previous work typically formulates this task as a reading comprehension or entailment problem given evidence retrieved from search engines. However, existing techniques struggle to retrieve indirectly related evidence when no directly related evidence is provided, especially for complex questions where it is hard to parse precisely what the question asks. In this paper we propose a retriever-reader model that learns to attend on essential terms during the question answering process. We build (1) an essential term selector which first identifies the most important words in a question, then reformulates the query and searches for related evidence; and (2) an enhanced reader that distinguishes between essential terms and distracting words to predict the answer. We evaluate our model on multiple open-domain multiple-choice QA datasets, notably performing at the level of the state-of-the-art on the AI2 Reasoning Challenge (ARC) dataset.", "We present a semantics for interpreting probabilistic statements expressed in a first-order quantifier-free language. We show how this semantics places constraints on the probabilities which can be associated with such statements. We then consider its use in the area of story understanding. 
We show that for at least simple models of stories (equivalent to the script/plan models) there are ways to specify reasonably good probabilities. Lastly, we show that while the semantics dictates seemingly implausibly low prior probabilities for equality statements, once they are conditioned by an assumption of spatio-temporal locality of observation the probabilities become \"reasonable.\"", "", "The problem of deciding what was implied by a written text, of \"reading between the lines\" is the problem of inference. To extract proper inferences from a text requires a great deal of general knowledge on the part of the reader. Past approaches have often postulated an algorithm tuned to process a particular kind of knowledge structure (such as a script, or a plan). An alternative, unified approach is proposed. The algorithm recognizes six very general classes of inference, classes that are not dependent on individual knowledge structures, but instead rely on patterns of connectivity between concepts. The complexity has been effectively shifted from the algorithm to the knowledge base; new kinds of knowledge structures can be added without modifying the algorithm.", "We propose that logic (enhanced to encode probability information) is a good way of characterizing semantic interpretation. In support of this we give a fragment of an axiomatization for word-sense disambiguation, nounphrase (and verb) reference, and case disambiguation. We describe an inference engine (Frail3) which actually takes this axiomatization and uses it to drive the semantic interpretation process. We claim three benefits from this scheme. First, the interface between semantic interpretation and pragmatics has always been problematic, since all of the above tasks in general require pragmatic inference. Now the interface is trivial, since both semantic interpretation and pragmatics use the same vocabulary and inference engine. The second benefit, related to the first, is that semantic guidance of syntax is a side effect of the interpretation. The third benefit is the elegance of the semantic interpretation theory. A few simple rules capture a remarkable diversity of semantic phenomena." ] }
1907.10903
2963481198
Existing Graph Convolutional Networks (GCNs) are shallow---the number of layers is usually not larger than 2. Deeper variants built by simply stacking more layers unfortunately perform worse, even with well-known tricks like weight penalizing, dropout, and residual connections. This paper reveals that developing deep GCNs mainly encounters two obstacles: over-fitting and over-smoothing. The over-fitting issue weakens the generalization ability on small graphs, while over-smoothing impedes model training by isolating output representations from the input features with the increase in network depth. Hence, we propose DropEdge, a novel technique to alleviate both issues. At its core, DropEdge randomly removes a certain number of edges from the input graphs, acting like a data augmenter and also a message passing reducer. More importantly, DropEdge enables us to recast a wider range of Convolutional Neural Networks (CNNs) from the image field to the graph domain; in particular, we study DenseNet and InceptionNet in this paper. Extensive experiments on several benchmarks demonstrate that our method allows deep GCNs to achieve promising performance, even when the number of layers exceeds 30---the deepest GCN that has ever been proposed.
Inspired by the huge success of CNNs in computer vision, a large number of methods have redefined the notion of convolution on graphs under the umbrella of GCNs. The first prominent research on GCNs is presented in @cite_25 , which develops graph convolution based on spectral graph theory. Later, @cite_14 @cite_17 @cite_8 @cite_9 @cite_15 apply improvements, extensions, and approximations to spectral-based GCNs. To contend with the scalability issue of spectral-based GCNs on large graphs, spatial-based GCNs have been rapidly developed @cite_5 @cite_24 @cite_31 @cite_11 . These methods perform convolution directly in the graph domain by aggregating the information from neighbor nodes. Recently, several sampling-based methods have been proposed for fast graph representation learning, including the node-wise sampling method @cite_5 , the layer-wise approach @cite_7 , and its layer-dependent variant @cite_28 .
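For context, most of the spectral and spatial variants above build on a simple neighborhood-aggregation rule. The NumPy sketch below shows one widely used first-order form, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W), purely for illustration: it uses dense matrices and omits the sparsity and sampling tricks the cited methods rely on.

```python
import numpy as np

def gcn_layer(adj: np.ndarray, feats: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One graph convolution: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
    `adj` is a dense 0/1 adjacency matrix (toy setting), `feats` the
    node feature matrix H, and `weight` the trainable matrix W."""
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))   # symmetric degree normalization
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ feats @ weight, 0.0)

# Toy usage: 4 nodes on a path graph, 3-dim input features, 2-dim output.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
print(gcn_layer(A, H, W).shape)  # (4, 2)
```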
{ "cite_N": [ "@cite_31", "@cite_14", "@cite_11", "@cite_7", "@cite_8", "@cite_28", "@cite_9", "@cite_24", "@cite_5", "@cite_15", "@cite_25", "@cite_17" ], "mid": [ "2406128552", "", "2809418595", "2786915849", "637153065", "2890703109", "2963017945", "2558460151", "2962767366", "2618170429", "1662382123", "" ], "abstract": [ "Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.", "", "Convolutional neural networks (CNNs) have achieved great success on grid-like data such as images, but face tremendous challenges in learning from more generic data such as graphs. In CNNs, the trainable local filters enable the automatic extraction of high-level features. The computation with filters requires a fixed number of ordered units in the receptive fields. However, the number of neighboring units is neither fixed nor are they ordered in generic graphs, thereby hindering the applications of convolutional operations. Here, we address these challenges by proposing the learnable graph convolutional layer (LGCL). LGCL automatically selects a fixed number of neighboring nodes for each feature based on value ranking in order to transform graph data into grid-like structures in 1-D format, thereby enabling the use of regular convolutional operations on generic graphs. To enable model training on large-scale graphs, we propose a sub-graph training method to reduce the excessive memory and computational resource requirements suffered by prior methods on graph convolutions. Our experimental results on node classification tasks in both transductive and inductive learning settings demonstrate that our methods can achieve consistently better performance on the Cora, Citeseer, Pubmed citation network, and protein-protein interaction network datasets. Our results also indicate that the proposed methods using sub-graph training strategy are more efficient as compared to prior approaches.", "The graph convolutional networks (GCN) recently proposed by Kipf and Welling are an effective graph model for semi-supervised learning. This model, however, was originally designed to be learned with the presence of both training and test data. Moreover, the recursive neighborhood expansion across layers poses time and memory challenges for training with large, dense graphs. To relax the requirement of simultaneous availability of test data, we interpret graph convolutions as integral transforms of embedding functions under probability measures. Such an interpretation allows for the use of Monte Carlo approaches to consistently estimate the integrals, which in turn leads to a batched training scheme as we propose in this work---FastGCN. Enhanced with importance sampling, FastGCN not only is efficient for training but also generalizes well for inference. We show a comprehensive set of experiments to demonstrate its effectiveness compared with GCN and related models. 
In particular, training is orders of magnitude more efficient while predictions remain comparably accurate.", "Deep Learning's recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.", "Graph Convolutional Networks (GCNs) have become a crucial tool on learning representations of graph vertices. The main challenge of adapting GCNs on large-scale graphs is the scalability issue that it incurs heavy cost both in computation and memory due to the uncontrollable neighborhood expansion across layers. In this paper, we accelerate the training of GCNs through developing an adaptive layer-wise sampling method. By constructing the network layer by layer in a top-down passway, we sample the lower layer conditioned on the top one, where the sampled neighborhoods are shared by different parent nodes and the over expansion is avoided owing to the fixed-size sampling. More importantly, the proposed sampler is adaptive and applicable for explicit variance reduction, which in turn enhances the training of our method. Furthermore, we propose a novel and economical approach to promote the message passing over distant nodes by applying skip connections. Intensive experiments on several benchmarks verify the effectiveness of our method regarding the classification accuracy while enjoying faster convergence speed.", "", "Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most of deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, or computer graphics. In this paper, we propose a unified framework allowing to generalize CNN architectures to non-Euclidean domains (graphs and manifolds) and learn local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. 
We test the proposed method on standard tasks from the realms of image-, graph- and 3D shape analysis and show that it consistently outperforms previous approaches.", "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.", "The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely-connected graphs, and can handle different constructions of Laplacian operators. Extensive experimental results show the superior performance of our approach, in comparison to other spectral domain convolutional architectures, on spectral image classification, community detection, vertex classification and matrix completion tasks.", "Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.", "" ] }
1907.10903
2963481198
Existing Graph Convolutional Networks (GCNs) are shallow---the number of layers is usually not larger than 2. Deeper variants built by simply stacking more layers unfortunately perform worse, even with well-known tricks like weight penalizing, dropout, and residual connections. This paper reveals that developing deep GCNs mainly encounters two obstacles: over-fitting and over-smoothing. The over-fitting issue weakens the generalization ability on small graphs, while over-smoothing impedes model training by isolating output representations from the input features with the increase in network depth. Hence, we propose DropEdge, a novel technique to alleviate both issues. At its core, DropEdge randomly removes a certain number of edges from the input graphs, acting like a data augmenter and also a message passing reducer. More importantly, DropEdge enables us to recast a wider range of Convolutional Neural Networks (CNNs) from the image field to the graph domain; in particular, we study DenseNet and InceptionNet in this paper. Extensive experiments on several benchmarks demonstrate that our method allows deep GCNs to achieve promising performance, even when the number of layers exceeds 30---the deepest GCN that has ever been proposed.
Despite this fruitful progress, most previous works only focus on shallow GCNs, and deeper extensions are seldom discussed. The work by @cite_30 first introduces the concept of over-smoothing in GCNs, but it does not propose a deep GCN that addresses this issue. Its follow-up study @cite_22 tackles over-smoothing by using personalized PageRank, which additionally brings the rooted node into the message passing loop; however, the accuracy is still observed to decrease when the depth of the GCN increases beyond 2. The JKNet @cite_10 employs skip connections for multi-hop message passing, enabling different neighborhood ranges for better structure-aware representation learning. Unexpectedly, as shown in its experiments, the JKNets that obtain the best accuracy have depth less than 3 on all datasets, except on Cora where the best result is given by the 6-layer network. In this paper, we propose the notion of DropEdge to overcome both the over-fitting and over-smoothing issues simultaneously, and we combine it with various backbone architectures to drive an in-depth analysis of deep GCNs.
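Since DropEdge's core operation is just removing a random subset of edges from the input graph before training, it can be sketched in a few lines. In the sketch below, the per-epoch resampling and the handling of edge direction are our illustrative choices, not details fixed by the paper.

```python
import numpy as np

def drop_edge(edge_index: np.ndarray, drop_rate: float,
              rng: np.random.Generator) -> np.ndarray:
    """Randomly keep a (1 - drop_rate) fraction of the edges.
    `edge_index` has shape (2, E), one column per directed edge."""
    num_edges = edge_index.shape[1]
    keep = rng.random(num_edges) >= drop_rate  # Bernoulli mask per edge
    return edge_index[:, keep]

# Toy usage: resample the retained edges at every epoch, so each epoch
# trains on a different random subgraph (the data-augmentation view) and
# messages propagate over fewer paths (the message-passing-reducer view).
rng = np.random.default_rng(42)
edges = np.array([[0, 1, 1, 2, 2, 3],
                  [1, 0, 2, 1, 3, 2]])
print(drop_edge(edges, drop_rate=0.5, rng=rng))
```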
{ "cite_N": [ "@cite_30", "@cite_10", "@cite_22" ], "mid": [ "2784814091", "", "2949945331" ], "abstract": [ "Many interesting problems in machine learning are being revisited with new deep learning tools. For graph-based semisupervised learning, a recent important development is graph convolutional networks (GCNs), which nicely integrate local vertex features and graph topology in the convolutional layers. Although the GCN model compares favorably with other state-of-the-art methods, its mechanisms are not clear and it still requires a considerable amount of labeled data for validation and model selection. In this paper, we develop deeper insights into the GCN model and address its fundamental limits. First, we show that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of over-smoothing with many convolutional layers. Second, to overcome the limits of the GCN model with shallow architectures, we propose both co-training and self-training approaches to train GCNs. Our approaches significantly improve GCNs in learning with very few labels, and exempt them from requiring additional labels for validation. Extensive experiments on benchmarks have verified our theory and proposals.", "", "Neural message passing algorithms for semi-supervised classification on graphs have recently achieved great success. However, for classifying a node these methods only consider nodes that are a few propagation steps away and the size of this utilized neighborhood is hard to extend. In this paper, we use the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank. We utilize this propagation procedure to construct a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP. Our model's training time is on par or faster and its number of parameters on par or lower than previous models. It leverages a large, adjustable neighborhood for classification and can be easily combined with any neural network. We show that this model outperforms several recently proposed methods for semi-supervised classification in the most thorough study done so far for GCN-like models. Our implementation is available online." ] }
1907.10861
2962969695
For any positive real number @math , the @math -frame potential of @math unit vectors @math is defined as @math . In this paper, we focus on the special case @math and establish the unique minimizer of @math for @math . Our results completely solve the minimization problem of @math -frame potential when @math , which confirms a conjecture posed by Chen, Gonzales, Goodman, Kang and Okoudjou.
For any @math , Ehler and Okoudjou provided another bound in @cite_12 , where equality holds if and only if @math is an equiangular tight frame (ETF) in @math @cite_1 @cite_15 . We take @math as an example. Since there always exist @math unit vectors in @math forming an ETF @cite_3 , the set of these @math vectors is the minimizer of the @math -frame potential for @math .
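The displayed bound itself is elided above. Assuming the common convention for the p-frame potential, the Ehler-Okoudjou inequality for exponents at least 2 plausibly takes the following generalized-Welch-bound form (a hedged LaTeX reconstruction, not a quotation of @cite_12):

```latex
% Plausible form of the elided bound (our reconstruction, assuming the
% convention FP_p(X) = sum over ordered pairs i != j of |<x_i, x_j>|^p):
\[
  \mathrm{FP}_p(x_1,\dots,x_N)
  \;=\; \sum_{i \neq j} \lvert \langle x_i, x_j \rangle \rvert^{p}
  \;\ge\; N(N-1)\left(\frac{N-d}{d(N-1)}\right)^{p/2},
  \qquad p \ge 2,
\]
% with equality if and only if {x_1, ..., x_N} is an equiangular tight
% frame (ETF) in R^d.
```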
{ "cite_N": [ "@cite_15", "@cite_3", "@cite_1", "@cite_12" ], "mid": [ "2072565885", "", "1665316997", "2098598867" ], "abstract": [ "We study frames from the viewpoint of coding theory. We introduce a numerical measure of how well a frame reconstructs vectors when some of the frame coefficients of a vector are lost and then attempt to find and classify the frames that are optimal in this setting.", "", "Quantum key distribution protocols based on equiangular spherical codes are introducedand their behavior under the intercept resend attack investigated. Such protocols offera greater range of secure noise tolerance and speed options than protocols based ontheir cousins, the mutually-unbiased bases, while also enabling the determination of thechannel noise rate without the need to sacrifice key bits. For fixed number of signalstates in a given dimension, the spherical code protocols offer Alice and Bob more noisetolerance at the price of slower key generation rates.", "We investigate the optimal configurations of n points on the unit sphere for a class of potential functions. In particular, we characterize these optimal configurations in terms of their approximation properties within frame theory. Furthermore, we consider similar optimal configurations in terms of random distribution of points on the sphere. In this probabilistic setting, we characterize these optimal distributions by means of probabilistic frames. Our work also indicates some connections between statistical shape analysis and frame theory." ] }
1907.10861
2962969695
For any positive real number @math , the @math -frame potential of @math unit vectors @math is defined as @math . In this paper, we focus on the special case @math and establish the unique minimizer of @math for @math . Our results completely solve the minimization problem of @math -frame potential when @math , which confirms a conjecture posed by Chen, Gonzales, Goodman, Kang and Okoudjou.
However, when @math , not much is known except for a few special cases. In @cite_12 , Ehler and Okoudjou solved the simplest case where @math and @math , and also proved that the minimizer of the @math -frame potential is exactly @math copies of an orthonormal basis if @math , where @math is a positive integer. In @cite_4 , Glazyrin provided a lower bound for any @math , but the condition under which equality holds is very harsh. In @cite_13 , Chen, Gonzales, Goodman, Kang and Okoudjou considered the special case where @math . In particular, numerical experiments in @cite_13 show that the set @math , which is called a lifted ETF, seems to be the minimizer of the @math -frame potential, where @math is an integer depending on @math . Here, @math is defined as a set of @math unit vectors in @math satisfying two defining conditions. Note that @math actually forms an ETF in some subspace @math of dimension @math , while the remaining @math vectors form an orthonormal basis in the orthogonal complement of @math .
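The numerical experiments mentioned above amount to comparing candidate configurations by their p-frame potential. A small NumPy helper (ours, purely for illustration; the function name, the chosen configurations, and the exponents are arbitrary) makes such comparisons easy in the N = d + 1 setting:

```python
import numpy as np

def p_frame_potential(X: np.ndarray, p: float) -> float:
    """Sum of |<x_i, x_j>|^p over all ordered pairs i != j,
    for unit vectors stored as the rows of X."""
    G = np.abs(X @ X.T)                         # absolute Gram matrix
    off = G[~np.eye(len(X), dtype=bool)]        # off-diagonal entries only
    return float(np.sum(off ** p))

# Toy comparison in d = 3, N = 4: an orthonormal basis plus one repeated
# vector versus four simplex directions (an ETF: all |<x_i, x_j>| = 1/3).
d = 3
onb_plus_repeat = np.vstack([np.eye(d), np.eye(d)[:1]])
simplex = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
for p in (0.5, 1.0, 4.0):
    print(p, p_frame_potential(onb_plus_repeat, p), p_frame_potential(simplex, p))
# For small p the basis-plus-repeat wins; for p >= 2 the ETF wins,
# matching the qualitative picture described in the text.
```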
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_12" ], "mid": [ "2935225848", "2940712159", "2098598867" ], "abstract": [ "Given @math and @math we consider a family of functionals, the @math -frame potentials FP @math , defined on the set of all collections of @math unit-norm vectors in @math . For the special case @math and @math , both the minima and the minimizers of these potentials have been thoroughly investigated. In this paper, we investigate the minimizers of the functionals FP @math , by first establishing some general properties of their minima. Thereafter, we focus on the special case @math , for which, surprisingly, not much is known. One of our main results establishes the unique minimizer for big enough @math . Moreover, this minimizer is universal in the sense that it minimizes a large range of energy functions that includes the @math -frame potential. We conclude the paper by reporting some numerical experiments for the case @math , @math , @math . These experiments lead to some conjectures that we pose.", "In this paper, we use the linear programming approach to find new upper bounds for the moments of isotropic measures. These bounds are then utilized for finding lower packing bounds and energy bounds for projective codes. We also show that the obtained energy bounds are sharp for several infinite families of codes.", "We investigate the optimal configurations of n points on the unit sphere for a class of potential functions. In particular, we characterize these optimal configurations in terms of their approximation properties within frame theory. Furthermore, we consider similar optimal configurations in terms of random distribution of points on the sphere. In this probabilistic setting, we characterize these optimal distributions by means of probabilistic frames. Our work also indicates some connections between statistical shape analysis and frame theory." ] }
1907.10861
2962969695
For any positive real number @math , the @math -frame potential of @math unit vectors @math is defined as @math . In this paper, we focus on the special case @math and establish the unique minimizer of @math for @math . Our results completely solve the minimization problem of @math -frame potential when @math , which confirms a conjecture posed by Chen, Gonzales, Goodman, Kang and Okoudjou.
The cases @math and @math of Conjecture were already solved in @cite_12 and @cite_9 , respectively. The first new result on Conjecture was obtained by Glazyrin in @cite_14 , who shows that an orthonormal basis in @math plus a repeated vector minimizes @math for any @math . Combining Glazyrin's result with the previous ones, the minimizer of @math is only known for @math . Recently, Park extended Glazyrin's result to the case @math where @math , and showed that an orthonormal basis plus @math repeated vectors is the minimizer for any @math (see @cite_5 ). But the minimal @math -frame potential problem remains open for the case @math when @math .
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_14", "@cite_12" ], "mid": [ "2916660142", "2182304189", "2908565045", "2098598867" ], "abstract": [ "An extension is given of a recent result of Glazyrin, showing that an orthonormal basis @math joined with the vectors @math , where @math minimizes the @math -frame potential for @math over all collections of @math vectors @math in @math .", "Frames are interesting because they provide decompositions in applications where bases could be a liability. Tight frames are valuable to ensure fast convergence of such decompositions. Normalized frames guarantee control of the frame elements. Finite frames avoid the subtle and omnipresent approximation problems associated with the truncation of infinite frames. In this paper the theory of finite normalized tight frames (FNTFs) is developed. The main theorem is the characterization of all FNTFs in terms of the minima of a potential energy function, which was designed to measure the total orthogonality of a Bessel sequence. Examples of FNTFs abound, e.g., in R3 the vertices of the Platonic solids and of a soccer ball are FNTFs.", "For a set of @math unit vectors @math in @math , by a @math -frame potential we mean @math . In this note, we connect the minimization problem of the @math -frame potential to a certain optimization problem for real functions and find new lower bounds for some values of parameters, particularly, when @math .", "We investigate the optimal configurations of n points on the unit sphere for a class of potential functions. In particular, we characterize these optimal configurations in terms of their approximation properties within frame theory. Furthermore, we consider similar optimal configurations in terms of random distribution of points on the sphere. In this probabilistic setting, we characterize these optimal distributions by means of probabilistic frames. Our work also indicates some connections between statistical shape analysis and frame theory." ] }
1907.10758
1568415925
Wi-Fi was originally designed to provide broadband wireless Internet access for devices that generate rather heavy streams. And Wi-Fi succeeded. The coming revolution of the Internet of Things, with myriads of autonomous devices and machine type communications (MTC) traffic, raises a question: can the Wi-Fi success story be repeated in the area of MTC? Started in 2010, IEEE 802.11 Task Group ah (TGah) has developed a draft amendment to the IEEE 802.11 standard, adapting Wi-Fi to MTC requirements. The performance of novel channel access enhancements in MTC scenarios can hardly be studied with models from Bianchi's clan, which typically assume that traffic load does not change with time. This paper contributes a pioneering analytical approach to study Wi-Fi-based MTC, which can be used to investigate and customize many mechanisms developed by TGah.
The most widely known mathematical model of DCF --- the basic random channel access used in Wi-Fi networks --- was developed by Bianchi in @cite_1 . The model allows estimating the maximal throughput, assuming that a constant number of active STAs operate in saturated conditions. Hence the model cannot be used to solve the problems stated in Section , since in these problems the number of active STAs decreases with time. However, paper @cite_1 contains the basic principles of Wi-Fi modeling. In particular, it introduces the concept of a virtual slot, which is the time interval between consecutive backoff counter changes.
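To illustrate the kind of computation such models involve, the sketch below solves the classic saturation fixed point from @cite_1 by damped iteration: tau is the probability that a STA transmits in a virtual slot, and p is the conditional collision probability. The equations are the standard ones; the station count, contention window, and number of backoff stages are example values.

```python
def bianchi_fixed_point(n, W=32, m=5, iters=5000, damp=0.5):
    """Solve Bianchi's saturation model:
    tau = 2(1 - 2p) / ((1 - 2p)(W + 1) + p W (1 - (2p)^m)),
    p   = 1 - (1 - tau)^(n - 1).
    """
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        tau_new = 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))
        tau = damp * tau + (1 - damp) * tau_new    # damping avoids oscillation
    return tau, p

n = 20                                    # 20 saturated STAs (example value)
tau, p = bianchi_fixed_point(n)
p_busy = 1 - (1 - tau) ** n               # probability a virtual slot is non-empty
p_one = n * tau * (1 - tau) ** (n - 1)    # probability of exactly one transmission
```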
{ "cite_N": [ "@cite_1" ], "mid": [ "2162598825" ], "abstract": [ "The IEEE has standardized the 802.11 protocol for wireless local area networks. The primary medium access control (MAC) technique of 802.11 is called the distributed coordination function (DCF). The DCF is a carrier sense multiple access with collision avoidance (CSMA CA) scheme with binary slotted exponential backoff. This paper provides a simple, but nevertheless extremely accurate, analytical model to compute the 802.11 DCF throughput, in the assumption of finite number of terminals and ideal channel conditions. The proposed analysis applies to both the packet transmission schemes employed by DCF, namely, the basic access and the RTS CTS access mechanisms. In addition, it also applies to a combination of the two schemes, in which packets longer than a given threshold are transmitted according to the RTS CTS mechanism. By means of the proposed model, we provide an extensive throughput performance evaluation of both access mechanisms of the 802.11 protocol." ] }
1907.10758
1568415925
Wi-Fi was originally designed to provide broadband wireless Internet access for devices that generate rather heavy streams. And Wi-Fi succeeded. The coming revolution of the Internet of Things, with myriads of autonomous devices and machine type communications (MTC) traffic, raises a question: can the Wi-Fi success story be repeated in the area of MTC? Started in 2010, IEEE 802.11 Task Group ah (TGah) has developed a draft amendment to the IEEE 802.11 standard, adapting Wi-Fi to MTC requirements. The performance of novel channel access enhancements in MTC scenarios can hardly be studied with models from Bianchi's clan, which typically assume that traffic load does not change with time. This paper contributes a pioneering analytical approach to study Wi-Fi-based MTC, which can be used to investigate and customize many mechanisms developed by TGah.
Paper @cite_0 presents a model that allows estimating the maximal throughput (again, in saturated scenarios) if all STAs are divided equally into several groups and each slot is assigned to a group. It proves that RAW increases throughput manifold in a network with thousands of STAs; however, the model cannot be used for our problems for the aforesaid reasons.
{ "cite_N": [ "@cite_0" ], "mid": [ "2028769336" ], "abstract": [ "In IEEE 802.11 networks, how to improve the efficiency of contention-based media access is an important, challenging issue. Recently, the grouping strategy is introduced in the IEEE 802.11ah standard to alleviate the channel contention. In IEEE 802.11ah networks, stations can be divided into groups and each group is only allowed to access wireless channel during the designated channel access period. By limiting the number of stations participating in the channel contention, it is anticipated that such a grouping strategy could substantially improve the communication efficiency. However, how to allocate the channel among different groups and how to adjust the number and sizes of groups are still open issues. In this paper, we first study the impact of the grouping strategy on the network performance, and then propose an analytical model to track the performance under saturated traffic. The accuracy of our model has been validated by simulation results. Our analytical model and results also provide important guidelines in optimizing grouping parameters." ] }
1907.10758
1568415925
Wi-Fi was originally designed to provide broadband wireless Internet access for devices that generate rather heavy streams. And Wi-Fi succeeded. The coming revolution of the Internet of Things, with myriads of autonomous devices and machine type communications (MTC) traffic, raises a question: can the Wi-Fi success story be repeated in the area of MTC? Started in 2010, IEEE 802.11 Task Group ah (TGah) has developed a draft amendment to the IEEE 802.11 standard, adapting Wi-Fi to MTC requirements. The performance of novel channel access enhancements in MTC scenarios can hardly be studied with models from Bianchi's clan, which typically assume that traffic load does not change with time. This paper contributes a pioneering analytical approach to study Wi-Fi-based MTC, which can be used to investigate and customize many mechanisms developed by TGah.
In @cite_4 , the authors consider another protocol, IEEE 802.15.4, which uses a channel access method similar to EDCA. However, in 802.15.4 a STA senses the channel only when its backoff ends. Although both papers present a performance evaluation in a scenario similar to the one described in this paper, the authors assume that the collision probability is constant, while in reality both the varying contention window and the changing number of STAs with packets to transmit make the collision probability vary with time.
{ "cite_N": [ "@cite_4" ], "mid": [ "2168959209" ], "abstract": [ "In this paper, a mathematical model for the beacon-enabled mode of the IEEE 802.15.4 medium-access control (MAC) protocol is provided. A personal area network (PAN) composed of multiple nodes, which transmit data to a PAN coordinator through direct links or multiple hops, is considered. The application is query based: Upon reception of the beacon transmitted by the PAN coordinator, each node tries to transmit its packet using the superframe structure defined by the IEEE 802.15.4 protocol. Those nodes that do not succeed in accessing the channel discard the packet; at the next superframe, a new packet is generated. The aim of the paper is to develop a flexible mathematical tool able to study beacon-enabled 802.15.4 networks organized in different topologies. Both the contention access period (CAP) and the contention-free period defined by the standard are considered. The slotted carrier-sense multiple access with collision avoidance (CSMA CA) algorithm used in the CAP portion of the superframe is analytically modeled. The model describes the probability of packet successful reception and access delay statistics. Moreover, both star and tree-based topologies are dealt with; a suitable comparison between these topologies is provided. The model is a useful tool for the design of MAC parameters and to select the better topology. The mathematical model is validated through simulation results. The model differs from those previously published by other authors in the literature as it precisely follows the MAC procedure defined by the standard in the context of the application scenario described." ] }
1907.10758
1568415925
Wi-Fi was originally designed to provide broadband wireless Internet access for devices that generate rather heavy streams. And Wi-Fi succeeded. The coming revolution of the Internet of Things, with myriads of autonomous devices and machine type communications (MTC) traffic, raises a question: can the Wi-Fi success story be repeated in the area of MTC? Started in 2010, IEEE 802.11 Task Group ah (TGah) has developed a draft amendment to the IEEE 802.11 standard, adapting Wi-Fi to MTC requirements. The performance of novel channel access enhancements in MTC scenarios can hardly be studied with models from Bianchi's clan, which typically assume that traffic load does not change with time. This paper contributes a pioneering analytical approach to study Wi-Fi-based MTC, which can be used to investigate and customize many mechanisms developed by TGah.
The authors of @cite_7 study the power saving mechanism. They have developed a model that allows estimating the average energy consumed by a STA and the average time a STA needs to retrieve its data. As shown in , even though the model developed in @cite_7 can be used to find the average frame transmission time for a STA, it cannot be used to find the correct time distribution required in problem A. Besides that, it cannot be used at all to solve problem B.
{ "cite_N": [ "@cite_7" ], "mid": [ "1991290762" ], "abstract": [ "Communication is an enabling technology for the efficient control and management of next-generation Smart Grids. Energy conservation of the communication devices is essential for future large scale deployment of Smart Grid communication networks. However, existing power save protocols experience high contention in Smart Grid communication networks that have a large number of nodes and periodic traffic. We design a new energy conservation protocol, Power Save with Offset Listen Interval (PS-OLi), to address such contention problems. PS-OLi avoids message collisions by controlling the station wake up time with a calculated offset. A new analytical model is developed to characterize the power save performance of networks with periodic traffic. Simulation results show that our analytical model accurately predicts the collision probability and packet delay. We use our model to evaluate the energy efficiency of PS-OLi and standard power save protocols. Our results show that PS-OLi extends the lifetime of a Smart Grid communication network by more than 10 ." ] }
1907.10786
2963577681
Despite the recent advances of Generative Adversarial Networks (GANs) in high-fidelity image synthesis, there is a lack of understanding of how GANs map the latent code sampled from a random distribution to a photo-realistic image. Previous work assumes the latent space learned by a GAN follows a distributed representation, yet observes the vector arithmetic phenomenon of the output's semantics in latent space. In this work, we interpret the semantics hidden in the latent space of well-trained GANs. We find that the latent code of well-trained generative models, such as ProgressiveGAN and StyleGAN, actually learns a disentangled representation after some linear transformations. We rigorously analyze the encoding of various semantics in the latent space as well as their properties, and then study how these semantics are correlated with each other. Based on our analysis, we propose a simple and general technique, called InterFaceGAN, for semantic face editing in latent space. Given a synthesized face, we are able to faithfully edit its various attributes, such as pose, expression, age, and presence of eyeglasses, without retraining the GAN model. Furthermore, we show that even the artifacts that occur in output images can be fixed using the same approach. Extensive results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable facial attribute representation.
GANs @cite_6 have attracted wide attention in recent years. Efforts to improve GANs span various aspects, including designing better objective functions @cite_22 @cite_12 , improving synthesis diversity @cite_14 @cite_33 @cite_8 , increasing image resolution @cite_24 @cite_7 , and stabilizing training @cite_11 @cite_26 @cite_0 . Despite this tremendous success, little work has been done on understanding what GANs learn in the process of synthesizing the real visual world. Prior work @cite_22 @cite_35 observed the vector arithmetic property in the latent space. Bau et al. @cite_5 analyzed GANs by visualizing the spatial feature maps and studying the behavior of different units in intermediate layers. However, a detailed study of the fine-grained relationship between the input latent space and the semantic attributes of output images is still missing.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_26", "@cite_33", "@cite_22", "@cite_8", "@cite_7", "@cite_6", "@cite_24", "@cite_0", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "2963870144", "2950893734", "2962879692", "2963836885", "2963684088", "2893749619", "2904367110", "2099471712", "2766527293", "2605195953", "", "2964024144", "" ], "abstract": [ "We propose Deep Feature Interpolation (DFI), a new data-driven baseline for automatic high-resolution image transformation. As the name suggests, DFI relies only on simple linear interpolation of deep convolutional features from pre-trained convnets. We show that despite its simplicity, DFI can perform high-level semantic transformations like make older younger, make bespectacled, add smile, among others, surprisingly well&#x2013;sometimes even matching or outperforming the state-of-the-art. This is particularly unexpected as DFI requires no specialized network architecture or even any deep network to be trained for these tasks. DFI therefore can be used as a new baseline to evaluate more complex algorithms and provides a practical answer to the question of which image transformation tasks are still challenging after the advent of deep learning.", "In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.", "Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.", "One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. 
Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques.", "Abstract: In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "", "We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. 
This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.", "We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.", "", "Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing textto- image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256.256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions.", "" ] }
1907.10786
2963577681
Despite the recent advances of Generative Adversarial Networks (GANs) in high-fidelity image synthesis, there is a lack of understanding of how GANs map the latent code sampled from a random distribution to a photo-realistic image. Previous work assumes the latent space learned by a GAN follows a distributed representation, yet observes the vector arithmetic phenomenon of the output's semantics in latent space. In this work, we interpret the semantics hidden in the latent space of well-trained GANs. We find that the latent code of well-trained generative models, such as ProgressiveGAN and StyleGAN, actually learns a disentangled representation after some linear transformations. We rigorously analyze the encoding of various semantics in the latent space as well as their properties, and then study how these semantics are correlated with each other. Based on our analysis, we propose a simple and general technique, called InterFaceGAN, for semantic face editing in latent space. Given a synthesized face, we are able to faithfully edit its various attributes, such as pose, expression, age, and presence of eyeglasses, without retraining the GAN model. Furthermore, we show that even the artifacts that occur in output images can be fixed using the same approach. Extensive results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable facial attribute representation.
Besides improving GANs to synthesize images unconditionally, plenty of work has been done to control the contents and attributes of the outputs. CGAN @cite_31 was the first to add such constraints to the training procedure. Specifically, an additional label is fed into the generator together with the random latent code and is then used as supervision to ensure that the GAN outputs an image of the desired category. In this way, the latent code and the auxiliary label are considered decomposed, such that changing one does not affect the other. This idea was further extended with more carefully designed loss functions @cite_34 @cite_27 , the introduction of semantic attribute features @cite_19 @cite_21 @cite_3 @cite_10 , and novel architectures @cite_4 @cite_13 to improve disentanglement and synthesis quality. However, all these approaches require additional information to be involved in GAN training. InfoGAN @cite_25 learned a disentangled latent space in an unsupervised manner by adding regularizers to the generator that maximize mutual information. Different from these learning-based methods, this work explores the disentanglement of semantics in the latent space of unconstrained GANs without retraining or redesigning the models themselves.
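The label-conditioning mechanism described above is simple to make concrete. The following PyTorch fragment is a generic illustration under common conventions (not the architecture of any cited paper): the class label is embedded and concatenated with the latent code, so the label steers the generated output while the latent code carries the remaining variation.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Minimal CGAN-style generator: an embedded class label is concatenated
    with the latent code so the label controls the output category."""
    def __init__(self, z_dim=64, n_classes=10, out_dim=784):
        super().__init__()
        self.embed = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

g = ConditionalGenerator()
fake = g(torch.randn(8, 64), torch.randint(0, 10, (8,)))  # 8 samples of chosen classes
```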
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_21", "@cite_3", "@cite_19", "@cite_27", "@cite_31", "@cite_34", "@cite_10", "@cite_25" ], "mid": [ "2799209711", "2962975391", "", "2964118024", "2963100452", "2737047298", "2125389028", "2548275288", "2902483944", "2963226019" ], "abstract": [ "Face synthesis has achieved advanced development by using generative adversarial networks (GANs). Existing methods typically formulate GAN as a two-player game, where a discriminator distinguishes face images from the real and synthesized domains, while a generator reduces its discriminativeness by synthesizing a face of photorealistic quality. Their competition converges when the discriminator is unable to differentiate these two domains. Unlike two-player GANs, this work generates identity-preserving faces by proposing FaceID-GAN, which treats a classifier of face identity as the third player, competing with the generator by distinguishing the identities of the real and synthesized faces (see Fig.1). A stationary point is reached when the generator produces faces that have high quality as well as preserve identity. Instead of simply modeling the identity classifier as an additional discriminator, FaceID-GAN is formulated by satisfying information symmetry, which ensures that the real and synthesized images are projected into the same feature space. In other words, the identity classifier is used to extract identity features from both input (real) and output (synthesized) face images of the generator, substantially alleviating training difficulty of GAN. Extensive experiments show that FaceID-GAN is able to generate faces of arbitrary viewpoint while preserve identity, outperforming recent advanced approaches.", "We propose a new algorithm for training generative adversarial networks to jointly learn latent codes for both identities (e.g. individual humans) and observations (e.g. specific photographs). In practice, this means that by fixing the identity portion of latent codes, we can generate diverse images of the same subject, and by fixing the observation portion we can traverse the manifold of subjects while maintaining contingent aspects such as lighting and pose. Our algorithm features a pairwise training scheme in which each sample from the generator consists of two images with a common identity code. Corresponding samples from the real dataset consist of two distinct photographs of the same subject. In order to fool the discriminator, the generator must produce images that are both photorealistic, distinct, and appear to depict the same person. We augment both the DCGAN and BEGAN approaches with Siamese discriminators to accommodate pairwise training. Experiments with human judges and an off-the-shelf face verification system demonstrate our algorithm’s ability to generate convincing, identity-matched photographs.", "", "Recent studies on face attribute transfer have achieved great success. A lot of models are able to transfer face attributes with an input image. However, they suffer from three limitations: (1) incapability of generating image by exemplars; (2) being unable to transfer multiple face attributes simultaneously; (3) low quality of generated images, such as low-resolution or artifacts. To address these limitations, we propose a novel model which receives two images of opposite attributes as inputs. Our model can transfer exactly the same type of attributes from one image to another by exchanging certain part of their encodings. 
All the attributes are encoded in a disentangled manner in the latent space, which enables us to manipulate several attributes simultaneously. Besides, our model learns the residual images so as to facilitate training on higher resolution images. With the help of multi-scale discriminators for adversarial training, it can even generate high-quality images with finer details and less artifacts. We demonstrate the effectiveness of our model on overcoming the above three limitations by comparing with other methods on the CelebA face database. A pytorch implementation is available at https: github.com Prinsphield ELEGANT.", "Despite recent advances in face recognition using deep learning, severe accuracy drops are observed for large pose variations in unconstrained environments. Learning pose-invariant features is one solution, but needs expensively labeled large-scale data and carefully designed feature learning algorithms. In this work, we focus on frontalizing faces in the wild under various head poses, including extreme profile view's. We propose a novel deep 3D Morphable Model (3DMM) conditioned Face Frontalization Generative Adversarial Network (GAN), termed as FF-GAN, to generate neutral head pose face images. Our framework differs from both traditional GANs and 3DMM based modeling. Incorporating 3DMM into the GAN structure provides shape and appearance priors for fast convergence with less training data, while also supporting end-to-end training. The 3DMM-conditioned GAN employs not only the discriminator and generator loss but also a new masked symmetry loss to retain visual quality under occlusions, besides an identity loss to recover high frequency information. Experiments on face recognition, landmark localization and 3D reconstruction consistently show the advantage of our frontalization method on faces in the wild datasets. 1", "The large pose discrepancy between two face images is one of the key challenges in face recognition. Conventional approaches for pose-invariant face recognition either perform face frontalization on, or learn a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes Disentangled Representation learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator allows DR-GAN to learn a generative and discriminative representation, in addition to image synthesis. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as the input, and generate one unified representation along with an arbitrary number of synthetic images. Quantitative and qualitative evaluation on both controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art.", "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. 
We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.", "Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7 of the classes have samples exhibiting diversity comparable to real ImageNet data.", "The advance of Generative Adversarial Networks (GANs) enables realistic face image synthesis. However, synthesizing face images that preserve facial identity as well as have high diversity within each identity remains challenging. To address this problem, we present FaceFeat-GAN, a novel generative model that improves both image quality and diversity by using two stages. Unlike existing single-stage models that map random noise to image directly, our two-stage synthesis includes the first stage of diverse feature generation and the second stage of feature-to-image rendering. The competitions between generators and discriminators are carefully designed in both stages with different objective functions. Specially, in the first stage, they compete in the feature domain to synthesize various facial features rather than images. In the second stage, they compete in the image domain to render photo-realistic images that contain high diversity but preserve identity. Extensive experiments show that FaceFeat-GAN generates images that not only retain identity information but also have high diversity and quality, significantly outperforming previous methods.", "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods. For an up-to-date version of this paper, please see https: arxiv.org abs 1606.03657." ] }
1907.10786
2963577681
Despite the recent advances of Generative Adversarial Networks (GANs) in high-fidelity image synthesis, there is a lack of understanding of how GANs map the latent code sampled from a random distribution to a photo-realistic image. Previous work assumes the latent space learned by a GAN follows a distributed representation, yet observes the vector arithmetic phenomenon of the output's semantics in latent space. In this work, we interpret the semantics hidden in the latent space of well-trained GANs. We find that the latent code of well-trained generative models, such as ProgressiveGAN and StyleGAN, actually learns a disentangled representation after some linear transformations. We rigorously analyze the encoding of various semantics in the latent space as well as their properties, and then study how these semantics are correlated with each other. Based on our analysis, we propose a simple and general technique, called InterFaceGAN, for semantic face editing in latent space. Given a synthesized face, we are able to faithfully edit its various attributes, such as pose, expression, age, and presence of eyeglasses, without retraining the GAN model. Furthermore, we show that even the artifacts that occur in output images can be fixed using the same approach. Extensive results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable facial attribute representation.
The latent space is treated as a Riemannian manifold by recent work @cite_23 @cite_18 @cite_36 , which focuses on making the output image vary more smoothly through interpolation in latent space. This idea is improved in @cite_17 by employing feature-based metrics as the path length in image space. Other work @cite_28 observed that linear paths in latent space closely approximate geodesics on the generated manifold. There are also methods targeting the inversion from image space back to latent space @cite_32 @cite_20 @cite_15 for better image manipulation. GLO @cite_9 optimized the generator and latent codes simultaneously to learn a better latent space. Unlike them, this paper studies the latent space by probing its hidden semantic subspaces with linear attribute classifiers. Some concurrent work also explores the semantics in the latent space of GANs for image manipulation: @cite_30 studied the steerability of GAN models by shifting the latent distribution, achieving control of camera motion and image color tone, while @cite_1 improved the memorability of the output image by varying the latent code.
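The linear attribute classifier probing mentioned above suggests a simple editing recipe, sketched here with fabricated stand-in data: fit a linear classifier on (latent code, attribute label) pairs, take the unit normal of its decision boundary as the attribute direction, and shift latent codes along it. In practice the labels would come from an attribute predictor applied to the generated images; here they are synthetic for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
Z = rng.standard_normal((1000, 512))             # stand-in latent codes
y = (Z[:, 0] + 0.1 * rng.standard_normal(1000) > 0).astype(int)  # toy attribute labels

clf = LogisticRegression(max_iter=1000).fit(Z, y)
n = clf.coef_[0] / np.linalg.norm(clf.coef_[0])  # unit normal of the decision boundary

z = rng.standard_normal(512)
z_edited = z + 3.0 * n  # move along the attribute direction, then feed to the generator
```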
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_36", "@cite_28", "@cite_9", "@cite_1", "@cite_32", "@cite_23", "@cite_15", "@cite_20", "@cite_17" ], "mid": [ "2959108703", "2964011399", "2804013387", "2964231450", "2737057113", "2950419363", "2552611751", "", "2890223728", "2519536754", "2909751683" ], "abstract": [ "An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to training on biased data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise -- these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by \"steering\" in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution, and conduct experiments that demonstrate this. Code is released on our project page: this https URL", "Deep generative models provide a systematic way to learn nonlinear data distributions through a set of latent variables and a nonlinear \"generator\" function that maps latent points into the input space. The nonlinearity of the generator implies that the latent space gives a distorted view of the input space. Under mild conditions, we show that this distortion can be characterized by a stochastic Riemannian metric, and we demonstrate that distances and interpolants are significantly improved under this metric. This in turn improves probability distributions, sampling algorithms and clustering in the latent space. Our geometric analysis further reveals that current generators provide poor variance estimates and we propose a new generator architecture with vastly improved variance estimates. Results are demonstrated on convolutional and fully connected variational autoencoders, but the formalism easily generalizes to other deep generative models.", "Given data, deep generative models, such as variational autoencoders (VAE) and generative adversarial networks (GAN), train a lower dimensional latent representation of the data space. The linear Euclidean geometry of data space pulls back to a nonlinear Riemannian geometry on the latent space. The latent space thus provides a low-dimensional nonlinear representation of data and classical linear statistical techniques are no longer applicable. In this paper we show how statistics of data in their latent space representation can be performed using techniques from the field of nonlinear manifold statistics. Nonlinear manifold statistics provide generalizations of Euclidean statistical notions including means, principal component analysis, and maximum likelihood fits of parametric probability distributions. 
We develop new techniques for maximum likelihood inference in latent space, and adress the computational complexity of using geometric algorithms with high-dimensional data by training a separate neural network to approximate the Riemannian metric and cometric tensor capturing the shape of the learned data manifold.", "Deep generative models learn a mapping from a low-dimensional latent space to a high-dimensional data space. Under certain regularity conditions, these models parameterize nonlinear manifolds in the data space. In this paper, we investigate the Riemannian geometry of these generated manifolds. First, we develop efficient algorithms for computing geodesic curves, which provide an intrinsic notion of distance between points on the manifold. Second, we develop an algorithm for parallel translation of a tangent vector along a path on the manifold. We show how parallel translation can be used to generate analogies, i.e., to transport a change in one data point into a semantically similar change of another data point. Our experiments on real image data show that the manifolds learned by deep generative models, while nonlinear, are surprisingly close to zero curvature. The practical implication is that linear paths in the latent space closely approximate geodesics on the generated manifold.", "Generative Adversarial Networks (GANs) have been shown to be able to sample impressively realistic images. GAN training consists of a saddle point optimization problem that can be thought of as an adversarial game between a generator which produces the images, and a discriminator, which judges if the images are real. Both the generator and the discriminator are commonly parametrized as deep convolutional neural networks. The goal of this paper is to disentangle the contribution of the optimization procedure and the network parametrization to the success of GANs. To this end we introduce and study Generative Latent Optimization (GLO), a framework to train a generator without the need to learn a discriminator, thus avoiding challenging adversarial optimization problems. We show experimentally that GLO enjoys many of the desirable properties of GANs: learning from large data, synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors.", "We introduce a framework that uses Generative Adversarial Networks (GANs) to study cognitive properties like memorability, aesthetics, and emotional valence. These attributes are of interest because we do not have a concrete visual definition of what they entail. What does it look like for a dog to be more or less memorable? GANs allow us to generate a manifold of natural-looking images with fine-grained differences in their visual attributes. By navigating this manifold in directions that increase memorability, we can visualize what it looks like for a particular generated image to become more or less memorable. The resulting visual definitions\" surface image properties (like object size\") that may underlie memorability. Through behavioral experiments, we verify that our method indeed discovers image manipulations that causally affect human memory performance. We further demonstrate that the same framework can be used to analyze image aesthetics and emotional valence. Visit the GANalyze website at this http URL.", "Generative Adversarial Networks (GANs) have recently demonstrated to successfully approximate complex data distributions. 
A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information allows to determine specific representations of the generated images. In this work, we evaluate encoders to inverse the mapping of a cGAN, i.e., mapping a real image into a latent space and a conditional representation. This allows, for example, to reconstruct and modify real images of faces conditioning on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables to re-generate real images with deterministic complex modifications.", "", "In this work, we present new theoretical results on convolutional generative neural networks, in particular their invertibility (i.e., the recovery of input latent code given the network output). This inversion problem is highly non-convex, which is in general computationally challenging and has no performance guarantee. However, we rigorously prove that, even when the network output is only partially observed (e.g., with missing pixels), the input of a two-layer convolutional generative network can always be computed from the network output, using simple gradient descent. This new theoretical finding implies that the mapping from the low-dimensional latent space to the high-dimensional image space is bijective (i.e., one-to-one). Our theorem holds for 2-layer convolutional generative network with relu as the activation function, but we demonstrate that the same conclusion empirically extends to multi-layer networks and networks with other activation functions (including the leaky relu, sigmoid and tanh). Our proof is built on our newly proposed permutation technique, which can potentially be generalized to networks with multiple layers and in other theoretical studies on convolutional neural networks, and thus is a merit on its own.", "Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result. Unless the user has considerable artistic skill, it is easy to “fall off” the manifold of natural images while editing. In this paper, we propose to learn the natural image manifold directly from data using a generative adversarial neural network. We then define a class of image editing operations, and constrain their output to lie on that learned manifold at all times. The model automatically adjusts the output keeping all edits as realistic as possible. All our manipulations are expressed in terms of constrained optimization and are applied in near-real time. We evaluate our algorithm on the task of realistic photo manipulation of shape and color. The presented method can further be used for changing one image to look like the other, as well as generating novel imagery from scratch based on user’s scribbles.", "" ] }
1907.10801
2963861395
Automatic image aesthetics assessment is important for a wide variety of applications such as on-line photo suggestion, photo album management and image retrieval. Previous methods have focused on mapping the holistic image content to a high or low aesthetics rating. However, the composition information of an image characterizes the harmony of its visual elements according to the principles of art, and provides richer information for learning aesthetics. In this work, we propose to model the image composition information as the mutual dependency of its local regions, and design a novel architecture to leverage such information to boost the performance of aesthetics assessment. To achieve this, we densely partition an image into local regions and compute aesthetics-preserving features over the regions to characterize the aesthetics properties of image content. With the feature representation of local regions, we build a region composition graph in which each node denotes one region and any two nodes are connected by an edge weighted by the similarity of the region features. We perform reasoning on this graph via graph convolution, in which the activation of each node is determined by its highly correlated neighbors. Our method naturally uncovers the mutual dependency of local regions in the network training procedure, and achieves the state-of-the-art performance on the benchmark visual aesthetics datasets.
Modeling the relations of different visual components in visual data has proven effective in the computer vision community. Ma et al. @cite_14 proposed to model higher-order object interactions with an attention mechanism for understanding actions in videos. Wang et al. @cite_29 proposed to represent a video as a space-time graph that captures temporal dynamics and functional relations between humans and objects, and then applied graph convolution over the video graph to learn the long-range dependencies among the human and object entities in the video. @cite_27 proposed a non-local operation for capturing long-range dependencies among visual elements and achieved state-of-the-art results on various computer vision tasks. In image segmentation, modeling the contextual dependency of local segments with conditional random fields (CRFs) @cite_20 has become an inevitable step toward good performance. Methodologically, our method is closely related to the relational reasoning networks @cite_53 @cite_52 @cite_57 in the machine learning community, which were originally proposed to deal with structured data such as text and speech. In particular, we are motivated by @cite_51 due to its recent success in the computer vision community @cite_46 @cite_30 . We adopt the graph convolution operation as the region dependency modeling mechanism in our aesthetics model, leading to state-of-the-art results.
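To make the region-dependency mechanism concrete, the sketch below performs one graph-convolution step over a similarity-weighted region graph, roughly in the spirit described above. The feature sizes and the softmax edge normalization are illustrative choices, not the exact formulation of the cited paper.

```python
import numpy as np

def region_graph_conv(F, W):
    """One graph-convolution step over a region composition graph.
    F: (n_regions, d) region features; W: (d, d_out) weight matrix.
    Edges are weighted by pairwise feature similarity, softmax-normalized per row."""
    S = F @ F.T                                   # dot-product similarity between regions
    S = S - S.max(axis=1, keepdims=True)          # numerical stability for the softmax
    A = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)
    return np.maximum(A @ F @ W, 0.0)             # aggregate neighbors, then ReLU

rng = np.random.default_rng(0)
F = rng.standard_normal((16, 32))                 # 16 local regions, 32-dim features
W = rng.standard_normal((32, 32)) * 0.1           # would be learned in practice
F_out = region_graph_conv(F, W)                   # context-aware region features
```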
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_29", "@cite_53", "@cite_52", "@cite_57", "@cite_27", "@cite_46", "@cite_51", "@cite_20" ], "mid": [ "", "", "2806331055", "2890671550", "", "", "", "2791092480", "2116341502", "2161236525" ], "abstract": [ "", "", "How do humans recognize the action “opening a book”? We argue that there are two important cues: modeling temporal shape dynamics and modeling functional relationships between humans and objects. In this paper, we propose to represent videos as space-time region graphs which capture these two important cues. Our graph nodes are defined by the object region proposals from different frames in a long range video. These nodes are connected by two types of relations: (i) similarity relations capturing the long range dependencies between correlated objects and (ii) spatial-temporal relations capturing the interactions between nearby objects. We perform reasoning on this graph representation via Graph Convolutional Networks. We achieve state-of-the-art results on the Charades and Something-Something datasets. Especially for Charades with complex environments, we obtain a huge (4.4 ) gain when our model is applied in complex environments.", "Convolutional neural networks (CNNs) are inherently subject to invariable filters that can only aggregate local inputs with the same topological structures. It causes that CNNs are allowed to manage data with Euclidean or grid-like structures (e.g., images), not ones with non-Euclidean or graph structures (e.g., traffic networks). To broaden the reach of CNNs, we develop structure-aware convolution to eliminate the invariance, yielding a unified mechanism of dealing with both Euclidean and non-Euclidean structured data. Technically, filters in the structure-aware convolution are generalized to univariate functions, which are capable of aggregating local inputs with diverse topological structures. Since infinite parameters are required to determine a univariate function, we parameterize these filters with numbered learnable parameters in the context of the function approximation theory. By replacing the classical convolution in CNNs with the structure-aware convolution, Structure-Aware Convolutional Neural Networks (SACNNs) are readily established. Extensive experiments on eleven datasets strongly evidence that SACNNs outperform current models on various machine learning tasks, including image classification and clustering, text categorization, skeleton-based action recognition, molecular activity detection, and taxi flow prediction. Code will be available.", "", "", "", "Convolutional neural networks (CNNs) have massively impacted visual recognition in 2D images, and are now ubiquitous in state-of-the-art approaches. CNNs do not easily extend, however, to data that are not represented by regular grids, such as 3D shape meshes or other graph-structured data, to which traditional local convolution operators do not directly apply. To address this problem, we propose a novel graph-convolution operator to establish correspondences between filter weights and graph neighborhoods with arbitrary connectivity. The key novelty of our approach is that these correspondences are dynamically computed from features learned by the network, rather than relying on predefined static coordinates over the graph as in previous work. We obtain excellent experimental results that significantly improve over previous state-of-the-art shape correspondence results. 
This shows that our approach can learn effective shape representations from raw input coordinates, without relying on shape descriptors.", "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G,n) isin IRm that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.", "Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy." ] }
1907.10265
2962741219
Cyber-physical system applications such as autonomous vehicles, wearable devices, and avionic systems generate a large volume of time-series data. Designers often look for tools to help classify and categorize the data. Traditional machine learning techniques for time-series data offer several solutions to solve these problems; however, the artifacts trained by these algorithms often lack interpretability. On the other hand, temporal logics, such as Signal Temporal Logic (STL) have been successfully used in the formal methods community as specifications of time-series behaviors. In this work, we propose a new technique to automatically learn temporal logic formulae that are able to cluster and classify real-valued time-series data. Previous work on learning STL formulas from data either assumes a formula-template to be given by the user, or assumes some special fragment of STL that enables exploring the formula structure in a systematic fashion. In our technique, we relax these assumptions, and provide a way to systematically explore the space of all STL formulas. As the space of all STL formulas is very large, and contains many semantically equivalent formulas, we suggest a technique to heuristically prune the space of formulas considered. Finally, we illustrate our technique on various case studies from the automotive, transportation and healthcare domain.
There has been considerable recent work on learning STL formulas from data for various applications such as supervised learning @cite_28 @cite_31, clustering @cite_15 @cite_33, or anomaly detection @cite_17.
{ "cite_N": [ "@cite_33", "@cite_15", "@cite_28", "@cite_31", "@cite_17" ], "mid": [ "2963427179", "2964140991", "2339807279", "2086092403", "2086359741" ], "abstract": [ "Cyber-physical systems of today are generating large volumes of time-series data. As manual inspection of such data is not tractable, the need for learning methods to help discover logical structure in the data has increased. We propose a logic-based framework that allows domain-specific knowledge to be embedded into formulas in a parametric logical specification over time-series data. The key idea is to then map a time series to a surface in the parameter space of the formula. Given this mapping, we identify the Hausdorff distance between surfaces as a natural distance metric between two time-series data under the lens of the parametric specification. This enables embedding non-trivial domain-specific knowledge into the distance metric and then using off-the-shelf machine learning tools to label the data. After labeling the data, we demonstrate how to extract a logical specification for each label. Finally, we showcase our technique on real world traffic data to learn classifiers monitors for slow-downs and traffic jams.", "", "This paper introduces a framework for inference of timed temporal logic properties from data. The dataset is given as a finite set of pairs of finite-time system traces and labels, where the labels indicate whether the traces exhibit some desired behavior (e.g., a ship traveling along a safe route). We propose a decision-tree based approach for learning signal temporal logic classifiers. The method produces binary decision trees that represent the inferred formulae. Each node of the tree contains a test associated with the satisfaction of a simple formula, optimally tuned from a predefined finite set of primitives. Optimality is assessed using heuristic impurity measures, which capture how well the current primitive splits the data with respect to the traces' labels. We propose extensions of the usual impurity measures from machine learning literature to handle classification of system traces by leveraging upon the robustness degree concept. The proposed incremental construction procedure greatly improves the execution time and the accuracy compared to existing algorithms. We present two case studies that illustrate the usefulness and the computational advantages of the algorithms. The first is an anomaly detection problem in a maritime environment. The second is a fault detection problem in an automotive powertrain system.", "This paper presents an inference algorithm that can discover temporal logic properties of a system from data. Our algorithm operates on finite time system trajectories that are labeled according to whether or not they demonstrate some desirable system properties (e.g. \"the car successfully stops before hitting an obstruction\"). A temporal logic formula that can discriminate between the desirable behaviors and the undesirable ones is constructed. The formulae also indicate possible causes for each set of behaviors (e.g. \"If the speed of the car is greater than 15 m s within 0.5s of brake application, the obstruction will be struck\") which can be used to tune designs or to perform on-line monitoring to ensure the desired behavior. We introduce reactive parameter signal temporal logic (rPSTL), a fragment of parameter signal temporal logic (PSTL) that is expressive enough to capture causal, spatial, and temporal relationships in data. 
We define a partial order over the set of rPSTL formulae that is based on language inclusion. This order enables a directed search over this set, i.e. given a candidate rPSTL formula that does not adequately match the observed data, we can automatically construct a formula that will fit the data at least as well. Two case studies, one involving a cattle herding scenario and one involving a stochastic hybrid gene circuit model, are presented to illustrate our approach.", "As the complexity of cyber-physical systems increases, so does the number of ways an adversary can disrupt them. This necessitates automated anomaly detection methods to detect possible threats. In this paper, we extend our recent results in the field of inference via formal methods to develop an unsupervised learning algorithm. Our procedure constructs from data a signal temporal logic (STL) formula that describes normal system behavior. Trajectories that do not satisfy the learned formula are flagged as anomalous. STL can be used to formulate properties such as “If the train brakes within 500 m of the platform at a speed of 50 km/hr, then it will stop in at least 30 s and at most 50 s.” STL gives a more human-readable representation of behavior than classifiers represented as surfaces in high-dimensional feature spaces. STL formulae can also be used for early detection via online monitoring and for anomaly mitigation via formal synthesis. We demonstrate the power of our method with a physical model of a train's brake system. To our knowledge, this paper is the first instance of formal methods being applied to anomaly detection." ] }
1907.10265
2962741219
Cyber-physical system applications such as autonomous vehicles, wearable devices, and avionic systems generate a large volume of time-series data. Designers often look for tools to help classify and categorize the data. Traditional machine learning techniques for time-series data offer several solutions to solve these problems; however, the artifacts trained by these algorithms often lack interpretability. On the other hand, temporal logics, such as Signal Temporal Logic (STL), have been successfully used in the formal methods community as specifications of time-series behaviors. In this work, we propose a new technique to automatically learn temporal logic formulae that are able to cluster and classify real-valued time-series data. Previous work on learning STL formulas from data either assumes a formula template to be given by the user, or assumes some special fragment of STL that enables exploring the formula structure in a systematic fashion. In our technique, we relax these assumptions and provide a way to systematically explore the space of all STL formulas. As the space of all STL formulas is very large and contains many semantically equivalent formulas, we suggest a technique to heuristically prune the space of formulas considered. Finally, we illustrate our technique on various case studies from the automotive, transportation and healthcare domains.
In @cite_31, a fragment of PSTL (rPSTL, or reactive parametric signal temporal logic) is defined to capture causal relationships from data. However, some temporal properties, namely concurrent eventuality and nested always-eventually, cannot be described directly in rPSTL. In @cite_17, the authors extend @cite_31 by using a fragment of rPSTL, inference parametric STL (iPSTL), that does not require a causal structure; in this work, classical ML algorithms (one-class support vector machines) are applied to the unsupervised learning problem. In @cite_28, a decision-tree based method is employed to learn STL formulas: it creates a map between a restricted fragment of STL and a binary decision tree in order to build an STL classifier. While this seminal work has advanced research at the intersection of formal methods and machine learning, one disadvantage of these approaches is that they lead to long formulas, which can become an issue for interpretability.
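To make the robustness-based classification used in these works concrete, here is a minimal sketch (our illustration, not code from the cited papers; the function names, sampling step, and example trace are invented) that computes the robustness of the STL primitives G_[a,b](x > c) and F_[a,b](x > c) over a uniformly sampled trace; its sign is exactly the satisfaction test that a decision-tree split on such a primitive thresholds.

import numpy as np

def rob_always_gt(trace, a, b, c, dt=1.0):
    """Robustness of G_[a,b](x > c) at time 0: the worst margin x(t) - c
    over the window [a, b]; positive iff the formula is satisfied."""
    i0, i1 = int(a / dt), int(b / dt) + 1
    return float(np.min(trace[i0:i1] - c))

def rob_eventually_gt(trace, a, b, c, dt=1.0):
    """Robustness of F_[a,b](x > c) at time 0: the best margin over [a, b]."""
    i0, i1 = int(a / dt), int(b / dt) + 1
    return float(np.max(trace[i0:i1] - c))

# The sign of the robustness acts as a binary label for a trace, so it can
# serve directly as a classifier on labeled time-series data.
speed = np.array([12.0, 14.0, 15.5, 16.0, 13.0, 11.0])  # one sampled trace
print(rob_always_gt(speed, a=0, b=3, c=10.0))      # 2.0 -> satisfied
print(rob_eventually_gt(speed, a=0, b=5, c=15.0))  # 1.0 -> satisfied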
{ "cite_N": [ "@cite_28", "@cite_31", "@cite_17" ], "mid": [ "2339807279", "2086092403", "2086359741" ], "abstract": [ "This paper introduces a framework for inference of timed temporal logic properties from data. The dataset is given as a finite set of pairs of finite-time system traces and labels, where the labels indicate whether the traces exhibit some desired behavior (e.g., a ship traveling along a safe route). We propose a decision-tree based approach for learning signal temporal logic classifiers. The method produces binary decision trees that represent the inferred formulae. Each node of the tree contains a test associated with the satisfaction of a simple formula, optimally tuned from a predefined finite set of primitives. Optimality is assessed using heuristic impurity measures, which capture how well the current primitive splits the data with respect to the traces' labels. We propose extensions of the usual impurity measures from machine learning literature to handle classification of system traces by leveraging upon the robustness degree concept. The proposed incremental construction procedure greatly improves the execution time and the accuracy compared to existing algorithms. We present two case studies that illustrate the usefulness and the computational advantages of the algorithms. The first is an anomaly detection problem in a maritime environment. The second is a fault detection problem in an automotive powertrain system.", "This paper presents an inference algorithm that can discover temporal logic properties of a system from data. Our algorithm operates on finite time system trajectories that are labeled according to whether or not they demonstrate some desirable system properties (e.g. \"the car successfully stops before hitting an obstruction\"). A temporal logic formula that can discriminate between the desirable behaviors and the undesirable ones is constructed. The formulae also indicate possible causes for each set of behaviors (e.g. \"If the speed of the car is greater than 15 m s within 0.5s of brake application, the obstruction will be struck\") which can be used to tune designs or to perform on-line monitoring to ensure the desired behavior. We introduce reactive parameter signal temporal logic (rPSTL), a fragment of parameter signal temporal logic (PSTL) that is expressive enough to capture causal, spatial, and temporal relationships in data. We define a partial order over the set of rPSTL formulae that is based on language inclusion. This order enables a directed search over this set, i.e. given a candidate rPSTL formula that does not adequately match the observed data, we can automatically construct a formula that will fit the data at least as well. Two case studies, one involving a cattle herding scenario and one involving a stochastic hybrid gene circuit model, are presented to illustrate our approach.", "As the complexity of cyber-physical systems increases, so does the number of ways an adversary can disrupt them. This necessitates automated anomaly detection methods to detect possible threats. In this paper, we extend our recent results in the field of inference via formal methods to develop an unsupervised learning algorithm. Our procedure constructs from data a signal temporal logic (STL) formula that describes normal system behavior. Trajectories that do not satisfy the learned formula are flagged as anomalous. 
STL can be used to formulate properties such as “If the train brakes within 500 m of the platform at a speed of 50 km/hr, then it will stop in at least 30 s and at most 50 s.” STL gives a more human-readable representation of behavior than classifiers represented as surfaces in high-dimensional feature spaces. STL formulae can also be used for early detection via online monitoring and for anomaly mitigation via formal synthesis. We demonstrate the power of our method with a physical model of a train's brake system. To our knowledge, this paper is the first instance of formal methods being applied to anomaly detection." ] }
1907.10265
2962741219
Cyber-physical system applications such as autonomous vehicles, wearable devices, and avionic systems generate a large volume of time-series data. Designers often look for tools to help classify and categorize the data. Traditional machine learning techniques for time-series data offer several solutions to solve these problems; however, the artifacts trained by these algorithms often lack interpretability. On the other hand, temporal logics, such as Signal Temporal Logic (STL), have been successfully used in the formal methods community as specifications of time-series behaviors. In this work, we propose a new technique to automatically learn temporal logic formulae that are able to cluster and classify real-valued time-series data. Previous work on learning STL formulas from data either assumes a formula template to be given by the user, or assumes some special fragment of STL that enables exploring the formula structure in a systematic fashion. In our technique, we relax these assumptions and provide a way to systematically explore the space of all STL formulas. As the space of all STL formulas is very large and contains many semantically equivalent formulas, we suggest a technique to heuristically prune the space of formulas considered. Finally, we illustrate our technique on various case studies from the automotive, transportation and healthcare domains.
In template-based techniques, a fixed PSTL template is provided by the user, and the techniques only learn the values of the parameters associated with the PSTL formula. In @cite_15, a total ordering on the parameter space of PSTL specifications is utilized to obtain feature vectors for learning logical specifications. Unfortunately, recognizing the best total ordering is not straightforward for users. In @cite_33, the authors eliminate this additional burden on the user by suggesting a method that maps timed traces to surfaces in the parameter space of the formula, and then employs these surfaces as features. In @cite_29, the input to the algorithm is a requirement template expressed in PSTL, and the traces are actively generated from a model of the system (a toy sketch of this parameter-mining idea is given after the reference block below). Our proposed technique, which uses systematic enumeration, can produce smaller formulas which may be more human-interpretable, and with higher accuracy($ 92
{ "cite_N": [ "@cite_15", "@cite_29", "@cite_33" ], "mid": [ "2964140991", "2956034981", "2963427179" ], "abstract": [ "", "Formal verification of a control system can be performed by checking if a model of its dynamical behavior conforms to temporal requirements. Unfortunately, adoption of formal verification in an industrial setting is a formidable challenge as design requirements are often vague, nonmodular, evolving, or sometimes simply unknown. We propose a framework to mine requirements from a closed-loop model of an industrial-scale control system, such as one specified in Simulink. The input to our algorithm is a requirement template expressed in parametric signal temporal logic: a logical formula in which concrete signal or time values are replaced with parameters. Given a set of simulation traces of the model, our method infers values for the template parameters to obtain the strongest candidate requirement satisfied by the traces. It then tries to falsify the candidate requirement using a falsification tool. If a counterexample is found, it is added to the existing set of traces and these steps are repeated; otherwise, it terminates with the synthesized requirement. Requirement mining has several usage scenarios: mined requirements can be used to formally validate future modifications of the model, they can be used to gain better understanding of legacy models or code, and can also help enhancing the process of bug finding through simulations. We demonstrate the scalability and utility of our technique on three complex case studies in the domain of automotive powertrain systems: a simple automatic transmission controller, an air-fuel controller with a mean-value model of the engine dynamics, and an industrial-size prototype airpath controller for a diesel engine. We include results on a bug found in the prototype controller by our method.", "Cyber-physical systems of today are generating large volumes of time-series data. As manual inspection of such data is not tractable, the need for learning methods to help discover logical structure in the data has increased. We propose a logic-based framework that allows domain-specific knowledge to be embedded into formulas in a parametric logical specification over time-series data. The key idea is to then map a time series to a surface in the parameter space of the formula. Given this mapping, we identify the Hausdorff distance between surfaces as a natural distance metric between two time-series data under the lens of the parametric specification. This enables embedding non-trivial domain-specific knowledge into the distance metric and then using off-the-shelf machine learning tools to label the data. After labeling the data, we demonstrate how to extract a logical specification for each label. Finally, we showcase our technique on real world traffic data to learn classifiers monitors for slow-downs and traffic jams." ] }
1907.10218
2963829227
The state-of-the-art federated learning brings a new direction for the data privacy protection of mobile crowdsensing machine learning applications. However, besides being vulnerable to GAN based user data reconstruction attacks, the existing gradient descent based federated learning schemes lack consideration of how to preserve the model privacy. In this paper, we propose a secret sharing based federated extreme boosting learning framework (FedXGB) to achieve privacy-preserving model training for mobile crowdsensing. First, a series of protocols are designed to implement privacy-preserving extreme gradient boosting of classification and regression trees. The protocols preserve the user data privacy protection feature of federated learning, i.e., XGBoost is trained without revealing plaintext user data. Then, in consideration of the high commercial value of a well-trained model, a secure prediction protocol is developed to protect the model privacy for the crowdsensing sponsor. Additionally, we conduct comprehensive theoretical analysis and extensive experiments to evaluate the security, effectiveness and efficiency of FedXGB. The results show that FedXGB is secure in the honest-but-curious model, and attains accuracy and convergence rate comparable to the original model with low runtime.
Most of the existing privacy-preserving works for machine learning are data driven and based on traditional cryptographic algorithms. For example, Q. Wang @cite_23 proposed a privacy-preserving data mining model learning scheme for canonical correlation analysis in cross-media retrieval systems using garbled circuits. Z. Ma @cite_9 proposed a lightweight ensemble classification learning framework for universal face recognition systems by exploiting additive secret sharing. Considering the wide applications of gradient boosting decision trees (GBDT) in data mining, L. Zhao @cite_11 utilized differential privacy technology to implement two novel privacy-preserving schemes for classification and regression tasks, respectively. Towards protecting patients' medical data privacy in e-Health systems, X. Liu @cite_12 advocated a homomorphic encryption based scheme to implement privacy-preserving reinforcement learning for patient-centric dynamic treatment regimes. Being data security driven, the above four types of privacy-preserving schemes still have to upload encrypted user data to a central server, causing massive extra communication overhead.
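For readers unfamiliar with the additive secret sharing primitive mentioned above, here is a minimal sketch (ours, purely illustrative; the modulus choice and function names are assumptions, not taken from the cited works): a secret is split into random shares that individually reveal nothing, yet shares can be added component-wise, so servers can compute a sum without ever seeing the inputs.

import secrets

P = 2**61 - 1  # a large prime modulus (an illustrative choice)

def share(x, n=3):
    """Split integer x into n additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Each server only sees one random-looking share, yet share-wise addition
# lets the servers jointly compute a sum without learning either input.
a, b = 42, 100
sa, sb = share(a), share(b)
sum_shares = [(x + y) % P for x, y in zip(sa, sb)]
print(reconstruct(sum_shares))  # 142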
{ "cite_N": [ "@cite_9", "@cite_12", "@cite_23", "@cite_11" ], "mid": [ "2922482186", "2911667635", "2762867797", "2793685318" ], "abstract": [ "The development of machine learning technology and visual sensors is promoting the wider applications of face recognition into our daily life. However, if the face features in the servers are abused by the adversary, our privacy and wealth can be faced with great threat. Many security experts have pointed out that, by 3-D-printing technology, the adversary can utilize the leaked face feature data to masquerade others and break the E-bank accounts. Therefore, in this paper, we propose a lightweight privacy-preserving adaptive boosting (AdaBoost) classification framework for face recognition (POR) based on the additive secret sharing and edge computing. First, we improve the current additive secret sharing-based exponentiation and logarithm functions by expanding the effective input range. Then, by utilizing the protocols, two edge servers are deployed to cooperatively complete the ensemble classification of AdaBoost for face recognition. The application of edge computing ensures the efficiency and robustness of POR. Furthermore, we prove the correctness and security of our protocols by theoretic analysis. And experiment results show that, POR can reduce about 58 computation error compared with the existing differential privacy-based framework.", "In this paper, we propose a privacy-preserving reinforcement learning framework for a patient-centric dynamic treatment regime, which we refer to as Preyer. Using Preyer, a patient-centric treatment strategy can be made spontaneously while preserving the privacy of the patient's current health state and the treatment decision. Specifically, we first design a new storage and computation method to support noninteger processing for multiple encrypted domains. A new secure plaintext length control protocol is also proposed to avoid plaintext overflow after executing secure computation repeatedly. Moreover, we design a new privacy-preserving reinforcement learning framework with experience replay to build the model for secure dynamic treatment policymaking. Furthermore, we prove that Preyer facilitates patient dynamic treatment policymaking without leaking sensitive information to unauthorized parties. We also demonstrate the utility and efficiency of Preyer using simulations and analysis.", "A massive explosion of various types of data has been triggered in the “Big Data” era. In big data systems, machine learning plays an important role due to its effectiveness in discovering hidden information and valuable knowledge. Data privacy, however, becomes an unavoidable concern since big data usually involve multiple organizations, e.g., different healthcare systems and hospitals, who are not in the same trust domain and may be reluctant to share their data publicly. Applying traditional cryptographic tools is a straightforward approach to protect sensitive information, but it often renders learning algorithms useless inevitably. In this work, we, for the first time, propose a novel privacy-preserving scheme for canonical correlation analysis (CCA), which is a well-known learning technique and has been widely used in cross-media retrieval system. We first develop a library of building blocks to support various arithmetics over encrypted real numbers by leveraging additively homomorphic encryption and garbled circuits. 
Then we encrypt private data by randomly splitting the numerical data, formalize the CCA problem and reduce it to a symmetric eigenvalue problem by designing new protocols for privacy-preserving QR decomposition. Finally, we solve all the eigenvalues and the corresponding eigenvectors by running the Newton-Raphson method and the inverse power method over the ciphertext domain. We carefully analyze the security and extensively evaluate the effectiveness of our design. The results show that our scheme is practically secure, incurs negligible errors compared with performing CCA in the clear and performs comparably in cross-media retrieval systems.", "Data mining has heralded the major breakthrough in data analysis, serving as a “super cruncher” to discover hidden information and valuable knowledge in big data systems. For many applications, the collection of big data usually involves various parties who are interested in pooling their private data sets together to jointly train machine-learning models that yield more accurate prediction results. However, data owners may not be willing to disclose their own data due to privacy concerns, making it imperative to provide privacy guarantee in collaborative data mining over distributed data sets. In this paper, we focus on tree-based data mining. To begin with, we design novel privacy-preserving schemes for two most common tasks: regression and binary classification, where individual data owners can perform training locally in a differentially private manner. Then, for the first time, we design and implement a privacy-preserving system for gradient boosting decision tree (GBDT), where different regression trees trained by multiple data owners can be securely aggregated into an ensemble. We conduct extensive experiments to evaluate the performance of our system on multiple real-world data sets. The results demonstrate that our system can provide a strong privacy protection for individual data owners while maintaining the prediction accuracy of the original trained model." ] }
1907.10218
2963829227
The state-of-the-art federated learning brings a new direction for the data privacy protection of mobile crowdsensing machine learning applications. However, besides being vulnerable to GAN based user data reconstruction attacks, the existing gradient descent based federated learning schemes lack consideration of how to preserve the model privacy. In this paper, we propose a secret sharing based federated extreme boosting learning framework (FedXGB) to achieve privacy-preserving model training for mobile crowdsensing. First, a series of protocols are designed to implement privacy-preserving extreme gradient boosting of classification and regression trees. The protocols preserve the user data privacy protection feature of federated learning, i.e., XGBoost is trained without revealing plaintext user data. Then, in consideration of the high commercial value of a well-trained model, a secure prediction protocol is developed to protect the model privacy for the crowdsensing sponsor. Additionally, we conduct comprehensive theoretical analysis and extensive experiments to evaluate the security, effectiveness and efficiency of FedXGB. The results show that FedXGB is secure in the honest-but-curious model, and attains accuracy and convergence rate comparable to the original model with low runtime.
Therefore, the federated learning concept was proposed @cite_22. However, up to now, only a few works have adapted the architecture to propose practical schemes for applications @cite_13, and most existing federated learning schemes still concentrate on SGD based models. For example, considering the limited bandwidth, precious storage and imperative privacy problems in modern Internet of Things (IoT) environments, S. Wang @cite_16 provided an SGD based federated machine learning architecture built on edge nodes. For privacy-preserving machine learning model training in smart vehicles, S. Sumudu @cite_0 proposed a novel federated learning based joint transmit power and resource allocation approach. And to prevent the adversary from analyzing hidden information about user private data in the uploaded gradient values, cryptographic methods were then added to the original federated learning scheme to protect the gradients. B. Keith @cite_15 designed a universal and practical model aggregation scheme for mobile devices with secret sharing technology. In @cite_24, N. Richard utilized homomorphic encryption to protect the uploaded gradients and designed an entity resolution and federated learning framework.
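The intuition behind such secure aggregation of gradients can be sketched in a few lines (a conceptual toy, not the actual protocol of @cite_15, which additionally handles key agreement, finite-field encoding and user dropouts): each pair of users agrees on a random mask that one adds and the other subtracts, so the masks cancel in the server-side sum while each individual masked update looks random.

import numpy as np

rng = np.random.default_rng(0)
updates = {u: rng.normal(size=4) for u in range(3)}  # local model updates

# Pairwise masks: user i adds m_ij and user j subtracts it, so the masks
# cancel in the aggregate while each masked update alone reveals nothing.
masked = {u: updates[u].copy() for u in updates}
users = sorted(updates)
for i in range(len(users)):
    for j in range(i + 1, len(users)):
        m = rng.normal(size=4)
        masked[users[i]] += m
        masked[users[j]] -= m

server_sum = sum(masked.values())
print(np.allclose(server_sum, sum(updates.values())))  # True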
{ "cite_N": [ "@cite_22", "@cite_0", "@cite_24", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2541884796", "2963333146", "2793216106", "2767079719", "2793925626", "2920095265" ], "abstract": [ "", "In this paper, a novel joint transmit power and resource allocation approach for enabling ultra-reliable low-latency communication (URLLC) in vehicular networks is proposed. The objective is to minimize the network-wide power consumption of vehicular users (VUEs) while ensuring high reliability in terms of probabilistic queuing delays. In particular, a reliability measure is defined to characterize extreme events (i.e., when vehicles' queue lengths exceed a predefined threshold with non-negligible probability) using extreme value theory (EVT). Leveraging principles from federated learning (FL), the distribution of these extreme events corresponding to the tail distribution of queues is estimated by VUEs in a decentralized manner. Finally, Lyapunov optimization is used to find the joint transmit power and resource allocation policies for each VUE in a distributed manner. The proposed solution is validated via extensive simulations using a Manhattan mobility model. It is shown that FL enables the proposed distributed method to estimate the tail distribution of queues with an accuracy that is very close to a centralized solution with up to 79 reductions in the amount of data that need to be exchanged. Furthermore, the proposed method yields up to 60 reductions of VUEs with large queue lengths, without an additional power consumption, compared to an average queue-based baseline. Compared to systems with fixed power consumption and focusing on queue stability while minimizing average power consumption, the reductions in extreme events of the proposed method is about two orders of magnitude.", "Consider two data providers, each maintaining records of different feature sets about common entities. They aim to learn a linear model over the whole set of features. This problem of federated learning over vertically partitioned data includes a crucial upstream issue: entity resolution, i.e. finding the correspondence between the rows of the datasets. It is well known that entity resolution, just like learning, is mistake-prone in the real world. Despite the importance of the problem, there has been no formal assessment of how errors in entity resolution impact learning. In this paper, we provide a thorough answer to this question, answering how optimal classifiers, empirical losses, margins and generalisation abilities are affected. While our answer spans a wide set of losses --- going beyond proper, convex, or classification calibrated ---, it brings simple practical arguments to upgrade entity resolution as a preprocessing step to learning. One of these suggests that entity resolution should be aimed at controlling or minimizing the number of matching errors between examples of distinct classes. In our experiments, we modify a simple token-based entity resolution algorithm so that it indeed aims at avoiding matching rows belonging to different classes, and perform experiments in the setting where entity resolution relies on noisy data, which is very relevant to real world domains. Notably, our approach covers the case where one peer have classes, or a noisy record of classes. 
Experiments display that using the class information during entity resolution can buy significant uplift for learning at little expense from the complexity standpoint.", "We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers 1.73× communication expansion for 2^10 users and 2^20-dimensional vectors, and 1.98× expansion for 2^14 users and 2^24-dimensional vectors over sending data in the clear.", "Emerging technologies and applications including Internet of Things (IoT), social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent based approaches. We analyze the convergence rate of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best trade-off between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimentation results show that our proposed approach performs near to the optimum with various machine learning models and different data distributions.", "There is an increasing interest in a new machine learning technique called Federated Learning, in which the model training is distributed over mobile user equipments (UEs), and each UE contributes to the learning model by independently computing the gradient based on its local training data. Federated Learning has several benefits of data privacy and potentially a large amount of UE participants with modern powerful processors and low-delay mobile-edge networks. While most of the existing work focused on designing learning algorithms with provable convergence time, other issues such as uncertainty of wireless channels and UEs with heterogeneous power constraints and local data size, are under-explored. These issues especially affect various trade-offs: (i) between computation and communication latencies determined by learning accuracy level, and thus (ii) between the Federated Learning time and UE energy consumption. 
We fill this gap by formulating a Federated Learning over wireless network as an optimization problem FEDL that captures both trade-offs. Even though FEDL is non-convex, we exploit the problem structure to decompose and transform it to three convex sub-problems. We also obtain the globally optimal solution by characterizing the closed-form solutions to all sub-problems, which give qualitative insights to problem design via the obtained optimal FEDL learning time, accuracy level, and UE energy cost. Our theoretical analysis is also illustrated by extensive numerical results." ] }
1907.10274
2964328619
Photorealistic style transfer aims to transfer the style of a reference photo onto a content photo naturally, such that the stylized image looks like a real photo taken by a camera. Existing state-of-the-art methods are prone to spatial structure distortion of the content image and global color inconsistency across different semantic objects, making the results less photorealistic. In this paper, we propose a one-shot mutual Dirichlet network to address these challenging issues. The essential contribution of the work is the realization of a representation scheme that successfully decouples the spatial structure and color information of images, such that the spatial structure can be well preserved during stylization. This representation is discriminative and context-sensitive with respect to semantic objects. It is extracted with a shared sparse Dirichlet encoder. Moreover, such representation is encouraged to be matched between the content and style images for faithful color transfer. The affine-transfer model is embedded in the decoder of the network to facilitate the color transfer. The strong representative and discriminative power of the proposed network enables one-shot learning given only one content-style image pair. Experimental results demonstrate that the proposed method is able to generate photorealistic photos without spatial distortion or abrupt color changes.
Classical style transfer methods stylize an image in a global fashion with spatially invariant transfer functions @cite_41 @cite_7 @cite_3 @cite_2 @cite_43 @cite_23. These methods can handle global color shifts, but they are limited in matching sophisticated styles with drastic color changes @cite_36 @cite_17, as shown in Fig. .
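As a minimal sketch of such a global, spatially invariant transfer (in the spirit of @cite_41, though the original operates in the lαβ color space while this toy version stays in RGB; all names are ours), each channel of the content image is shifted and scaled to match the style image's per-channel mean and standard deviation.

import numpy as np

def match_mean_std(content, style):
    """Globally shift/scale each channel of `content` so its mean and
    standard deviation match those of `style` (arrays of shape HxWx3)."""
    c = content.astype(np.float64)
    s = style.astype(np.float64)
    out = np.empty_like(c)
    for ch in range(3):
        mu_c, sd_c = c[..., ch].mean(), c[..., ch].std() + 1e-8
        mu_s, sd_s = s[..., ch].mean(), s[..., ch].std()
        out[..., ch] = (c[..., ch] - mu_c) * (sd_s / sd_c) + mu_s
    return np.clip(out, 0, 255)

content = np.random.randint(0, 256, (4, 4, 3))
style = np.random.randint(0, 256, (4, 4, 3))
print(match_mean_std(content, style).shape)  # (4, 4, 3)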
{ "cite_N": [ "@cite_7", "@cite_41", "@cite_36", "@cite_3", "@cite_43", "@cite_23", "@cite_2", "@cite_17" ], "mid": [ "2141015396", "2129112648", "2604721644", "2147821504", "", "", "2006957355", "2963683323" ], "abstract": [ "This article proposes an original method to estimate a continuous transformation that maps one N-dimensional distribution to another. The method is iterative, non-linear, and is shown to converge. Only 1D marginal distribution is used in the estimation process, hence involving low computation costs. As an illustration this mapping is applied to color transfer between two images of different contents. The paper also serves as a central focal point for collecting together the research activity in this area and relating it to the important problem of automated color grading", "We use a simple statistical analysis to impose one image's color characteristics on another. We can achieve color correction by choosing an appropriate source image and apply its characteristic to another image.", "This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.", "We introduce a new approach to tone management for photographs. Whereas traditional tone-mapping operators target a neutral and faithful rendition of the input image, we explore pictorial looks by controlling visual qualities such as the tonal balance and the amount of detail. Our method is based on a two-scale non-linear decomposition of an image. We modify the different layers based on their histograms and introduce a technique that controls the spatial variation of detail. We introduce a Poisson correction that prevents potential gradient reversal and preserves detail. In addition to directly controlling the parameters, the user can transfer the look of a model photograph to the picture being edited.", "", "", "This article proposes an original method for grading the colours between different images or shots. The first stage of the method is to find a one-to-one colour mapping that transfers the palette of an example target picture to the original picture. This is performed using an original and parameter free algorithm that is able to transform any N-dimensional probability density function into another one. The proposed algorithm is iterative, non-linear and has a low computational cost. Applying the colour mapping on the original picture allows reproducing the same 'feel' as the target picture, but can also increase the graininess of the original picture, especially if the colour dynamic of the two pictures is very different. 
The second stage of the method is to reduce this grain artefact through an efficient post-processing algorithm that intends to preserve the gradient field of the original picture.", "Photorealistic image stylization concerns transferring style of a reference photo to a content photo with the constraint that the stylized photo should remain photorealistic. While several photorealistic image stylization methods exist, they tend to generate spatially inconsistent stylizations with noticeable artifacts. In this paper, we propose a method to address these issues. The proposed method consists of a stylization step and a smoothing step. While the stylization step transfers the style of the reference photo to the content photo, the smoothing step ensures spatially consistent stylizations. Each of the steps has a closed-form solution and can be computed efficiently. We conduct extensive experimental validations. The results show that the proposed method generates photorealistic stylization outputs that are more preferred by human subjects as compared to those by the competing methods while running much faster. Source code and additional results are available at https://github.com/NVIDIA/FastPhotoStyle." ] }
1907.10274
2964328619
Photorealistic style transfer aims to transfer the style of a reference photo onto a content photo naturally, such that the stylized image looks like a real photo taken by a camera. Existing state-of-the-art methods are prone to spatial structure distortion of the content image and global color inconsistency across different semantic objects, making the results less photorealistic. In this paper, we propose a one-shot mutual Dirichlet network to address these challenging issues. The essential contribution of the work is the realization of a representation scheme that successfully decouples the spatial structure and color information of images, such that the spatial structure can be well preserved during stylization. This representation is discriminative and context-sensitive with respect to semantic objects. It is extracted with a shared sparse Dirichlet encoder. Moreover, such representation is encouraged to be matched between the content and style images for faithful color transfer. The affine-transfer model is embedded in the decoder of the network to facilitate the color transfer. The strong representative and discriminative power of the proposed network enables one-shot learning given only one content-style image pair. Experimental results demonstrate that the proposed method is able to generate photorealistic photos without spatial distortion or abrupt color changes.
The quality of image stylization can be improved by densely matching the low-level or high-level features between the content and style images @cite_1 @cite_29 @cite_33 @cite_34. Gatys et al. @cite_34 demonstrated impressive art style transfer results with a pretrained CNN, matching the correlations of deep features extracted from the CNN based on the Gram matrix. Since then, numerous approaches have been developed to further improve the stylization performance as well as efficiency @cite_31 @cite_0 @cite_37 @cite_40 @cite_5 @cite_12. For example, feed-forward approaches @cite_22 @cite_10 improved the stylization speed by training a decoder network with different loss functions. In order to transfer arbitrary styles to content images, Li et al. @cite_35 adopted the classical signal whitening and coloring transforms (WCTs) on features extracted from a CNN. These methods can generate promising images with different art styles. However, the spatial structures of the content image are not preserved well even when the given style image is a real photo, as shown in Fig. .
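The whitening and coloring transform itself is a short piece of linear algebra; the sketch below is our simplified illustration on raw feature matrices (whereas @cite_35 applies it to multi-level VGG features inside an encoder-decoder): it whitens the content features and re-colors them with the style feature covariance.

import numpy as np

def wct(fc, fs, eps=1e-5):
    """Whitening-coloring transform: make content features `fc` (C x N)
    carry the covariance of style features `fs` (C x M)."""
    fc = fc - fc.mean(axis=1, keepdims=True)
    mu_s = fs.mean(axis=1, keepdims=True)
    fs = fs - mu_s

    # Whiten: remove the content covariance.
    wc, vc = np.linalg.eigh(fc @ fc.T / fc.shape[1] + eps * np.eye(fc.shape[0]))
    white = vc @ np.diag(wc ** -0.5) @ vc.T @ fc

    # Color: impose the style covariance, then restore the style mean.
    ws, vs = np.linalg.eigh(fs @ fs.T / fs.shape[1] + eps * np.eye(fs.shape[0]))
    return vs @ np.diag(ws ** 0.5) @ vs.T @ white + mu_s

fc = np.random.randn(8, 100)   # stand-ins for content/style deep features
fs = np.random.randn(8, 120)
out = wct(fc, fs)
print(np.allclose(np.cov(out, bias=True), np.cov(fs, bias=True), atol=1e-1))  # True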
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_33", "@cite_22", "@cite_29", "@cite_1", "@cite_0", "@cite_40", "@cite_5", "@cite_31", "@cite_34", "@cite_10", "@cite_12" ], "mid": [ "2962772087", "2951924128", "", "2331128040", "2019969451", "2106395586", "", "", "", "2564755245", "2475287302", "2295130376", "2886566843" ], "abstract": [ "Universal style transfer aims to transfer arbitrary visual styles to content images. Existing feed-forward based methods, while enjoying the inference efficiency, are mainly limited by inability of generalizing to unseen styles or compromised visual quality. In this paper, we present a simple yet effective method that tackles these limitations without training on any pre-defined styles. The key ingredient of our method is a pair of feature transforms, whitening and coloring, that are embedded to an image reconstruction network. The whitening and coloring transforms reflect direct matching of feature covariance of the content image to a given style image, which shares similar spirits with the optimization of Gram matrix based cost in neural style transfer. We demonstrate the effectiveness of our algorithm by generating high-quality stylized images with comparisons to a number of recent methods. We also analyze our method by visualizing the whitened features and synthesizing textures by simple feature coloring.", "We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically-meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of \"image analogy\" with features extracted from a Deep Convolutional Neutral Network for matching; we call our technique Deep Image Analogy. A coarse-to-fine strategy is used to compute the nearest-neighbor field for generating the results. We validate the effectiveness of our proposed method in a variety of cases, including style texture transfer, color style swap, sketch painting to photo, and time lapse.", "", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "We introduce \"time hallucination\": synthesizing a plausible image at a different time of day from an input image. 
This challenging task often requires dramatically altering the color appearance of the picture. In this paper, we introduce the first data-driven approach to automatically creating a plausible-looking photo that appears as though it were taken at a different time of day. The time of day is specified by a semantic time label, such as \"night\". Our approach relies on a database of time-lapse videos of various scenes. These videos provide rich information about the variations in color appearance of a scene throughout the day. Our method transfers the color appearance from videos with a similar scene as the input photo. We propose a locally affine model learned from the video for the transfer, allowing our model to synthesize new color data while retaining image details. We show that this model can hallucinate a wide range of different times of day. The model generates a large sparse linear system, which can be solved by off-the-shelf solvers. We validate our methods by synthesizing transforming photos of various outdoor scenes to four times of interest: daytime, the golden hour, the blue hour, and nighttime.", "Headshot portraits are a popular subject in photography but to achieve a compelling visual style requires advanced skills that a casual photographer will not have. Further, algorithms that automate or assist the stylization of generic photographs do not perform well on headshots due to the feature-specific, local retouching that a professional photographer typically applies to generate such portraits. We introduce a technique to transfer the style of an example headshot photo onto a new one. This can allow one to easily reproduce the look of renowned artists. At the core of our approach is a new multiscale technique to robustly transfer the local statistics of an example portrait onto a new one. This technique matches properties such as the local contrast and the overall lighting direction while being tolerant to the unavoidable differences between the faces of two different people. Additionally, because artists sometimes produce entire headshot collections in a common style, we show how to automatically find a good example to use as a reference for a given portrait, enabling style transfer without the user having to search for a suitable example for each input. We demonstrate our approach on data taken in a controlled environment as well as on a large set of photos downloaded from the Internet. We show that we can successfully handle styles by a variety of different artists.", "", "", "", "Artistic style transfer is an image synthesis problem where the content of an image is reproduced with the style of another. Recent works show that a visually appealing style transfer can be achieved by using the hidden activations of a pretrained convolutional neural network. However, existing methods either apply (i) an optimization procedure that works for any style image but is very expensive, or (ii) an efficient feedforward network that only allows a limited number of trained styles. In this work we propose a simpler optimization objective based on local matching that combines the content structure and style textures in a single layer of the pretrained network. We show that our objective has desirable properties such as a simpler optimization landscape, intuitive parameter tuning, and consistent frame-by-frame performance on video. Furthermore, we use 80,000 natural images and 80,000 paintings to train an inverse network that approximates the result of the optimization. 
This results in a procedure for artistic style transfer that is efficient but also allows arbitrary content and style images.", "Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous well-known artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.", "Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods require a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.", "Given a random pair of images, an arbitrary style transfer method extracts the feel from the reference image to synthesize an output based on the look of the other content image. Recent arbitrary style transfer methods transfer second order statistics from reference image onto content image via a multiplication between content image features and a transformation matrix, which is computed from features with a pre-determined algorithm. These algorithms either require computationally expensive operations, or fail to model the feature covariance and produce artifacts in synthesized images. Generalized from these methods, in this work, we derive the form of transformation matrix theoretically and present an arbitrary style transfer approach that learns the transformation matrix with a feed-forward network. Our algorithm is highly efficient yet allows a flexible combination of multi-level styles while preserving content affinity during style transfer process. We demonstrate the effectiveness of our approach on four tasks: artistic style transfer, video and photo-realistic style transfer as well as domain adaptation, including comparisons with the state-of-the-art methods." ] }
1907.10274
2964328619
Photorealistic style transfer aims to transfer the style of a reference photo onto a content photo naturally, such that the stylized image looks like a real photo taken by a camera. Existing state-of-the-art methods are prone to spatial structure distortion of the content image and global color inconsistency across different semantic objects, making the results less photorealistic. In this paper, we propose a one-shot mutual Dirichlet network to address these challenging issues. The essential contribution of the work is the realization of a representation scheme that successfully decouples the spatial structure and color information of images, such that the spatial structure can be well preserved during stylization. This representation is discriminative and context-sensitive with respect to semantic objects. It is extracted with a shared sparse Dirichlet encoder. Moreover, such representation is encouraged to be matched between the content and style images for faithful color transfer. The affine-transfer model is embedded in the decoder of the network to facilitate the color transfer. The strong representative and discriminative power of the proposed network enables one-shot learning given only one content-style image pair. Experimental results demonstrate that the proposed method is able to generate photorealistic photos without spatial distortion or abrupt color changes.
Recently, there have been a few methods specifically designed for photorealistic image stylization @cite_18 @cite_25. Luan et al. @cite_36 preserved the structure of the content image by adopting a color-affine-transfer constraint, and color transfer is performed according to the semantic region. However, the generated results easily suffer abrupt color changes with noticeable artifacts, especially between adjacent regions/segments. Mechrez et al. @cite_25 proposed to maintain the fidelity of the stylized image with a post-processing step based on the screened Poisson equation (SPE). Li et al. @cite_17 improved the spatial consistency of the output image by adopting the manifold ranking algorithm as a post-processing step. He et al. @cite_18 optimized the dense semantic correspondence in the deep feature domain, resulting in smooth local color transfer in the image domain. Although these methods preserve the spatial structure well, the light and color changes across different parts and materials are not smooth. See Fig. for a comparison. Aside from image quality, these methods need to train a network with a large number of parameters on a large dataset.
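To give a flavor of the screened-Poisson post-processing idea of @cite_25, the following toy solves its 1-D analogue (our illustration only; the paper works on 2-D images and the function names and example signals are ours): it finds a signal that stays close to the stylized values while matching the gradients of the content.

import numpy as np

def screened_poisson_1d(stylized, content, lam=5.0):
    """Solve min_f ||f - stylized||^2 + lam * ||D f - D content||^2 for a
    1-D signal, where D is the forward-difference operator; the optimum
    satisfies (I + lam * D^T D) f = stylized + lam * D^T D content."""
    n = len(stylized)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]  # forward differences, (n-1) x n
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, stylized + lam * D.T @ (D @ content))

content = np.linspace(0.0, 1.0, 6)  # smooth content signal
stylized = content + 0.5 + 0.1 * np.array([1, -1, 1, -1, 1, -1])
print(np.round(screened_poisson_1d(stylized, content), 2))
# stays near the stylized values but recovers the content's smooth gradients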
{ "cite_N": [ "@cite_36", "@cite_18", "@cite_25", "@cite_17" ], "mid": [ "2604721644", "2904949811", "2756646128", "2963683323" ], "abstract": [ "This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.", "We propose a new algorithm for color transfer between images that have perceptually similar semantic structures. We aim to achieve a more accurate color transfer that leverages semantically meaningful dense correspondence between images. To accomplish this, our algorithm uses neural representations for matching. Additionally, the color transfer should be spatially variant and globally coherent. Therefore, our algorithm optimizes a local linear model for color transfer satisfying both local and global constraints. Our proposed approach jointly optimizes matching and color transfer, adopting a coarse-to-fine strategy. The proposed method can be successfully extended from one-to-one to one-to-many color transfer. The latter further addresses the problem of mismatching elements of the input image. We validate our proposed method by testing it on a large variety of image content.", "Recent work has shown impressive success in transferring painterly style to images. These approaches, however, fall short of photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. In this paper we propose an approach that takes as input a stylized image and makes it more photorealistic. It relies on the Screened Poisson Equation, maintaining the fidelity of the stylized image while constraining the gradients to those of the original input image. Our method is fast, simple, fully automatic and shows positive progress in making a stylized image photorealistic. Our results exhibit finer details and are less prone to artifacts than the state-of-the-art.", "Photorealistic image stylization concerns transferring style of a reference photo to a content photo with the constraint that the stylized photo should remain photorealistic. While several photorealistic image stylization methods exist, they tend to generate spatially inconsistent stylizations with noticeable artifacts. In this paper, we propose a method to address these issues. The proposed method consists of a stylization step and a smoothing step. While the stylization step transfers the style of the reference photo to the content photo, the smoothing step ensures spatially consistent stylizations. Each of the steps has a closed-form solution and can be computed efficiently. We conduct extensive experimental validations. 
The results show that the proposed method generates photorealistic stylization outputs that are more preferred by human subjects as compared to those by the competing methods while running much faster. Source code and additional results are available at https://github.com/NVIDIA/FastPhotoStyle." ] }
1907.08440
2963361436
Cross-Domain Collaborative Filtering (CDCF) provides a way to alleviate data sparsity and cold-start problems present in recommendation systems by exploiting the knowledge from related domains. Existing CDCF models are either based on matrix factorization or deep neural networks. Either of the techniques in isolation may result in suboptimal performance for the prediction task. Also, most of the existing models face challenges particularly in handling diversity between domains and learning complex non-linear relationships that exist amongst entities (users/items) within and across domains. In this work, we propose an end-to-end neural network model -- NeuCDCF, to address these challenges in a cross-domain setting. More importantly, NeuCDCF follows a wide and deep framework and it learns the representations jointly from both matrix factorization and deep neural networks. We perform experiments on four real-world datasets and demonstrate that our model performs better than state-of-the-art CDCF models.
In the literature of CDR, early works @cite_31 @cite_3 @cite_14 @cite_6 @cite_5 mainly adopt matrix factorization models. In particular, @cite_3 constructs a cluster-level rating matrix (codebook) from user-item rating patterns and, through it, establishes links to transfer knowledge across domains. A similar approach with an extension to soft membership was proposed in @cite_14 . Collective matrix factorization (CMF) @cite_31 was proposed for the case where entities participate in more than one relation. However, as many studies have pointed out, MF models may not handle the non-linearity and complex relationships present in the system @cite_42 @cite_40 @cite_35 .
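As a concrete illustration of the shared-entity idea behind CMF @cite_31 , the toy sketch below factorizes two rating matrices over the same user set while tying them through a common user-factor matrix; the alternating gradient updates, masking convention, and hyperparameters are simplifications for illustration rather than the original Bregman-divergence formulation.

```python
import numpy as np

def cmf(R1, R2, k=8, lr=0.01, reg=0.1, epochs=200, seed=0):
    """Toy collective MF: R1 (users x items_A) and R2 (users x items_B)
    share the user factors U; zero entries are treated as unobserved."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((R1.shape[0], k))
    V1 = 0.1 * rng.standard_normal((R1.shape[1], k))
    V2 = 0.1 * rng.standard_normal((R2.shape[1], k))
    M1, M2 = (R1 > 0), (R2 > 0)        # observation masks
    for _ in range(epochs):
        E1 = M1 * (R1 - U @ V1.T)      # errors on observed entries only
        E2 = M2 * (R2 - U @ V2.T)
        # the shared user factors receive gradient signal from both domains
        U += lr * (E1 @ V1 + E2 @ V2 - reg * U)
        V1 += lr * (E1.T @ U - reg * V1)
        V2 += lr * (E2.T @ U - reg * V2)
    return U, V1, V2
```

The coupling through U is what lets a sparse domain borrow statistical strength from a denser one.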
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_42", "@cite_6", "@cite_3", "@cite_40", "@cite_5", "@cite_31" ], "mid": [ "2605350416", "2118338035", "2739273093", "2129679514", "143867266", "1956916606", "", "2117420919" ], "abstract": [ "In recent years, deep neural networks have yielded immense success on speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively less scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation --- collaborative filtering --- on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, they primarily used it to model auxiliary information, such as textual descriptions of items and acoustic features of musics. When it comes to model the key factor in collaborative filtering --- the interaction between user and item features, they still resorted to matrix factorization and applied an inner product on the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance.", "Cross-domain collaborative filtering solves the sparsity problem by transferring rating knowledge across multiple domains. In this paper, we propose a rating-matrix generative model (RMGM) for effective cross-domain collaborative filtering. We first show that the relatedness across multiple rating matrices can be established by finding a shared implicit cluster-level rating matrix, which is next extended to a cluster-level rating model. Consequently, a rating matrix of any related task can be viewed as drawing a set of users and items from a user-item joint mixture model as well as drawing the corresponding ratings from the cluster-level rating model. The combination of these two models gives the RMGM, which can be used to fill the missing ratings for both existing and new users. A major advantage of RMGM is that it can share the knowledge by pooling the rating data from multiple tasks even when the users and items of these tasks do not overlap. We evaluate the RMGM empirically on three real-world collaborative filtering data sets to show that RMGM can outperform the individual models trained separately.", "With the growing volume of online information, recommender systems have been an effective strategy to overcome information overload. The utility of recommender systems cannot be overstated, given their widespread adoption in many web applications, along with their potential impact to ameliorate many problems related to over-choice. In recent years, deep learning has garnered considerable interest in many research fields such as computer vision and natural language processing, owing not only to stellar performance but also to the attractive property of learning feature representations from scratch. 
The influence of deep learning is also pervasive, recently demonstrating its effectiveness when applied to information retrieval and recommender systems research. The field of deep learning in recommender system is flourishing. This article aims to provide a comprehensive review of recent research efforts on deep learning-based recommender systems. More concretely, we provide and devise a taxonomy of deep learning-based recommendation models, along with a comprehensive summary of the state of the art. Finally, we expand on current trends and provide new perspectives pertaining to this new and exciting development of the field.", "Recommender systems always aim to provide recommendations for a user based on historical ratings collected from a single domain (e.g., movies or books) only, which may suffer from the data sparsity problem. Recently, several recommendation models have been proposed to transfer knowledge by pooling together the rating data from multiple domains to alleviate the sparsity problem, which typically assume that multiple domains share a latent common rating pattern based on the user-item co-clustering. In practice, however, the related domains do not necessarily share such a common rating pattern, and diversity among the related domains might outweigh the advantages of such common pattern, which may result in performance degradations. In this paper, we propose a novel cluster-level based latent factor model to enhance the cross-domain recommendation, which can not only learn the common rating pattern shared across domains with the flexibility in controlling the optimal level of sharing, but also learn the domain-specific rating patterns of users in each domain that involve the discriminative information propitious to performance improvement. To this end, the proposed model is formulated as an optimization problem based on joint nonnegative matrix tri-factorization and an efficient alternating minimization algorithm is developed with convergence guarantee. Extensive experiments on several real world datasets suggest that our proposed model outperforms the state-of-the-art methods for the cross-domain recommendation task.", "The sparsity problem in collaborative filtering (CF) is a major bottleneck for most CF methods. In this paper, we consider a novel approach for alleviating the sparsity problem in CF by transferring useritem rating patterns from a dense auxiliary rating matrix in other domains (e.g., a popular movie rating website) to a sparse rating matrix in a target domain (e.g., a new book rating website). We do not require that the users and items in the two domains be identical or even overlap. Based on the limited ratings in the target matrix, we establish a bridge between the two rating matrices at a cluster-level of user-item rating patterns in order to transfer more useful knowledge from the auxiliary task domain. We first compress the ratings in the auxiliary rating matrix into an informative and yet compact cluster-level rating pattern representation referred to as a codebook. Then, we propose an efficient algorithm for reconstructing the target rating matrix by expanding the codebook. We perform extensive empirical tests to show that our method is effective in addressing the data sparsity problem by transferring the useful knowledge from the auxiliary tasks, as compared to many state-of-the-art CF methods.", "The Cross Domain Collaborative Filtering (CDCF) exploits the rating matrices from multiple domains to make better recommendations. 
Existing CDCF methods adopt the substructure sharing technique that can only transfer linearly correlated knowledge between domains. In this paper, we propose the notion of Hyper-Structure Transfer (HST) that requires the rating matrices to be explained by the projections of some more complex structure, called the hyper-structure, shared by all domains, and thus allows the nonlinearly correlated knowledge between domains to be identified and transferred. Extensive experiments are conducted and the results demonstrate the effectiveness of our HST models empirically.", "", "Relational learning is concerned with predicting unknown values of a relation, given a database of entities and observed relations among entities. An example of relational learning is movie rating prediction, where entities could include users, movies, genres, and actors. Relations encode users' ratings of movies, movies' genres, and actors' roles in movies. A common prediction technique given one pairwise relation, for example a #users x #movies ratings matrix, is low-rank matrix factorization. In domains with multiple relations, represented as multiple matrices, we may improve predictive accuracy by exploiting information from one relation while predicting another. To this end, we propose a collective matrix factorization model: we simultaneously factor several matrices, sharing parameters among factors when an entity participates in multiple relations. Each relation can have a different value type and error distribution; so, we allow nonlinear relationships between the parameters and outputs, using Bregman divergences to measure error. We extend standard alternating projection algorithms to our model, and derive an efficient Newton update for the projection. Furthermore, we propose stochastic optimization methods to deal with large, sparse matrices. Our model generalizes several existing matrix factorization methods, and therefore yields new large-scale optimization algorithms for these problems. Our model can handle any pairwise relational schema and a wide variety of error models. We demonstrate its efficiency, as well as the benefit of sharing parameters among relations." ] }
1907.08440
2963361436
Cross-Domain Collaborative Filtering (CDCF) provides a way to alleviate data sparsity and cold-start problems present in recommendation systems by exploiting the knowledge from related domains. Existing CDCF models are either based on matrix factorization or deep neural networks. Either of the techniques in isolation may result in suboptimal performance for the prediction task. Also, most of the existing models face challenges particularly in handling diversity between domains and learning complex non-linear relationships that exist amongst entities (users/items) within and across domains. In this work, we propose an end-to-end neural network model -- NeuCDCF, to address these challenges in a cross-domain setting. More importantly, NeuCDCF follows a wide and deep framework and it learns the representations jointly from both matrix factorization and deep neural networks. We perform experiments on four real-world datasets and demonstrate that our model performs better than state-of-the-art CDCF models.
On the other hand, there has recently been a surge of methods that explore deep learning networks for recommender systems @cite_42 . Most of the models in this category focus on utilizing neural networks to extract embeddings from side information such as reviews @cite_36 , descriptions @cite_10 , content information @cite_27 , images @cite_24 , and knowledge graphs @cite_32 . Nevertheless, many of these models trace back to matrix factorization models; that is, in the absence of side information, these models distill to either MF @cite_25 or PMF @cite_29 .
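To make the distillation argument concrete, the sketch below shows the core that such models reduce to when no side information is present: PMF @cite_29 trained by MAP estimation, which amounts to SGD on squared error with an L2 (Gaussian-prior) penalty on the factors; the hyperparameter values are placeholders.

```python
import numpy as np

def pmf_sgd(ratings, n_users, n_items, k=16, lr=0.005, reg=0.05, epochs=20, seed=0):
    """ratings: list of (user, item, rating) triples.
    MAP estimation of PMF == squared loss + L2 penalty on latent factors."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))  # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))  # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - P[u] @ Q[i]                   # prediction error
            P[u] += lr * (e * Q[i] - reg * P[u])  # gradient step for user u
            Q[i] += lr * (e * P[u] - reg * Q[i])  # gradient step for item i
    return P, Q
```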
{ "cite_N": [ "@cite_36", "@cite_29", "@cite_42", "@cite_32", "@cite_24", "@cite_27", "@cite_10", "@cite_25" ], "mid": [ "2575006718", "2137245235", "2739273093", "2509893387", "2963655167", "", "2515144511", "2054141820" ], "abstract": [ "A large amount of information exists in reviews written by users. This source of information has been ignored by most of the current recommender systems while it can potentially alleviate the sparsity problem and improve the quality of recommendations. In this paper, we present a deep model to learn item properties and user behaviors jointly from review text. The proposed model, named Deep Cooperative Neural Networks (DeepCoNN), consists of two parallel neural networks coupled in the last layers. One of the networks focuses on learning user behaviors exploiting reviews written by the user, and the other one learns item properties from the reviews written for the item. A shared layer is introduced on the top to couple these two networks together. The shared layer enables latent factors learned for users and items to interact with each other in a manner similar to factorization machine techniques. Experimental results demonstrate that DeepCoNN significantly outperforms all baseline recommender systems on a variety of datasets.", "Many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings. In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, performs well on the large, sparse, and very imbalanced Netflix dataset. We further extend the PMF model to include an adaptive prior on the model parameters and show how the model capacity can be controlled automatically. Finally, we introduce a constrained version of the PMF model that is based on the assumption that users who have rated similar sets of movies are likely to have similar preferences. The resulting model is able to generalize considerably better for users with very few ratings. When the predictions of multiple PMF models are linearly combined with the predictions of Restricted Boltzmann Machines models, we achieve an error rate of 0.8861, that is nearly 7 better than the score of Netflix's own system.", "With the growing volume of online information, recommender systems have been an effective strategy to overcome information overload. The utility of recommender systems cannot be overstated, given their widespread adoption in many web applications, along with their potential impact to ameliorate many problems related to over-choice. In recent years, deep learning has garnered considerable interest in many research fields such as computer vision and natural language processing, owing not only to stellar performance but also to the attractive property of learning feature representations from scratch. The influence of deep learning is also pervasive, recently demonstrating its effectiveness when applied to information retrieval and recommender systems research. The field of deep learning in recommender system is flourishing. This article aims to provide a comprehensive review of recent research efforts on deep learning-based recommender systems. More concretely, we provide and devise a taxonomy of deep learning-based recommendation models, along with a comprehensive summary of the state of the art. 
Finally, we expand on current trends and provide new perspectives pertaining to this new and exciting development of the field.", "Among different recommendation techniques, collaborative filtering usually suffer from limited performance due to the sparsity of user-item interactions. To address the issues, auxiliary information is usually used to boost the performance. Due to the rapid collection of information on the web, the knowledge base provides heterogeneous information including both structured and unstructured data with different semantics, which can be consumed by various applications. In this paper, we investigate how to leverage the heterogeneous information in a knowledge base to improve the quality of recommender systems. First, by exploiting the knowledge base, we design three components to extract items' semantic representations from structural content, textual content and visual content, respectively. To be specific, we adopt a heterogeneous network embedding method, termed as TransR, to extract items' structural representations by considering the heterogeneity of both nodes and relationships. We apply stacked denoising auto-encoders and stacked convolutional auto-encoders, which are two types of deep learning based embedding techniques, to extract items' textual representations and visual representations, respectively. Finally, we propose our final integrated framework, which is termed as Collaborative Knowledge Base Embedding (CKE), to jointly learn the latent representations in collaborative filtering as well as items' semantic representations from the knowledge base. To evaluate the performance of each embedding component as well as the whole system, we conduct extensive experiments with two real-world datasets from different scenarios. The results reveal that our approaches outperform several widely adopted state-of-the-art recommendation methods.", "Modern recommender systems model people and items by discovering or 'teasing apart' the underlying dimensions that encode the properties of items and users' preferences toward them. Critically, such dimensions are uncovered based on user feedback, often in implicit form (such as purchase histories, browsing logs, etc.); in addition, some recommender systems make use of side information, such as product attributes, temporal information, or review text. However one important feature that is typically ignored by existing personalized recommendation and ranking methods is the visual appearance of the items being considered. In this paper we propose a scalable factorization model to incorporate visual signals into predictors of people's opinions, which we apply to a selection of large, real-world datasets. We make use of visual features extracted from product images using (pre-trained) deep networks, on top of which we learn an additional layer that uncovers the visual dimensions that best explain the variation in people's feedback. This not only leads to significantly more accurate personalized ranking methods, but also helps to alleviate cold start issues, and qualitatively to analyze the visual dimensions that influence people's opinions.", "", "Sparseness of user-to-item rating data is one of the major factors that deteriorate the quality of recommender system. To handle the sparsity problem, several recommendation techniques have been proposed that additionally consider auxiliary information to improve rating prediction accuracy. 
In particular, when rating data is sparse, document modeling-based approaches have improved the accuracy by additionally utilizing textual data such as reviews, abstracts, or synopses. However, due to the inherent limitation of the bag-of-words model, they have difficulties in effectively utilizing contextual information of the documents, which leads to shallow understanding of the documents. This paper proposes a novel context-aware recommendation model, convolutional matrix factorization (ConvMF) that integrates convolutional neural network (CNN) into probabilistic matrix factorization (PMF). Consequently, ConvMF captures contextual information of documents and further enhances the rating prediction accuracy. Our extensive evaluations on three real-world datasets show that ConvMF significantly outperforms the state-of-the-art recommendation models even when the rating data is extremely sparse. We also demonstrate that ConvMF successfully captures subtle contextual difference of a word in a document. Our implementation and datasets are available at http://dm.postech.ac.kr/ConvMF.", "As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels." ] }
1907.08440
2963361436
Cross-Domain Collaborative Filtering (CDCF) provides a way to alleviate data sparsity and cold-start problems present in recommendation systems by exploiting the knowledge from related domains. Existing CDCF models are either based on matrix factorization or deep neural networks. Either of the techniques in isolation may result in suboptimal performance for the prediction task. Also, most of the existing models face challenges particularly in handling diversity between domains and learning complex non-linear relationships that exist amongst entities (users/items) within and across domains. In this work, we propose an end-to-end neural network model -- NeuCDCF, to address these challenges in a cross-domain setting. More importantly, NeuCDCF follows a wide and deep framework and it learns the representations jointly from both matrix factorization and deep neural networks. We perform experiments on four real-world datasets and demonstrate that our model performs better than state-of-the-art CDCF models.
More recently, to combine the advantages of both matrix factorization models and deep networks such as the multi-layer perceptron (MLP), some models have been proposed @cite_20 @cite_35 @cite_33 that learn representations from ratings alone. These models combine wide and deep networks to provide better representations. Autoencoders, stacked denoising autoencoders @cite_19 @cite_21 @cite_38 @cite_7 , restricted Boltzmann machines @cite_8 , and recurrent neural networks have also been exploited for recommendation systems. However, the above neural network models use only the interactions between users and items from a single domain. Hence, they suffer from the aforementioned sparsity and cold-start issues.
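The wide-and-deep combination can be summarized by a forward pass in the style of NeuMF @cite_35 : an element-wise product of user and item embeddings (the wide, MF-like branch) is concatenated with an MLP over separate embeddings (the deep branch) before a final sigmoid scoring layer. The layer sizes and randomly initialized parameters below are placeholders for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def neumf_forward(p_u, q_i, p_u_mlp, q_i_mlp, W1, b1, W2, b2, h):
    """One NeuMF-style prediction: wide branch = element-wise product of
    GMF embeddings, deep branch = MLP on concatenated MLP embeddings."""
    wide = p_u * q_i
    deep = relu(W2 @ relu(W1 @ np.concatenate([p_u_mlp, q_i_mlp]) + b1) + b2)
    fused = np.concatenate([wide, deep])     # wide & deep fusion layer
    return 1.0 / (1.0 + np.exp(-h @ fused))  # sigmoid interaction score

# toy shapes: 8 GMF dims, MLP 16 -> 12 -> 6
rng = np.random.default_rng(0)
k = 8
p_u, q_i = rng.standard_normal(k), rng.standard_normal(k)
p_u_mlp, q_i_mlp = rng.standard_normal(k), rng.standard_normal(k)
W1, b1 = 0.1 * rng.standard_normal((12, 2 * k)), np.zeros(12)
W2, b2 = 0.1 * rng.standard_normal((6, 12)), np.zeros(6)
h = rng.standard_normal(k + 6)
print(neumf_forward(p_u, q_i, p_u_mlp, q_i_mlp, W1, b1, W2, b2, h))
```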
{ "cite_N": [ "@cite_35", "@cite_38", "@cite_33", "@cite_7", "@cite_8", "@cite_21", "@cite_19", "@cite_20" ], "mid": [ "2605350416", "2253995343", "2740920897", "2725606191", "2099866409", "2615395371", "1720514416", "2475334473" ], "abstract": [ "In recent years, deep neural networks have yielded immense success on speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively less scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation --- collaborative filtering --- on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, they primarily used it to model auxiliary information, such as textual descriptions of items and acoustic features of musics. When it comes to model the key factor in collaborative filtering --- the interaction between user and item features, they still resorted to matrix factorization and applied an inner product on the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance.", "Most real-world recommender services measure their performance based on the top-N results shown to the end users. Thus, advances in top-N recommendation have far-ranging consequences in practical applications. In this paper, we present a novel method, called Collaborative Denoising Auto-Encoder (CDAE), for top-N recommendation that utilizes the idea of Denoising Auto-Encoders. We demonstrate that the proposed model is a generalization of several well-known collaborative filtering models but with more flexible components. Thorough experiments are conducted to understand the performance of CDAE under various component settings. Furthermore, experimental results on several public datasets demonstrate that CDAE consistently outperforms state-of-the-art top-N recommendation methods on a variety of common evaluation metrics.", "Recommender systems usually make personalized recommendation with user-item interaction ratings, implicit feedback and auxiliary information. Matrix factorization is the basic idea to predict a personalized ranking over a set of items for an individual user with the similarities among users and items. In this paper, we propose a novel matrix factorization model with neural network architecture. Firstly, we construct a user-item matrix with explicit ratings and non-preference implicit feedback. With this matrix as the input, we present a deep structure learning architecture to learn a common low dimensional space for the representations of users and items. Secondly, we design a new loss function based on binary cross entropy, in which we consider both explicit ratings and implicit feedback for a better optimization. 
The experimental results show the effectiveness of both our proposed model and the loss function. On several benchmark datasets, our model outperformed other state-of-the-art methods. We also conduct extensive experiments to evaluate the performance within different experimental settings.", "Modern recommender systems usually employ collaborative filtering with rating information to recommend items to users due to its successful performance. However, because of the drawbacks of collaborative-based methods such as sparsity, cold start, etc., more attention has been drawn to hybrid methods that consider both the rating and content information. Most of the previous works in this area cannot learn a good representation from content for recommendation task or consider only text modality of the content, thus their methods are very limited in current multimedia scenario. This paper proposes a Bayesian generative model called collaborative variational autoencoder (CVAE) that considers both rating and content for recommendation in multimedia scenario. The model learns deep latent representations from content data in an unsupervised manner and also learns implicit relationships between items and users from both content and rating. Unlike previous works with denoising criteria, the proposed CVAE learns a latent distribution for content in latent space instead of observation space through an inference network and can be easily extended to other multimedia modalities other than text. Experiments show that CVAE is able to significantly outperform the state-of-the-art recommendation methods with more robust performance.", "Most of the existing approaches to collaborative filtering cannot handle very large data sets. In this paper we show how a class of two-layer undirected graphical models, called Restricted Boltzmann Machines (RBM's), can be used to model tabular data, such as user's ratings of movies. We present efficient learning and inference procedures for this class of models and demonstrate that RBM's can be successfully applied to the Netflix data set, containing over 100 million user movie ratings. We also show that RBM's slightly outperform carefully-tuned SVD models. When the predictions of multiple RBM models and multiple SVD models are linearly combined, we achieve an error rate that is well over 6% better than the score of Netflix's own system.", "Neural networks have not been widely studied in Collaborative Filtering. For instance, no paper using neural networks was published during the Netflix Prize apart from prior work on the Restricted Boltzmann Machine (RBM) [14]. While deep learning has tremendous success in image and speech recognition, sparse inputs received less attention and remains a challenging problem for neural networks. Nonetheless, sparse inputs are critical for collaborative filtering. In this paper, we introduce a neural network architecture which computes a non-linear matrix factorization from sparse rating inputs. We show experimentally on the movieLens and jester dataset that our method performs as well as the best collaborative filtering algorithms. We provide an implementation of the algorithm as a reusable plugin for Torch [4], a popular neural network framework.", "This paper proposes AutoRec, a novel autoencoder framework for collaborative filtering (CF). 
Empirically, AutoRec's compact and efficiently trainable model outperforms state-of-the-art CF techniques (biased matrix factorization, RBM-CF and LLORMA) on the Movielens and Netflix datasets.", "Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations are effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks can generalize better to unseen feature combinations through low-dimensional dense embeddings learned for the sparse features. However, deep neural networks with embeddings can over-generalize and recommend less relevant items when the user-item interactions are sparse and high-rank. In this paper, we present Wide & Deep learning---jointly trained wide linear models and deep neural networks---to combine the benefits of memorization and generalization for recommender systems. We productionized and evaluated the system on Google Play, a commercial mobile app store with over one billion active users and over one million apps. Online experiment results show that Wide & Deep significantly increased app acquisitions compared with wide-only and deep-only models. We have also open-sourced our implementation in TensorFlow." ] }
1907.08440
2963361436
Cross-Domain Collaborative Filtering (CDCF) provides a way to alleviate data sparsity and cold-start problems present in recommendation systems by exploiting the knowledge from related domains. Existing CDCF models are either based on matrix factorization or deep neural networks. Either of the techniques in isolation may result in suboptimal performance for the prediction task. Also, most of the existing models face challenges particularly in handling diversity between domains and learning complex non-linear relationships that exist amongst entities (users/items) within and across domains. In this work, we propose an end-to-end neural network model -- NeuCDCF, to address these challenges in a cross-domain setting. More importantly, NeuCDCF follows a wide and deep framework and it learns the representations jointly from both matrix factorization and deep neural networks. We perform experiments on four real-world datasets and demonstrate that our model performs better than state-of-the-art CDCF models.
Though the use of multiple related domains and neural networks for recommendations has been studied and justified in many works @cite_42 , very few attempts have been made to use neural networks in the cross-domain recommendation setting @cite_23 @cite_0 @cite_13 @cite_43 . In particular, MV-DNN @cite_23 uses an MLP to learn shared representations of the entities participating in multiple domains. A factorization-based multi-view neural network was proposed in CCCFNet @cite_0 , where the representations learned from multiple domains are coupled with the representations learned from content information. A two-stage approach was followed in @cite_13 @cite_43 : in the first stage, embeddings are learned for users, and in the second stage, a function is learned that maps user embeddings from the source domain to the target domain.
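The two-stage idea of @cite_13 can be sketched as follows: after factorizing each domain separately (stage one), a small network is fit on the overlapping users to map their source-domain embeddings to their target-domain embeddings (stage two). The one-hidden-layer network, tanh activation, and full-batch training below are illustrative stand-ins, not the exact EMCDR configuration.

```python
import numpy as np

def fit_mapping(U_src, U_tgt, hidden=32, lr=0.01, epochs=500, seed=0):
    """Stage two: learn f with f(U_src[u]) ~= U_tgt[u] for overlapping users.
    U_src, U_tgt: (n_overlap, k) embeddings from separately trained MF models."""
    rng = np.random.default_rng(seed)
    n, k = U_src.shape
    W1, b1 = 0.1 * rng.standard_normal((hidden, k)), np.zeros(hidden)
    W2, b2 = 0.1 * rng.standard_normal((k, hidden)), np.zeros(k)
    for _ in range(epochs):
        H = np.tanh(U_src @ W1.T + b1)       # (n, hidden)
        err = (H @ W2.T + b2) - U_tgt        # gradient of 0.5 * MSE
        dH = (err @ W2) * (1.0 - H ** 2)     # back-prop through tanh
        W2 -= lr * (err.T @ H) / n; b2 -= lr * err.mean(0)
        W1 -= lr * (dH.T @ U_src) / n; b1 -= lr * dH.mean(0)
    return lambda u: np.tanh(u @ W1.T + b1) @ W2.T + b2

# a cold-start user in the target domain is then scored with the mapped
# embedding, e.g.: r_hat = fit_mapping(U_src, U_tgt)(U_src[u]) @ V_tgt[i]
```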
{ "cite_N": [ "@cite_42", "@cite_0", "@cite_43", "@cite_23", "@cite_13" ], "mid": [ "2739273093", "2612388534", "2808716093", "2114079787", "2740605635" ], "abstract": [ "With the growing volume of online information, recommender systems have been an effective strategy to overcome information overload. The utility of recommender systems cannot be overstated, given their widespread adoption in many web applications, along with their potential impact to ameliorate many problems related to over-choice. In recent years, deep learning has garnered considerable interest in many research fields such as computer vision and natural language processing, owing not only to stellar performance but also to the attractive property of learning feature representations from scratch. The influence of deep learning is also pervasive, recently demonstrating its effectiveness when applied to information retrieval and recommender systems research. The field of deep learning in recommender system is flourishing. This article aims to provide a comprehensive review of recent research efforts on deep learning-based recommender systems. More concretely, we provide and devise a taxonomy of deep learning-based recommendation models, along with a comprehensive summary of the state of the art. Finally, we expand on current trends and provide new perspectives pertaining to this new and exciting development of the field.", "To overcome data sparsity problem, we propose a cross domain recommendation system named CCCFNet which can combine collaborative filtering and content-based filtering in a unified framework. We first introduce a factorization framework to tie CF and content-based filtering together. Then we find that the MAP estimation of this framework can be embedded into a multi-view neural network. Through this neural network embedding the framework can be further extended by advanced deep learning techniques.", "", "Recent online services rely heavily on automatic personalization to recommend relevant content to a large number of users. This requires systems to scale promptly to accommodate the stream of new users visiting the online services for the first time. In this work, we propose a content-based recommendation system to address both the recommendation quality and the system scalability. We propose to use a rich feature set to represent users, according to their web browsing history and search queries. We use a Deep Learning approach to map users and items to a latent space where the similarity between users and their preferred items is maximized. We extend the model to jointly learn from features of items from different domains and user features by introducing a multi-view Deep Learning model. We show how to make this rich-feature based user representation scalable by reducing the dimension of the inputs and the amount of training data. The rich user feature representation allows the model to learn relevant user behavior patterns and give useful recommendations for users who do not have any interaction with the service, given that they have adequate search and browsing history. The combination of different domains into a single model for learning helps improve the recommendation quality across all the domains, as well as having a more compact and a semantically richer user latent feature vector. We experiment with our approach on three real-world recommendation systems acquired from different sources of Microsoft products: Windows Apps recommendation, News recommendation, and Movie TV recommendation. 
Results indicate that our approach is significantly better than the state-of-the-art algorithms (up to 49% enhancement on existing users and 115% enhancement on new users). In addition, experiments on a publicly open data set also indicate the superiority of our method in comparison with traditional generative topic models, for modeling cross-domain recommender systems. Scalability analysis show that our multi-view DNN model can easily scale to encompass millions of users and billions of item entries. Experimental results also confirm that combining features from all domains produces much better performance than building separate models for each domain.", "Data sparsity is one of the most challenging problems for recommender systems. One promising solution to this problem is cross-domain recommendation, i.e., leveraging feedbacks or ratings from multiple domains to improve recommendation performance in a collective manner. In this paper, we propose an Embedding and Mapping framework for Cross-Domain Recommendation, called EMCDR. The proposed EMCDR framework distinguishes itself from existing cross-domain recommendation models in two aspects. First, a multi-layer perceptron is used to capture the nonlinear mapping function across domains, which offers high flexibility for learning domain-specific features of entities in each domain. Second, only the entities with sufficient data are used to learn the mapping function, guaranteeing its robustness to noise caused by data sparsity in single domain. Extensive experiments on two cross-domain recommendation scenarios demonstrate that EMCDR significantly outperforms state-of-the-art cross-domain recommendation methods." ] }
1907.08440
2963361436
Cross-Domain Collaborative Filtering (CDCF) provides a way to alleviate data sparsity and cold-start problems present in recommendation systems by exploiting the knowledge from related domains. Existing CDCF models are either based on matrix factorization or deep neural networks. Either of the techniques in isolation may result in suboptimal performance for the prediction task. Also, most of the existing models face challenges particularly in handling diversity between domains and learning complex non-linear relationships that exist amongst entities (users/items) within and across domains. In this work, we propose an end-to-end neural network model -- NeuCDCF, to address these challenges in a cross-domain setting. More importantly, NeuCDCF follows a wide and deep framework and it learns the representations jointly from both matrix factorization and deep neural networks. We perform experiments on four real-world datasets and demonstrate that our model performs better than state-of-the-art CDCF models.
While the models @cite_23 @cite_0 @cite_13 @cite_43 consider learning embeddings jointly, they completely ignore domain-specific representations for the shared users or items. The performance of these models @cite_23 @cite_0 is heavily dependent on the relatedness of the domains. In contrast, our proposed model learns domain-specific representations that significantly improve the prediction performance. Further, @cite_0 relies on content information to bridge the source and target domains. Besides, all of these models @cite_23 @cite_0 @cite_13 are based on either wide or deep networks, but not both. We are also aware of other models proposed for cross-domain settings @cite_30 @cite_28 @cite_2 @cite_0 . However, they differ from the research scope of ours because they bridge the source and target domains using available side information.
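To illustrate what a domain-specific representation for a shared user adds over a purely shared one, a prediction can combine a user embedding common to all domains with a per-domain component. This schematic is not NeuCDCF's actual architecture; it only makes the distinction explicit:

```python
import numpy as np

def predict(u, i, d, U_shared, U_dom, Q):
    """Score of user u on item i in domain d: the user is represented by the
    concatenation of a shared embedding and a domain-specific embedding."""
    user_vec = np.concatenate([U_shared[u], U_dom[d][u]])
    return user_vec @ Q[d][i]

# toy instantiation with random factors
rng = np.random.default_rng(0)
k, n_users, n_items, n_domains = 4, 10, 15, 2
U_shared = rng.standard_normal((n_users, k))
U_dom = [rng.standard_normal((n_users, k)) for _ in range(n_domains)]
Q = [rng.standard_normal((n_items, 2 * k)) for _ in range(n_domains)]
print(predict(3, 7, 1, U_shared, U_dom, Q))
```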
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_0", "@cite_43", "@cite_23", "@cite_2", "@cite_13" ], "mid": [ "2791324866", "2280921826", "2612388534", "2808716093", "2114079787", "2808396937", "2740605635" ], "abstract": [ "The behavior of users in certain services could be a clue that can be used to infer their preferences and may be used to make recommendations for other services they have never used. However, the cross-domain relationships between items and user consumption patterns are not simple, especially when there are few or no common users and items across domains. To address this problem, we propose a content-based cross-domain recommendation method for cold-start users that does not require user- and item- overlap. We formulate recommendation as extreme multi-class classification where labels (items) corresponding to the users are predicted. With this formulation, the problem is reduced to a domain adaptation setting, in which a classifier trained in the source domain is adapted to the target domain. For this, we construct a neural network that combines an architecture for domain adaptation, Domain Separation Network, with a denoising autoencoder for item representation. We assess the performance of our approach in experiments on a pair of data sets collected from movie and news services of Yahoo! JAPAN and show that our approach outperforms several baseline methods including a cross-domain collaborative filtering method.", "Most existing cross-domain recommendation algorithms focus on modeling ratings, while ignoring review texts. The review text, however, contains rich information, which can be utilized to alleviate data sparsity limitations, and interpret transfer patterns. In this paper, we investigate how to utilize the review text to improve cross-domain collaborative filtering models. The challenge lies in the existence of non-linear properties in some transfer patterns. Given this, we extend previous transfer learning models in collaborative filtering, from linear mapping functions to non-linear ones, and propose a cross-domain recommendation framework with the review text incorporated. Experimental verifications have demonstrated, for new users with sparse feedback, utilizing the review text obtains 10 improvement in the AUC metric, and the nonlinear method outperforms the linear ones by 4 .", "To overcome data sparsity problem, we propose a cross domain recommendation system named CCCFNet which can combine collaborative filtering and content-based filtering in a unified framework. We first introduce a factorization framework to tie CF and content-based filtering together. Then we find that the MAP estimation of this framework can be embedded into a multi-view neural network. Through this neural network embedding the framework can be further extended by advanced deep learning techniques.", "", "Recent online services rely heavily on automatic personalization to recommend relevant content to a large number of users. This requires systems to scale promptly to accommodate the stream of new users visiting the online services for the first time. In this work, we propose a content-based recommendation system to address both the recommendation quality and the system scalability. We propose to use a rich feature set to represent users, according to their web browsing history and search queries. We use a Deep Learning approach to map users and items to a latent space where the similarity between users and their preferred items is maximized. 
We extend the model to jointly learn from features of items from different domains and user features by introducing a multi-view Deep Learning model. We show how to make this rich-feature based user representation scalable by reducing the dimension of the inputs and the amount of training data. The rich user feature representation allows the model to learn relevant user behavior patterns and give useful recommendations for users who do not have any interaction with the service, given that they have adequate search and browsing history. The combination of different domains into a single model for learning helps improve the recommendation quality across all the domains, as well as having a more compact and a semantically richer user latent feature vector. We experiment with our approach on three real-world recommendation systems acquired from different sources of Microsoft products: Windows Apps recommendation, News recommendation, and Movie TV recommendation. Results indicate that our approach is significantly better than the state-of-the-art algorithms (up to 49% enhancement on existing users and 115% enhancement on new users). In addition, experiments on a publicly open data set also indicate the superiority of our method in comparison with traditional generative topic models, for modeling cross-domain recommender systems. Scalability analysis show that our multi-view DNN model can easily scale to encompass millions of users and billions of item entries. Experimental results also confirm that combining features from all domains produces much better performance than building separate models for each domain.", "", "Data sparsity is one of the most challenging problems for recommender systems. One promising solution to this problem is cross-domain recommendation, i.e., leveraging feedbacks or ratings from multiple domains to improve recommendation performance in a collective manner. In this paper, we propose an Embedding and Mapping framework for Cross-Domain Recommendation, called EMCDR. The proposed EMCDR framework distinguishes itself from existing cross-domain recommendation models in two aspects. First, a multi-layer perceptron is used to capture the nonlinear mapping function across domains, which offers high flexibility for learning domain-specific features of entities in each domain. Second, only the entities with sufficient data are used to learn the mapping function, guaranteeing its robustness to noise caused by data sparsity in single domain. Extensive experiments on two cross-domain recommendation scenarios demonstrate that EMCDR significantly outperforms state-of-the-art cross-domain recommendation methods." ] }
1907.08661
2963611731
Searching sounds by text labels is often difficult, as text descriptions cannot describe the audio content in detail. Query by vocal imitation bridges this gap and provides a novel way to search for sounds. Several algorithms for sound search by vocal imitation have been proposed and evaluated in a simulation environment; however, they have not been deployed in a real search engine nor evaluated by real users. This pilot work conducts a subjective study to compare these two approaches to sound search and tries to answer the question of which approach works better for what kinds of sounds. To do so, we developed two web-based search engines for sound, one by vocal imitation (Vroom!) and the other by text description (TextSearch). We also developed an experimental framework to host these engines and collect statistics of user behaviors and ratings. Results showed that Vroom! received significantly higher search satisfaction ratings than TextSearch did for sound categories that were difficult for subjects to describe by text. Results also showed a better overall ease-of-use rating for Vroom! than TextSearch on the limited sound library in our experiments. These findings suggest advantages of vocal-imitation-based search for sound in practice.
In our previous work @cite_13 , we first proposed a supervised system using a Stacked Auto-Encoder (SAE) for automatic feature learning, followed by an SVM for imitation classification. We then proposed an unsupervised system called IMISOUND @cite_27 to overcome the closed-set limitation in @cite_13 . The SAE was adopted for feature extraction from both imitation queries and sound candidates, and various similarity measures were calculated @cite_7 @cite_11 @cite_6 . Due to the separation of feature representation and metric learning, we further proposed end-to-end Siamese-style convolutional neural networks @cite_17 to integrate these two modules, of which the transfer-learning-based TL-IMINET @cite_15 is our most recent model. Meanwhile, the benefits of applying positive and negative imitations to update the cosine similarity between the query and sound-candidate embeddings were investigated in @cite_26 . To understand what such neural networks actually learn, we also visualized and sonified the input patterns in TL-IMINET @cite_2 using activation maximization @cite_24 .
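The retrieval-plus-feedback loop studied in @cite_26 can be pictured with the following sketch: candidates are ranked by cosine similarity between the query-imitation embedding and the sound-candidate embeddings, and the query is then nudged toward embeddings of positive imitations and away from negative ones. The Rocchio-style update and its weights are stand-ins; the actual update rule in @cite_26 may differ.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def rank_candidates(query_emb, cand_embs):
    """Rank sound candidates by cosine similarity to the imitation query."""
    sims = np.array([cosine(query_emb, c) for c in cand_embs])
    return np.argsort(-sims), sims

def refine_query(query_emb, positives, negatives, alpha=0.5, beta=0.3):
    """Move the query toward positive imitations and away from negatives."""
    q = query_emb.copy()
    if len(positives):
        q += alpha * np.mean(positives, axis=0)
    if len(negatives):
        q -= beta * np.mean(negatives, axis=0)
    return q / (np.linalg.norm(q) + 1e-9)
```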
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_17", "@cite_6", "@cite_24", "@cite_27", "@cite_2", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "2940168379", "1965555277", "2775430159", "", "", "2406791552", "2889726676", "2890913619", "1988566301", "" ], "abstract": [ "Content-based audio retrieval including query-by-example (QBE) and query-by-vocal imitation (QBV) is useful when search-relevant text labels for the audio are unavailable, or text labels do not sufficiently narrow the search. However, a single query example may not provide sufficient information to ensure the target sound(s) in the database are the most highly ranked. In this paper, we adapt an existing model for generating audio embeddings to create a state-of-the-art similarity measure for audio QBE and QBV. We then propose a new method to update search results when top-ranked items are not relevant: The user provides an additional vocal imitation to illustrate what they do or do not want in the search results. This imitation may either be of some portion of the initial query example, or of a top-ranked (but incorrect) search result. Results show that adding vocal imitation feedback improves initial retrieval results by a statistically significant amount.", "", "Searching sounds by text labels is often difficult, as text labels cannot always provide sufficient information for the sound content. Previously we proposed an unsupervised system called IMISOUND for sound search by vocal imitation. In this paper, we further propose a Convolutional Semi-Siamese Network (CSN) called IMINET. IMINET uses two towers of Convolutional Neural Networks (CNN) to extract features from vocal imitations and sound recordings, respectively. It then adopts a fully connected network to predict the similarity between vocal imitations and sound recordings. We propose three different configurations of the CSN by choosing different weight sharing strategies between the two towers. We also propose late fusion of the retrieval results of IMINET's different configurations and those of IMISOUND as a baseline. Experiments show significant improvements of the retrieval performance from the IMISOUND baseline to the fusion of IMINET's different configurations, and to different fusions between IMINET and the IMISOUND baseline.", "", "", "Vocal imitation is widely used in human interactions. In this paper, we propose a novel human-computer interaction system called IMISOUND that listens to a vocal imitation and retrieves similar sounds from a sound library. This system allows users to search sounds even if they do not remember their semantic labels or the sounds do not have these labels (e.g., synthesized sound effects). IMISOUND employs a Stacked Auto-Encoder (SAE) to extract features from both the vocal imitation (query) and sounds in the library (candidates). The SAE is pre-trained using training vocal imitations of sounds not in the library to automatically learn more suitable feature representations than human-engineered features such as MFCC's. It then measures the similarity between the query and each sound candidate, using the K-L divergence and Dynamic Time Warping distance between their feature representations, and finally retrieves the closest sounds. IMISOUND is an unsupervised system in the sense that no training is performed for the target sound, nonetheless, experiments show that it achieves comparable performance to a previously proposed supervised system which requires pre-training on sounds to be retrieved. 
Experiments also show that IMISOUND significantly outperforms an unsupervised MFCC-based baseline system, validating the advantage of the SAE feature representation.", "Designing systems that allow users to search sounds through vocal imitation augments the current text-based search engines and advances human-computer interaction. Previously we proposed a Siamese style convolutional network called IMINET for sound search by vocal imitation, which jointly addresses feature extraction by Convolutional Neural Network (CNN) and similarity calculation by Fully Connected Network (FCN), and is currently the state of the art. However, how such architecture works is still a mystery. In this paper, we try to answer this question. First, we visualize the input patterns that maximize the activation of different neurons in each CNN tower; this helps us understand what features are extracted from vocal imitations and sound candidates. Second, we visualize the imitation-sound input pairs that maximize the activation of different neurons in the FCN layers; this helps us understand what kind of input pattern pairs are recognized during the similarity calculation. Interesting patterns are found to reveal the local-to-global and simple-to-conceptual learning mechanism of TL-IMINET. Experiments also show how transfer learning helps to improve TL-IMINET performance from the visualization aspect.", "Conventional methods for finding audio in databases typically search text labels, rather than the audio itself. This can be problematic as labels may be missing, irrelevant to the audio content, or not known by users. Query by vocal imitation lets users query using vocal imitations instead. To do so, appropriate audio feature representations and effective similarity measures of imitations and original sounds must be developed. In this paper, we build upon our preliminary work to propose Siamese style convolutional neural networks to learn feature representations and similarity measures in a unified end-to-end training framework. Our Siamese architecture uses two convolutional neural networks to extract features, one from vocal imitations and the other from original sounds. The encoded features are then concatenated and fed into a fully connected network to estimate their similarity. We propose two versions of the system: IMINET is symmetric where the two encoders have an identical structure and are trained from scratch, while TL-IMINET is asymmetric and adopts the transfer learning idea by pretraining the two encoders from other relevant tasks: spoken language recognition for the imitation encoder and environmental sound classification for the original sound encoder. Experimental results show that both versions of the proposed system outperform a state-of-the-art system for sound search by vocal imitation, and the performance can be further improved when they are fused with the state of the art system. Results also show that transfer learning significantly improves the retrieval performance. This paper also provides insights to the proposed networks by visualizing and sonifying input patterns that maximize the activation of certain neurons in different layers.", "Vocal imitation is widely used in human communication. In this paper, we propose an approach to automatically recognize the concept of a vocal imitation, and then retrieve sounds of this concept. 
Because different acoustic aspects (e.g., pitch, loudness, timbre) are emphasized in imitating different sounds, a key challenge in vocal imitation recognition is to extract appropriate features. Hand-crafted features may not work well for a large variety of imitations. Instead, we use a stacked auto-encoder to automatically learn features from a set of vocal imitations in an unsupervised way. Then, a multi-class SVM is trained for sound concepts of interest using their training imitations. Given a new vocal imitation of a sound concept of interest, our system can recognize its underlying concept and return it with a high rank among all concepts. Experiments show that our system significantly outperforms an MFCC-based comparison system in both classification and retrieval.", "" ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
In the scientific domain, several approaches automate or simplify light-source placement and orientation---either with procedural methods as suggested by Schwarz and Wonka @cite_11 , or by "painting" the parts of a scene for illumination @cite_3 @cite_15 @cite_8 @cite_23 @cite_28 @cite_41 . While these methods deliver solutions to certain aspects, they ignore the iterative, interactive workflow of lighting designers, in which a large variety of considerations (that may not all be quantifiable) play an important role. Other approaches focus on interactivity and try to shorten the feedback cycles between modeling and simulation. Both @cite_37 and Krösl et al. @cite_21 rely on fast, GPU-based simulations. Despite being efficient, they offer neither guided modeling proposals nor methods to explore and compare parallel modeling tracks.
{ "cite_N": [ "@cite_37", "@cite_8", "@cite_28", "@cite_41", "@cite_21", "@cite_3", "@cite_23", "@cite_15", "@cite_11" ], "mid": [ "2028497605", "1983440625", "1993525930", "2619323557", "2757993879", "2093238777", "2055893387", "2100097348", "2060174494" ], "abstract": [ "We propose a new method for the fast computation of light maps using a many-light global-illumination solution. A complete scene can be light mapped on the order of seconds to minutes, allowing fast and consistent previews for editing or even generation at loading time. In our method, virtual point lights are clustered into a set of virtual polygon lights, which represent a compact description of the illumination in the scene. The actual light-map generation is performed directly on the GPU. Our approach degrades gracefully, avoiding objectionable artifacts even for very short computation times.", "An interactive and intuitive way of designing lighting around a model is desirable in many applications. In this paper, we present a tool for interactive inverse lighting in which a model is rendered based on sketched lighting effects. To specify target lighting, the user freely sketches bright and dark regions on the model as if coloring it with crayons. Using these hints and the geometry of the model, the system efficiently derives light positions, directions, intensities and spot angles, assuming a local point-light based illumination model. As the system also minimizes changes from the previous specifications, lighting can be designed incrementally. We formulate the inverse lighting problem as that of an optimization and solve it using a judicious mix of greedy and minimization methods. We also map expensive calculations of the optimization to graphics hardware to make the process fast and interactive. Our tool can be used to augment larger systems that use point-light based illumination models but lack intuitive interfaces for lighting design, and also in conjunction with applications like ray tracing where interactive lighting design is difficult to achieve.", "Lighting design plays a crucial role in indoor lighting design, computer cinematograph and many other applications. Computer-assisted lighting design aims to find a lighting configuration that best approximates the illumination effect specified by designers. In this paper, we present an automatic approach for lighting design, in which discrete and continuous optimization of the lighting configuration, including the number, intensity, and position of lights, are achieved. Our lighting design algorithm consists of two major steps. The first step estimates an initial lighting configuration by light sampling and clustering. The initial light clusters are then recursively merged to form a light hierarchy. The second step optimizes the lighting configuration by alternatively selecting a light cut on the light hierarchy to determine the number of representative lights and optimizing the lighting parameters using the simplex method. To speed up the optimization computation, only illumination at scene vertices that are important to rendering are sampled and taken into account in the optimization. Using the proposed approach, we develop a lighting design system that can compute appropriate lighting configurations to generate the illumination effects iteratively painted and modified by a designer interactively.", "Light painting is an artform where a light source is moved during a long-exposure shot, creating trails resembling a stroke on a canvas. 
It is very difficult to perform because the light source needs to be moved at the intended speed and along a precise trajectory. Additionally, images can be corrupted by the person moving the light. We propose computational light painting, which avoids such artifacts and is easy to use. Taking a video of the moving light as input, a virtual exposure allows us to draw the intended light positions in a post-process. We support animation, as well as 3D light sculpting, with high-quality results.", "", "We present a new approach to lighting design for image synthesis. It is based on the inverse problem of determining light settings for an environment from a description of the desired solution. The method is useful for determining light intensities to achieve a desired effect in a computer simulation and can be used in conjunction with any rendering algorithm. Given a set of lights with fixed positions, we determine the light intensities and colors that most closely match the target image painted by the designer using a constrained least squares approach. We describe an interactive system that allows flexible input and display of the solution.", "Lighting is a fundamental aspect of computer cinematography that involves the placement and configuration of lights to establish mood and enhance storytelling. This process is labor intensive as artists repeatedly adjust the parameters of a large set of complex lights to achieve a desired effect. Typical lighting controls affect the final image indirectly, requiring a large number of trials to obtain a suitable result. We present an interactive system wherein an artist paints desired lighting effects directly into the scene, and the computer solves for parameters that achieve the desired look. The artist can paint color, light shape, shadows, highlights, and reflections using a suite of tools designed for painting light. Our system matches these effects using a nonlinear optimizer made robust by a combination of initial estimates, system design, and user-guided optimization. In contrast, previous work on painting light has not permitted the lights to move, allowing for linear optimization but preventing its use in computer cinematography. To demonstrate our approach we lit several scenes, mainly using a direct illumination renderer designed for computer animation, but also including two other rendering styles. We show that painting interfaces can quickly produce high quality lighting setups, easing the lighting artist's workflow.", "We present a novel scheme for automatically generating line drawings from 2D images, aiming to facilitate effective visual communication. In contrast to conventional edge detectors, our technique imitates the human line drawing process and consists of two parts: line extraction and line rendering. We propose a novel line extraction method based on likelihood-function estimation, which effectively finds the genuine shape boundaries. We consider the feature scale and the blurriness of lines with which the detail and the focus-level of lines are controlled in the rendering. We also employ stroke textures to provide a variety of illustration styles. Experimental results demonstrate that our technique generates various kinds of line drawings from 2D images enabled by the control over detail, focus, and style.", "We present a system for the lighting design of procedurally modeled buildings. 
The design is procedurally specified as part of the ordinary modeling workflow by defining goals for the illumination that should be attained and locations where luminaires may be installed to realize these goals. Additionally, constraints can be modeled that make the arrangement of the installed luminaires respect certain aesthetic and structural considerations. From this specification, the system automatically generates a lighting solution for any concrete model instance. The underlying, intricate joint optimization and constraint satisfaction problem is approached with a stochastic scheme that operates directly in the complex subspace where all constraints are observed. To navigate this subspace efficaciously, the actual lighting situation is taken into account. We demonstrate our system on multiple examples spanning a variety of architectural structures and lighting designs." ] }
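One of the reference abstracts above (@cite_3) formulates inverse lighting as a constrained least-squares problem: given each light's per-sample illumination contribution, solve for non-negative intensities that best match a painted target. The sketch below illustrates that idea with `scipy.optimize.nnls`; the matrix and target values are made-up stand-ins for a real light-transport simulation.

```python
# Toy inverse-lighting solve: min ||A x - target||_2 subject to x >= 0.
# A and the target are illustrative; in practice A comes from simulation.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_samples, n_lights = 200, 5

# A[i, j]: illumination that light j (at unit intensity) contributes to
# surface sample i.
A = rng.uniform(0.0, 1.0, size=(n_samples, n_lights))

# Target illumination "painted" by the designer at each surface sample.
true_intensities = np.array([2.0, 0.0, 1.5, 0.3, 0.0])
target = A @ true_intensities + rng.normal(0.0, 0.01, n_samples)

# Non-negativity reflects that lights cannot emit negative energy.
intensities, residual = nnls(A, target)
print(np.round(intensities, 2))  # recovers values close to true_intensities
```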
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
@cite_12 tackle the problem of comparing different light configurations by linking the simulation results and a spatial view with non-spatial ranking and comparison visualizations. Their idea of setting the importance of certain criteria to compute the overall score (i.e., giving more weight to certain illumination requirements, to certain scene objects, or to global factors like maintenance costs) during the decision process has influenced our work. Nevertheless, their approach does not take the modeling process itself into account and presumes the availability of a high number of valid, pre-simulated lighting configurations. This assumption rarely holds in real-world scenarios (due to the trial-and-error-based methodology converging to a single valid solution), raising the need for novel methods that produce multiple solutions in parallel. Other solutions, such as the one proposed by @cite_34 , record light rays and offer visual-analytics tools to explore, evaluate, and compare light interactions, potentially involving several scenes. However, they do not offer suggestions for scene manipulations to fulfill given constraints.
{ "cite_N": [ "@cite_34", "@cite_12" ], "mid": [ "2902079786", "1934088123" ], "abstract": [ "Physically based rendering is a well-understood technique to produce realistic-looking images. However, different algorithms exist for efficiency reasons, which work well in certain cases but fail or produce rendering artefacts in others. Few tools allow a user to gain insight into the algorithmic processes. In this work, we present such a tool, which combines techniques from information visualization and visual analytics with physically based rendering. It consists of an interactive parallel coordinates plot, with a built-in sampling-based data reduction technique to visualize the attributes associated with each light sample. Twodimensional (2D) and three-dimensional (3D) heat maps depict any desired property of the rendering process. An interactively rendered 3D view of the scene displays animated light paths based on the user’s selection to gain further insight into the rendering process. The provided interactivity enables the user to guide the rendering process for more efficiency. To show its usefulness, we present several applications based on our tool. This includes differential light transport visualization to optimize light setup in a scene, finding the causes of and resolving rendering artefacts, such as fireflies, as well as a path length contribution histogram to evaluate the efficiency of different Monte Carlo estimators.", "State-of-the-art lighting design is based on physically accurate lighting simulations of scenes such as offices. The simulation results support lighting designers in the creation of lighting configurations, which must meet contradicting customer objectives regarding quality and price while conforming to industry standards. However, current tools for lighting design impede rapid feedback cycles. On the one side, they decouple analysis and simulation specification. On the other side, they lack capabilities for a detailed comparison of multiple configurations. The primary contribution of this paper is a design study of LiteVis, a system for efficient decision support in lighting design. LiteVis tightly integrates global illumination-based lighting simulation, a spatial representation of the scene, and non-spatial visualizations of parameters and result indicators. This enables an efficient iterative cycle of simulation parametrization and analysis. Specifically, a novel visualization supports decision making by ranking simulated lighting configurations with regard to a weight-based prioritization of objectives that considers both spatial and non-spatial characteristics. In the spatial domain, novel concepts support a detailed comparison of illumination scenarios. We demonstrate LiteVis using a real-world use case and report qualitative feedback of lighting designers. This feedback indicates that LiteVis successfully supports lighting designers to achieve key tasks more efficiently and with greater certainty." ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
In accordance with @cite_32 , we classify LightGuider as follows: Utilizing an interactive lighting simulation, lighting designers start out with a single sample and generate new samples on-the-fly, supported by guidance mechanisms (see the related work on guidance) that suggest alternatives in the parameter space. Immediate feedback of the simulation results supports their navigation. As lighting designers need to evaluate qualitative as well as quantitative aspects of the simulation output, the domain goals of LightGuider present a mixture of qualitative and quantitative domain goals, which are reached through the combination of both. As a secondary analysis objective, we identify the elaboration of alternative designs to illustrate different trade-offs.
{ "cite_N": [ "@cite_32" ], "mid": [ "1964473154" ], "abstract": [ "Various case studies in different application domains have shown the great potential of visual parameter space analysis to support validating and using simulation models. In order to guide and systematize research endeavors in this area, we provide a conceptual framework for visual parameter space analysis problems. The framework is based on our own experience and a structured analysis of the visualization literature. It contains three major components: (1) a data flow model that helps to abstractly describe visual parameter space analysis problems independent of their application domain; (2) a set of four navigation strategies of how parameter space analysis can be supported by visualization tools; and (3) a characterization of six analysis tasks. Based on our framework, we analyze and classify the current body of literature, and identify three open research gaps in visual parameter space analysis. The framework and its discussion are meant to support visualization designers and researchers in characterizing parameter space analysis problems and to guide their design and evaluation processes." ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
When it comes to provenance information in visualization, @cite_27 give a comprehensive overview of the different types of provenance information (e.g., the history of data editing, the history of graphical views and visualization types, or the history of interactions) and of the different purposes of using them in the context of visualization (e.g., recalling different states of the analysis, recovering actions, or supporting collaboration). However, there are varying approaches to visualize this information. The most common choice is presenting the provenance tree as a node-link diagram that shows the sequence of states and alternative branches of a workflow, as described by @cite_29 .
{ "cite_N": [ "@cite_27", "@cite_29" ], "mid": [ "1959365993", "195157153" ], "abstract": [ "While the primary goal of visual analytics research is to improve the quality of insights and findings, a substantial amount of research in provenance has focused on the history of changes and advances throughout the analysis process. The term, provenance, has been used in a variety of ways to describe different types of records and histories related to visualization. The existing body of provenance research has grown to a point where the consolidation of design knowledge requires cross-referencing a variety of projects and studies spanning multiple domain areas. We present an organizational framework of the different types of provenance information and purposes for why they are desired in the field of visual analytics. Our organization is intended to serve as a framework to help researchers specify types of provenance and coordinate design knowledge across projects. We also discuss the relationships between these factors and the methods used to capture provenance information. In addition, our organization can be used to guide the selection of evaluation methodology and the comparison of study outcomes in provenance research.", "Data management is growing in complexity as large-scale applications take advantage of the loosely coupled resources brought together by grid middleware and by abundant storage capacity. Metadata describing the data products used in and generated by these applications is essential to disambiguate the data and enable reuse. Data provenance, one kind of metadata, pertains to the derivation history of a data product starting from its original sources. The provenance of data products generated by complex transformations such as workflows is of considerable value to scientists. From it, one can ascertain the quality of the data based on its ancestral data and derivations, track back sources of errors, allow automated re-enactment of derivations to update a data, and provide attribution of data sources. Provenance is also essential to the business domain where it can be used to drill down to the source of data in a data warehouse, track the creation of intellectual property, and provide an audit trail for regulatory purposes. In this paper we create a taxonomy of data provenance techniques, and apply the classification to current research efforts in the field. The main aspect of our taxonomy categorizes provenance systems based on why they record provenance, what they describe, how they represent and store provenance, and ways to disseminate it. Our synthesis can help those building scientific and business metadata-management systems to understand existing provenance system designs. The survey culminates with an identification of open research problems in the field." ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
@cite_46 focus on the scalability of node-link diagrams for encoding a history of analysis workflows. They use filtering, node aggregation, as well as a user-interest-driven expansion of nodes (i.e., a degree-of-interest function) to make the tree more comprehensible. In a different work, @cite_7 use a provenance tree for visualizing automatically recorded user interactions and visualizations. Again, they focus on the efficient retrieval of analysis states by offering different possibilities for querying the data (e.g., query by user-generated examples). These works offer sophisticated solutions to scalability problems of provenance trees in the form of node-link diagrams, as well as solutions for efficient interaction with large trees. However, they do not focus on integrating visual representations of additional information for each tree node. Our application scenario requires a quick visual comparison of multiple numerical variables (i.e., illumination constraints) for each state to enable the assessment of changes of quality for each lighting design action as well as trends of the lighting design process and of alternative workflows.
{ "cite_N": [ "@cite_46", "@cite_7" ], "mid": [ "2950879328", "2888018391" ], "abstract": [ "A major challenge in data-driven biomedical research lies in the collection and representation of data provenance information to ensure that findings are reproducibile. In order to communicate and reproduce multi-step analysis workflows executed on datasets that contain data for dozens or hundreds of samples, it is crucial to be able to visualize the provenance graph at different levels of aggregation. Most existing approaches are based on node-link diagrams, which do not scale to the complexity of typical data provenance graphs. In our proposed approach, we reduce the complexity of the graph using hierarchical and motif-based aggregation. Based on user action and graph attributes, a modular degree-of-interest (DoI) function is applied to expand parts of the graph that are relevant to the user. This interest-driven adaptive approach to provenance visualization allows users to review and communicate complex multi-step analyses, which can be based on hundreds of files that are processed by numerous workflows. We have integrated our approach into an analysis platform that captures extensive data provenance information, and demonstrate its effectiveness by means of a biomedical usage scenario.", "Storing analytical provenance generates a knowledge base with a large potential for recalling previous results and guiding users in future analyses. However, without extensive manual creation of meta information and annotations by the users, search and retrieval of analysis states can become tedious. We present KnowledgePearls, a solution for efficient retrieval of analysis states that are structured as provenance graphs containing automatically recorded user interactions and visualizations. As a core component, we describe a visual interface for querying and exploring analysis states based on their similarity to a partial definition of a requested analysis state. Depending on the use case, this definition may be provided explicitly by the user by formulating a search query or inferred from given reference states. We explain our approach using the example of efficient retrieval of demographic analyses by Hans Rosling and discuss our implementation for a fast look-up of previous states. Our approach is independent of the underlying visualization framework. We discuss the applicability for visualizations which are based on the declarative grammar Vega and we use a Vega-based implementation of Gapminder as guiding example. We additionally present a biomedical case study to illustrate how KnowledgePearls facilitates the exploration process by recalling states from earlier analyses." ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
Besides node-link diagrams, there are examples of other visualization types used to show provenance information. Viégas et al. @cite_16 , for instance, visualize the history of the editing that was applied to a Wikipedia (wikipedia.org) page in a flow-like visualization. This visualization is specifically designed to represent one page with text running from top to bottom, but the only interactions it supports (indirectly) are "adding text" and "removing text". Thus, it does not lend itself to our problem scenario. Another approach by @cite_40 shows the editing history of illustrations. They provide a superimposed visualization of two illustration states with "before" states rendered semi-transparent. Moreover, the illustration is augmented with arrows, icons, and color. Arrows and icons indicate spatial transformations of (parts of) the illustration, while color indicates user changes. This is a specialized design for the problem at hand and cannot be transferred to our application scenario.
{ "cite_N": [ "@cite_40", "@cite_16" ], "mid": [ "2102675982", "2111122424" ], "abstract": [ "Presentation and graphics software enables users to experiment with variations of illustrations. They can revisit recent editing operations using the ubiquitous undo command, but they are limited to sequential exploration. We propose a new interaction metaphor and visualization for operation history. While editing, a user can access a history mode in which actions are denoted by graphical depictions appearing on top of the document. Our work is inspired by the visual language of film storyboards and assembly instructions. Our storyboard provides an interactive visual history, summarizing the editing of a document or a selected object. Each view is composed of action depictions representing the user’s editing actions and enables the user to consider the operation history in context rather than in a disconnected list view. This metaphor provides instant access to any past action and we demonstrate that this is an intuitive interface to a selective undo mechanism.", "The Internet has fostered an unconventional and powerful style of collaboration: \"wiki\" web sites, where every visitor has the power to become an editor. In this paper we investigate the dynamics of Wikipedia, a prominent, thriving wiki. We make three contributions. First, we introduce a new exploratory data analysis tool, the history flow visualization, which is effective in revealing patterns within the wiki context and which we believe will be useful in other collaborative situations as well. Second, we discuss several collaboration patterns highlighted by this visualization tool and corroborate them with statistical analysis. Third, we discuss the implications of these patterns for the design and governance of online collaborative social spaces. We focus on the relevance of authorship, the value of community surveillance in ameliorating antisocial behavior, and how authors with competing perspectives negotiate their differences." ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
Guidance in visualization as defined by Ceneda et al. @cite_18 can be found in various forms and application scenarios. However, only a few approaches relate to our problem at hand. @cite_33 present a guidance approach to automatically generate a set of information-visualization designs appropriate for the given data and tasks. A selection of the most useful visualization mappings is input to the guidance mechanism and influences future suggestions. O'Donovan et al. @cite_0 present a similar approach that helps in creating graphic design layouts. The system interactively suggests changes in the position, scale, and alignment of elements that are placed on a page. Both systems present guidance approaches to optimize a design.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_33" ], "mid": [ "2131210874", "2488113179", "2151122362" ], "abstract": [ "Creating graphic designs can be challenging for novice users. This paper presents DesignScape, a system which aids the design process by making interactive layout suggestions, i.e., changes in the position, scale, and alignment of elements. The system uses two distinct but complementary types of suggestions: refinement suggestions, which improve the current layout, and brainstorming suggestions, which change the style. We investigate two interfaces for interacting with suggestions. First, we develop a suggestive interface, where suggestions are previewed and can be accepted. Second, we develop an adaptive interface where elements move automatically to improve the layout. We compare both interfaces with a baseline without suggestions, and show that for novice designers, both interfaces produce significantly better layouts, as evaluated by other novices.", "Visual analytics (VA) is typically applied in scenarios where complex data has to be analyzed. Unfortunately, there is a natural correlation between the complexity of the data and the complexity of the tools to study them. An adverse effect of complicated tools is that analytical goals are more difficult to reach. Therefore, it makes sense to consider methods that guide or assist users in the visual analysis process. Several such methods already exist in the literature, yet we are lacking a general model that facilitates in-depth reasoning about guidance. We establish such a model by extending van Wijk's model of visualization with the fundamental components of guidance. Guidance is defined as a process that gradually narrows the gap that hinders effective continuation of the data analysis. We describe diverse inputs based on which guidance can be generated and discuss different degrees of guidance and means to incorporate guidance into VA tools. We use existing guidance approaches from the literature to illustrate the various aspects of our model. As a conclusion, we identify research challenges and suggest directions for future studies. With our work we take a necessary step to pave the way to a systematic development of guidance techniques that effectively support users in the context of VA.", "We study in this work how a user can be guided to find a relevant visualization in the context of visual data mining. We present a state of the art on the user assistance in visual and interactive methods. We propose a user assistant called VizAssist, which aims at improving the existing approaches along three directions: it uses simpler computational models of the visualizations and the visual perception guidelines, in order to facilitate the integration of new visualizations and the definition of a mapping heuristic. VizAssist allows the user to provide feedback in a visual and interactive way, with the aim of improving the data to visualization mapping. This step is performed with an interactive genetic algorithm. Finally, VizAssist aims at proposing a free on-line tool (www.vizassist.fr) that respects the privacy of the user data. This assistant can be viewed as a global interface between the user and some of the many visualizations that are implemented with D3js." ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
@cite_45 present a guidance approach that helps to discover interesting data and patterns based on the system user's interests. They provide a system to extract, combine, refine, and visualize such findings of interest. They distinguish between user-driven and data-driven findings. In our work we combine user-driven (i.e., the user chooses which areas and which illumination constraints are more important than others) with data-driven (i.e., optimizing the current design with respect to specified illumination constraints) steering of the guidance suggestions.
{ "cite_N": [ "@cite_45" ], "mid": [ "2050035370" ], "abstract": [ "Visualization systems traditionally focus on graphical representation of information. They tend not to provide integrated analytical services that could aid users in tackling complex knowledge discovery tasks. Users' exploration in such environments is usually impeded due to several problems: 1) valuable information is hard to discover when too much data is visualized on the screen; 2) Users have to manage and organize their discoveries off line, because no systematic discovery management mechanism exists; 3) their discoveries based on visual exploration alone may lack accuracy; and 4)they have no convenient access to the important knowledge learned by other users. To tackle these problems, it has been recognized that analytical tools must be introduced into visualization systems. In this paper, we present a novel analysis-guided exploration system, called the Nugget Management System (NMS). It leverages the collaborative effort of human comprehensibility and machine computations to facilitate users' visual exploration processes. Specifically, NMS first helps users extract the valuable information (nuggets) hidden in datasets based on their interests. Given that similar nuggets may be rediscovered by different users, NMS consolidates the nugget candidate set by clustering based on their semantic similarity. To solve the problem of inaccurate discoveries, localized data mining techniques are applied to refine the nuggets to best represent the captured patterns in datasets. Visualization techniques are then employed to present our collected nugget pool and thus create the nugget view. Based on the nugget view, interaction techniques are designed to help users observe and organize the nuggets in a more intuitive manner and eventually faciliate their sense-making process. We integrated NMS into XmdvTool, a freeware multivariate visualization system. User studies were performed to compare the users' efficiency and accuracy in finishing tasks on real datasets, with and without the help of NMS. Our user studies confirmed the effectiveness of NMS." ] }
1907.08427
2964016487
Video person re-identification (re-ID) plays an important role in surveillance video analysis. However, the performance of video re-ID degenerates severely under partial occlusion. In this paper, we propose a novel network, called Spatio-Temporal Completion network (STCnet), to explicitly handle partial occlusion problem. Different from most previous works that discard the occluded frames, STCnet can recover the appearance of the occluded parts. For one thing, the spatial structure of a pedestrian frame can be used to predict the occluded body parts from the unoccluded body parts of this frame. For another, the temporal patterns of pedestrian sequence provide important clues to generate the contents of occluded parts. With the Spatio-temporal information, STCnet can recover the appearance for the occluded parts, which could be leveraged with those unoccluded parts for more accurate video re-ID. By combining a re-ID network with STCnet, a video re-ID framework robust to partial occlusion (VRSTC) is proposed. Experiments on three challenging video re-ID databases demonstrate that the proposed approach outperforms the state-of-the-art.
Person re-ID for still images has been extensively studied @cite_34 @cite_13 @cite_25 @cite_19 @cite_37 @cite_40 @cite_10 . Recently, researchers have started to pay attention to video re-ID @cite_29 @cite_21 @cite_33 @cite_27 @cite_41 @cite_20 @cite_23 @cite_9 @cite_12 . McLaughlin et al. @cite_21 and Wu et al. @cite_33 proposed a basic pipeline for deep video re-ID. First, frame features are extracted by a convolutional neural network. Then a recurrent layer is applied to incorporate temporal context information into each frame. Finally, temporal average pooling is adopted to obtain the video representation. Wu et al. @cite_27 further proposed a temporal convolutional subnet to extract local motion information. These methods verify that the temporal information of video can help to identify the person. However, because these methods treat each frame of the video equally, frames with partial occlusion will distort the video representation.
{ "cite_N": [ "@cite_37", "@cite_33", "@cite_41", "@cite_10", "@cite_29", "@cite_21", "@cite_9", "@cite_19", "@cite_40", "@cite_27", "@cite_23", "@cite_12", "@cite_34", "@cite_13", "@cite_25", "@cite_20" ], "mid": [ "2964163358", "2473702307", "2622829582", "2963842104", "2219504084", "", "2963216120", "2300840837", "2736410039", "", "2963960612", "", "1991452654", "2014764728", "2151873133", "2963736028" ], "abstract": [ "Person Re-identification (ReID) is to identify the same person across different cameras. It is a challenging task due to the large variations in person pose, occlusion, background clutter, etc. How to extract powerful features is a fundamental problem in ReID and is still an open problem today. In this paper, we design a Multi-Scale Context-Aware Network (MSCAN) to learn powerful features over full body and body parts, which can well capture the local context knowledge by stacking multi-scale convolutions in each layer. Moreover, instead of using predefined rigid parts, we propose to learn and localize deformable pedestrian parts using Spatial Transformer Networks (STN) with novel spatial constraints. The learned body parts can release some difficulties, e.g. pose variations and background clutters, in part-based representation. Finally, we integrate the representation learning processes of full body and body parts into a unified framework for person ReID through multi-class person identification tasks. Extensive evaluations on current challenging large-scale person ReID datasets, including the image-based Market1501, CUHK03 and sequence-based MARS datasets, show that the proposed method achieves the state-of-the-art results.", "In this paper, we present an end-to-end approach to simultaneously learn spatio-temporal features and corresponding similarity metric for video-based person re-identification. Given the video sequence of a person, features from each frame that are extracted from all levels of a deep convolutional network can preserve a higher spatial resolution from which we can model finer motion patterns. These low-level visual percepts are leveraged into a variant of recurrent model to characterize the temporal variation between time-steps. Features from all time-steps are then summarized using temporal pooling to produce an overall feature representation for the complete sequence. The deep convolutional network, recurrent layer, and the temporal pooling are jointly trained to extract comparable hidden-unit representations from input pair of time series to compute their corresponding similarity value. The proposed framework combines time series modeling and metric learning to jointly learn relevant features and a good similarity measure between time sequences of person. Experiments demonstrate that our approach achieves the state-of-the-art performance for video-based person re-identification on iLIDS-VID and PRID 2011, the two primary public datasets for this purpose.", "Surveillance cameras have been widely used in different scenes. Accordingly, a demanding need is to recognize a person under different cameras, which is called person re-identification. This topic has gained increasing interests in computer vision recently. However, less attention has been paid to video-based approaches, compared with image-based ones. Two steps are usually involved in previous approaches, namely feature learning and metric learning. But most of the existing approaches only focus on either feature learning or metric learning. 
Meanwhile, many of them do not take full use of the temporal and spatial information. In this paper, we concentrate on video-based person re-identification and build an end-to-end deep neural network architecture to jointly learn features and metrics. The proposed method can automatically pick out the most discriminative frames in a given video by a temporal attention model. Moreover, it integrates the surrounding information at each location by a spatial recurrent model when measuring the similarity with another pedestrian video. That is, our method handles spatial and temporal information simultaneously in a unified manner. The carefully designed experiments on three public datasets show the effectiveness of each component of the proposed deep network, performing better in comparison with the state-of-the-art methods.", "Employing part-level features offers fine-grained information for pedestrian image description. A prerequisite of part discovery is that each part should be well located. Instead of using external resources like pose estimator, we consider content consistency within each part for precise part location. Specifically, we target at learning discriminative part-informed features for person retrieval and make two contributions. (i) A network named Part-based Convolutional Baseline (PCB). Given an image input, it outputs a convolutional descriptor consisting of several part-level features. With a uniform partition strategy, PCB achieves competitive results with the state-of-the-art methods, proving itself as a strong convolutional baseline for person retrieval. (ii) A refined part pooling (RPP) method. Uniform partition inevitably incurs outliers in each part, which are in fact more similar to other parts. RPP re-assigns these outliers to the parts they are closest to, resulting in refined parts with enhanced within-part consistency. Experiment confirms that RPP allows PCB to gain another round of performance boost. For instance, on the Market-1501 dataset, we achieve (77.4+4.2) mAP and (92.3+1.5) rank-1 accuracy, surpassing the state of the art by a large margin. Code is available at: https: github.com syfafterzy PCB_RPP", "Pedestrian re-identification is a difficult problem due to the large variations in a person's appearance caused by different poses and viewpoints, illumination changes, and occlusions. Spatial alignment is commonly used to address these issues by treating the appearance of different body parts independently. However, a body part can also appear differently during different phases of an action. In this paper we consider the temporal alignment problem, in addition to the spatial one, and propose a new approach that takes the video of a walking person as input and builds a spatio-temporal appearance representation for pedestrian re-identification. Particularly, given a video sequence we exploit the periodicity exhibited by a walking person to generate a spatio-temporal body-action model, which consists of a series of body-action units corresponding to certain action primitives of certain body parts. Fisher vectors are learned and extracted from individual body-action units and concatenated into the final representation of the walking person. Unlike previous spatio-temporal features that only take into account local dynamic appearance information, our representation aligns the spatio-temporal appearance of a pedestrian globally. 
Extensive experiments on public datasets show the effectiveness of our approach compared with the state of the art.", "", "This paper targets on the problem of set to set recognition, which learns the metric between two image sets. Images in each set belong to the same identity. Since images in a set can be complementary, they hopefully lead to higher accuracy in practical applications. However, the quality of each sample cannot be guaranteed, and samples with poor quality will hurt the metric. In this paper, the quality aware network (QAN) is proposed to confront this problem, where the quality of each sample can be automatically learned although such information is not explicitly provided in the training stage. The network has two branches, where the first branch extracts appearance feature embedding for each sample and the other branch predicts quality score for each sample. Features and quality scores of all samples in a set are then aggregated to generate the final feature embedding. We show that the two branches can be trained in an end-to-end manner given only the set-level identity annotation. Analysis on gradient spread of this mechanism indicates that the quality learned by the network is beneficial to set-to-set recognition and simplifies the distribution that the network needs to fit. Experiments on both face verification and person re-identification show advantages of the proposed QAN. The source code and network structure can be downloaded at GitHub.", "Most existing person re-identification (re-id) methods focus on learning the optimal distance metrics across camera views. Typically a person's appearance is represented using features of thousands of dimensions, whilst only hundreds of training samples are available due to the difficulties in collecting matched training images. With the number of training samples much smaller than the feature dimension, the existing methods thus face the classic small sample size (SSS) problem and have to resort to dimensionality reduction techniques and or matrix regularisation, which lead to loss of discriminative power. In this work, we propose to overcome the SSS problem in re-id distance metric learning by matching people in a discriminative null space of the training data. In this null space, images of the same person are collapsed into a single point thus minimising the within-class scatter to the extreme and maximising the relative between-class separation simultaneously. Importantly, it has a fixed dimension, a closed-form solution and is very efficient to compute. Extensive experiments carried out on five person re-identification benchmarks including VIPeR, PRID2011, CUHK01, CUHK03 and Market1501 show that such a simple approach beats the state-of-the-art alternatives, often by a big margin.", "Person re-identification (ReID) is an important task in video surveillance and has various applications. It is non-trivial due to complex background clutters, varying illumination conditions, and uncontrollable camera settings. Moreover, the person body misalignment caused by detectors or pose variations is sometimes too severe for feature matching across images. In this study, we propose a novel Convolutional Neural Network (CNN), called Spindle Net, based on human body region guided multi-stage feature decomposition and tree-structured competitive feature fusion. It is the first time human body structure information is considered in a CNN framework to facilitate feature learning. 
The proposed Spindle Net brings unique advantages: 1) it separately captures semantic features from different body regions thus the macro-and micro-body features can be well aligned across images, 2) the learned region features from different semantic regions are merged with a competitive scheme and discriminative features can be well preserved. State of the art performance can be achieved on multiple datasets by large margins. We further demonstrate the robustness and effectiveness of the proposed Spindle Net on our proposed dataset SenseReID without fine-tuning.", "", "Person Re-Identification (person re-id) is a crucial task as its applications in visual surveillance and human-computer interaction. In this work, we present a novel joint Spatial and Temporal Attention Pooling Network (ASTPN) for video-based person re-identification, which enables the feature extractor to be aware of the current input video sequences, in a way that interdependency from the matching items can directly influence the computation of each other's representation. Specifically, the spatial pooling layer is able to select regions from each frame, while the attention temporal pooling performed can select informative frames over the sequence, both pooling guided by the information from distance matching. Experiments are conduced on the iLIDS-VID, PRID-2011 and MARS datasets and the results demonstrate that this approach outperforms existing state-of-art methods. We also analyze how the joint pooling in both dimensions can boost the person re-id performance more effectively than using either of them separately 1.", "", "Matching people across non-overlapping camera views, known as person re-identification, is challenging due to the lack of spatial and temporal constraints and large visual appearance changes caused by variations in view angle, lighting, background clutter and occlusion. To address these challenges, most previous approaches aim to extract visual features that are both distinctive and stable under appearance changes. However, most visual features and their combinations under realistic conditions are neither stable nor distinctive thus should not be used indiscriminately. In this paper, we propose to formulate person re-identification as a distance learning problem, which aims to learn the optimal distance that can maximises matching accuracy regardless the choice of representation. To that end, we introduce a novel Probabilistic Relative Distance Comparison (PRDC) model, which differs from most existing distance learning methods in that, rather than minimising intra-class variation whilst maximising intra-class variation, it aims to maximise the probability of a pair of true match having a smaller distance than that of a wrong match pair. This makes our model more tolerant to appearance changes and less susceptible to model over-fitting. Extensive experiments are carried out to demonstrate that 1) by formulating the person re-identification problem as a distance learning problem, notable improvement on matching accuracy can be obtained against conventional person re-identification techniques, which is particularly significant when the training sample size is small; and 2) our PRDC outperforms not only existing distance learning methods but also alternative learning methods based on boosting and learning to rank.", "Person reidentification in a camera network is a valuable yet challenging problem to solve. 
Existing methods learn a common Mahalanobis distance metric by using the data collected from different cameras and then exploit the learned metric for identifying people in the images. However, the cameras in a camera network have different settings and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric to conduct person reidentification tasks on different camera pairs overlooks the differences in camera settings; however, it is very time-consuming to label people manually in images from surveillance videos. For example, in most existing person reidentification data sets, only one image of a person is collected from each of only two cameras; therefore, directly learning a unique Mahalanobis distance metric for each camera pair is susceptible to over-fitting by using insufficiently labeled data. In this paper, we reformulate person reidentification in a camera network as a multitask distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. We address the fact that these Mahalanobis distance metrics are different but related, and learned by adding joint regularization to alleviate over-fitting. Furthermore, by extending, we present a novel multitask maximally collapsing metric learning (MtMCML) model for person reidentification in a camera network. Experimental results demonstrate that formulating person reidentification over camera networks as multitask distance metric learning problem can improve performance, and our proposed MtMCML works substantially better than other current state-of-the-art person reidentification methods.", "This paper considers the person verification problem in modern surveillance and video retrieval systems. The problem is to identify whether a pair of face or human body images is about the same person, even if the person is not seen before. Traditional methods usually look for a distance (or similarity) measure between images (e.g., by metric learning algorithms), and make decisions based on a fixed threshold. We show that this is nevertheless insufficient and sub-optimal for the verification problem. This paper proposes to learn a decision function for verification that can be viewed as a joint model of a distance metric and a locally adaptive thresholding rule. We further formulate the inference on our decision function as a second-order large-margin regularization problem, and provide an efficient algorithm in its dual from. We evaluate our algorithm on both human body verification and face verification problems. Our method outperforms not only the classical metric learning algorithm including LMNN and ITML, but also the state-of-the-art in the computer vision community.", "Video-based person re-identification matches video clips of people across non-overlapping cameras. Most existing methods tackle this problem by encoding each video frame in its entirety and computing an aggregate representation across all frames. In practice, people are often partially occluded, which can corrupt the extracted features. Instead, we propose a new spatiotemporal attention model that automatically discovers a diverse set of distinctive body parts. This allows useful information to be extracted from all frames without succumbing to occlusions and misalignments. 
The network learns multiple spatial attention models and employs a diversity regularization term to ensure multiple models do not discover the same body part. Features extracted from local image regions are organized by spatial attention model and are combined using temporal attention. As a result, the network learns latent representations of the face, torso and other body parts using the best available image patches from the entire video sequence. Extensive evaluations on three datasets show that our framework outperforms the state-of-the-art approaches by large margins on multiple metrics." ] }
1907.08427
2964016487
Video person re-identification (re-ID) plays an important role in surveillance video analysis. However, the performance of video re-ID degenerates severely under partial occlusion. In this paper, we propose a novel network, called Spatio-Temporal Completion network (STCnet), to explicitly handle the partial occlusion problem. Different from most previous works that discard the occluded frames, STCnet can recover the appearance of the occluded parts. For one thing, the spatial structure of a pedestrian frame can be used to predict the occluded body parts from the unoccluded body parts of this frame. For another, the temporal patterns of a pedestrian sequence provide important clues to generate the contents of occluded parts. With the spatio-temporal information, STCnet can recover the appearance for the occluded parts, which could be leveraged with those unoccluded parts for more accurate video re-ID. By combining a re-ID network with STCnet, a video re-ID framework robust to partial occlusion (VRSTC) is proposed. Experiments on three challenging video re-ID databases demonstrate that the proposed approach outperforms the state-of-the-art.
To handle partial occlusion, attention-based approaches are gaining popularity. Zhou et al. @cite_41 proposed an RNN temporal attention mechanism to select the most discriminative frames from a video. Liu et al. @cite_9 used a convolutional subnet to predict a quality score for each frame of a video. Xu et al. @cite_23 presented a Spatial and Temporal Attention Pooling Network, where the spatial attention pooling layer selected discriminative regions from each frame and the temporal attention pooling selected informative frames in the sequence. Similarly, Li et al. @cite_20 used multiple spatial attention modules to localize distinctive body parts of a person, and pooled the extracted local features across time with temporal attention.
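To make the shared mechanism concrete, here is a minimal sketch (not the cited authors' code) of temporal attention pooling over per-frame features; the module name, the linear scoring head, and all dimensions are illustrative assumptions.

```python
# Minimal sketch: temporal attention pooling over per-frame features,
# in the spirit of the attention-based video re-ID approaches above.
import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    """Scores each frame, then computes an attention-weighted average."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, 1)  # hypothetical frame-scoring head

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, time, feat_dim) per-frame descriptors
        scores = self.scorer(frame_feats)          # (batch, time, 1)
        weights = torch.softmax(scores, dim=1)     # normalize over the time axis
        return (weights * frame_feats).sum(dim=1)  # (batch, feat_dim) clip feature

# Usage: pool 8 frames of 128-d features into one clip descriptor.
pool = TemporalAttentionPool(128)
clip_feat = pool(torch.randn(4, 8, 128))  # -> shape (4, 128)
```

Informative frames receive larger weights, so occluded frames contribute less to the clip descriptor, which is the intuition the cited methods exploit.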
{ "cite_N": [ "@cite_41", "@cite_9", "@cite_20", "@cite_23" ], "mid": [ "2622829582", "2963216120", "2963736028", "2963960612" ], "abstract": [ "Surveillance cameras have been widely used in different scenes. Accordingly, a demanding need is to recognize a person under different cameras, which is called person re-identification. This topic has gained increasing interests in computer vision recently. However, less attention has been paid to video-based approaches, compared with image-based ones. Two steps are usually involved in previous approaches, namely feature learning and metric learning. But most of the existing approaches only focus on either feature learning or metric learning. Meanwhile, many of them do not take full use of the temporal and spatial information. In this paper, we concentrate on video-based person re-identification and build an end-to-end deep neural network architecture to jointly learn features and metrics. The proposed method can automatically pick out the most discriminative frames in a given video by a temporal attention model. Moreover, it integrates the surrounding information at each location by a spatial recurrent model when measuring the similarity with another pedestrian video. That is, our method handles spatial and temporal information simultaneously in a unified manner. The carefully designed experiments on three public datasets show the effectiveness of each component of the proposed deep network, performing better in comparison with the state-of-the-art methods.", "This paper targets on the problem of set to set recognition, which learns the metric between two image sets. Images in each set belong to the same identity. Since images in a set can be complementary, they hopefully lead to higher accuracy in practical applications. However, the quality of each sample cannot be guaranteed, and samples with poor quality will hurt the metric. In this paper, the quality aware network (QAN) is proposed to confront this problem, where the quality of each sample can be automatically learned although such information is not explicitly provided in the training stage. The network has two branches, where the first branch extracts appearance feature embedding for each sample and the other branch predicts quality score for each sample. Features and quality scores of all samples in a set are then aggregated to generate the final feature embedding. We show that the two branches can be trained in an end-to-end manner given only the set-level identity annotation. Analysis on gradient spread of this mechanism indicates that the quality learned by the network is beneficial to set-to-set recognition and simplifies the distribution that the network needs to fit. Experiments on both face verification and person re-identification show advantages of the proposed QAN. The source code and network structure can be downloaded at GitHub.", "Video-based person re-identification matches video clips of people across non-overlapping cameras. Most existing methods tackle this problem by encoding each video frame in its entirety and computing an aggregate representation across all frames. In practice, people are often partially occluded, which can corrupt the extracted features. Instead, we propose a new spatiotemporal attention model that automatically discovers a diverse set of distinctive body parts. This allows useful information to be extracted from all frames without succumbing to occlusions and misalignments. 
The network learns multiple spatial attention models and employs a diversity regularization term to ensure multiple models do not discover the same body part. Features extracted from local image regions are organized by spatial attention model and are combined using temporal attention. As a result, the network learns latent representations of the face, torso and other body parts using the best available image patches from the entire video sequence. Extensive evaluations on three datasets show that our framework outperforms the state-of-the-art approaches by large margins on multiple metrics.", "Person Re-Identification (person re-id) is a crucial task due to its applications in visual surveillance and human-computer interaction. In this work, we present a novel joint Spatial and Temporal Attention Pooling Network (ASTPN) for video-based person re-identification, which enables the feature extractor to be aware of the current input video sequences, in a way that interdependency from the matching items can directly influence the computation of each other's representation. Specifically, the spatial pooling layer is able to select regions from each frame, while the attention temporal pooling performed can select informative frames over the sequence, both pooling guided by the information from distance matching. Experiments are conducted on the iLIDS-VID, PRID-2011 and MARS datasets and the results demonstrate that this approach outperforms existing state-of-the-art methods. We also analyze how the joint pooling in both dimensions can boost the person re-id performance more effectively than using either of them separately." ] }
1907.08427
2964016487
Video person re-identification (re-ID) plays an important role in surveillance video analysis. However, the performance of video re-ID degenerates severely under partial occlusion. In this paper, we propose a novel network, called Spatio-Temporal Completion network (STCnet), to explicitly handle the partial occlusion problem. Different from most previous works that discard the occluded frames, STCnet can recover the appearance of the occluded parts. For one thing, the spatial structure of a pedestrian frame can be used to predict the occluded body parts from the unoccluded body parts of this frame. For another, the temporal patterns of a pedestrian sequence provide important clues to generate the contents of occluded parts. With the spatio-temporal information, STCnet can recover the appearance for the occluded parts, which could be leveraged with those unoccluded parts for more accurate video re-ID. By combining a re-ID network with STCnet, a video re-ID framework robust to partial occlusion (VRSTC) is proposed. Experiments on three challenging video re-ID databases demonstrate that the proposed approach outperforms the state-of-the-art.
Image completion aims to fill missing or masked regions in images with plausibly synthesized contents. It has many applications in photo editing, texture synthesis, and computational photography. Early works @cite_32 @cite_14 attempted to solve the problem by matching and copying background patches into the missing regions. Recently, deep learning approaches based on the Generative Adversarial Network (GAN) @cite_16 have emerged as a promising paradigm for image completion. Pathak et al. @cite_22 proposed the Context Encoder, which generates the contents of an arbitrary image region conditioned on its surroundings. It was trained with a pixel-wise reconstruction loss and an adversarial loss, which produced sharper results than training with the reconstruction loss alone. Iizuka et al. @cite_28 improved on @cite_22 by using dilated convolutions @cite_36 to handle arbitrary resolutions. In @cite_28 , global and local discriminators were introduced as adversarial losses: the global discriminator pursues global consistency of the input image, while the local discriminator encourages the generated parts to be valid. Our proposed STCnet builds on @cite_28 and extends it to exploit the temporal information of video through the proposed temporal attention module. In addition, STCnet employs a guider sub-network with a re-ID cross-entropy loss to preserve the identities of the generated images.
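As a rough illustration of the Context-Encoder-style objective described above (pixel-wise reconstruction plus an adversarial term), consider the following sketch; the generator G, the discriminator D, and the weighting lam_adv are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch, assuming a generator G and a discriminator D defined elsewhere:
# completion loss = masked L2 reconstruction + adversarial term (Context-Encoder style).
import torch
import torch.nn.functional as F

def completion_loss(G, D, img, mask, lam_adv=0.001):
    # mask is 1 inside the missing region and 0 elsewhere (same shape as img)
    corrupted = img * (1 - mask)                     # remove the region to complete
    completed = G(corrupted)                         # generator fills the hole
    rec = F.mse_loss(completed * mask, img * mask)   # pixel-wise reconstruction loss
    logits = D(completed)                            # discriminator judges the result
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return rec + lam_adv * adv                       # adversarial term sharpens outputs
```

A small lam_adv keeps training stable while still penalizing blurry completions, which matches the observation above that reconstruction alone produces overly smooth results.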
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_28", "@cite_36", "@cite_32", "@cite_16" ], "mid": [ "1993120651", "2963420272", "2738588019", "2286929393", "2171011251", "2099471712" ], "abstract": [ "This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. This one simple algorithm forms the basis for a variety of tools -- image retargeting, completion and reshuffling -- that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods.", "We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders – a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.", "We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling-in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. 
The image completion network is then trained to fool both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with the patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the images of objects with familiar and highly specific structures, such as faces.", "State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.", "What can you do with a million images? In this paper we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data-driven, requiring no annotations or labelling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of results for each input image and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
1907.08375
2962785319
Unsupervised domain adaptation for classification tasks has achieved great progress in leveraging the knowledge in a labeled (source) domain to improve the task performance in an unlabeled (target) domain by mitigating the effect of distribution discrepancy. However, most existing methods can only handle unsupervised closed set domain adaptation (UCSDA), where the source and target domains share the same label set. In this paper, we target a more challenging but realistic setting: unsupervised open set domain adaptation (UOSDA), where the target domain has unknown classes that the source domain does not have. This study is the first to give the generalization bound of open set domain adaptation through theoretically investigating the risk of the target classifier on the unknown classes. The proposed generalization bound for open set domain adaptation has a special term, namely open set difference, which reflects the risk of the target classifier on unknown classes. According to this generalization bound, we propose a novel and theoretically guided unsupervised open set domain adaptation method: Distribution Alignment with Open Difference (DAOD), which is based on the structural risk minimization principle and open set difference regularization. The experiments on several benchmark datasets show the superior performance of the proposed UOSDA method compared with the state-of-the-art methods in the literature.
Ben-David et al. @cite_23 proposed generalization bounds for closed set domain adaptation. The bound shows that the performance of the target classifier depends on the performance of the source classifier and on the discrepancy between the source and target domains. Many UCSDA methods @cite_26 @cite_11 @cite_3 have been proposed based on this theoretical bound and attempt to minimize the discrepancy between the domains. We roughly separate these methods into two categories: feature matching and instance reweighting.
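For reference, the closed set bound of @cite_23 is commonly stated as follows; the rendering and notation below are ours, not a quotation from the cited paper.

```latex
% Closed set generalization bound (Ben-David et al.), commonly stated form:
% for every hypothesis h in the hypothesis class H,
\varepsilon_T(h) \le \varepsilon_S(h)
  + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)
  + \lambda ,
% where \varepsilon_S and \varepsilon_T are the source and target risks,
% d_{H\Delta H} measures the discrepancy between the two domains, and
% \lambda is the risk of the ideal joint hypothesis on both domains.
```

The second term is exactly what the feature matching and instance reweighting methods below try to shrink.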
{ "cite_N": [ "@cite_26", "@cite_3", "@cite_23", "@cite_11" ], "mid": [ "2057266281", "2888851832", "2131953535", "2342085406" ], "abstract": [ "Visual domain adaptation, which learns an accurate classifier for a new domain using labeled images from an old domain, has shown promising value in computer vision yet still been a challenging problem. Most prior works have explored two learning strategies independently for domain adaptation: feature matching and instance reweighting. In this paper, we show that both strategies are important and inevitable when the domain difference is substantially large. We therefore put forward a novel Transfer Joint Matching (TJM) approach to model them in a unified optimization problem. Specifically, TJM aims to reduce the domain difference by jointly matching the features and reweighting the instances across domains in a principled dimensionality reduction procedure, and construct new feature representation that is invariant to both the distribution difference and the irrelevant instances. Comprehensive experimental results verify that TJM can significantly outperform competitive methods for cross-domain image recognition problems.", "In most domain adaption approaches, all features are used for domain adaption. However, often, not every feature is beneficial for domain adaption. In such cases, incorrectly involving all features might cause the performance to degrade. In other words, to make the model trained on the source domain work well on the target domain, it is desirable to find invariant features for domain adaption rather than using all features. However, invariant features across domains may lie in a higher order space, instead of in the original feature space. Moreover, the discriminative ability of some invariant features such as shared background information is weak, and needs to be further filtered. Therefore, in this paper, we propose a novel domain adaption algorithm based on an explicit feature map and feature selection. The data are first represented by a kernel-induced explicit feature map, such that high-order invariant features can be revealed. Then, by minimizing the marginal distribution difference, conditional distribution difference, and the model error, the invariant discriminative features are effectively selected. This problem is NP-hard to be solved, and we propose to relax it and solve it by a cutting plane algorithm. Experimental results on six real-world benchmarks have demonstrated the effectiveness and efficiency of the proposed algorithm, which outperforms many state-of-the-art domain adaption approaches.", "Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. In many situations, though, we have labeled training data for a source domain, and we wish to learn a classifier which performs well on a target domain with a different distribution. Under what conditions can we adapt a classifier trained on the source domain for use in the target domain? Intuitively, a good feature representation is a crucial factor in the success of domain adaptation. We formalize this intuition theoretically with a generalization bound for domain adaption. Our theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model. 
It also points toward a promising new model for domain adaptation: one which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set.", "There are plenty of classification methods that perform well when training and testing data are drawn from the same distribution. However, in real applications, this condition may be violated, which causes degradation of classification accuracy. Domain adaptation is an effective approach to address this problem. In this paper, we propose a general domain adaptation framework from the perspective of prediction reweighting, from which a novel approach is derived. Different from the major domain adaptation methods, our idea is to reweight predictions of the training classifier on testing data according to their signed distance to the domain separator, which is a classifier that distinguishes training data (from source domain) and testing data (from target domain). We then propagate the labels of target instances with larger weights to ones with smaller weights by introducing a manifold regularization method. It can be proved that our reweighting scheme effectively brings the source and target domains closer to each other in an appropriate sense, such that classification in target domain becomes easier. The proposed method can be implemented efficiently by a simple two-stage algorithm, and the target classifier has a closed-form solution. The effectiveness of our approach is verified by the experiments on artificial datasets and two standard benchmarks, a visual object recognition task and a cross-domain sentiment analysis of text. Experimental results demonstrate that our method is competitive with the state-of-the-art domain adaptation algorithms." ] }
1907.08375
2962785319
Unsupervised domain adaptation for classification tasks has achieved great progress in leveraging the knowledge in a labeled (source) domain to improve the task performance in an unlabeled (target) domain by mitigating the effect of distribution discrepancy. However, most existing methods can only handle unsupervised closed set domain adaptation (UCSDA), where the source and target domains share the same label set. In this paper, we target a more challenging but realistic setting: unsupervised open set domain adaptation (UOSDA), where the target domain has unknown classes that the source domain does not have. This study is the first to give the generalization bound of open set domain adaptation through theoretically investigating the risk of the target classifier on the unknown classes. The proposed generalization bound for open set domain adaptation has a special term, namely open set difference, which reflects the risk of the target classifier on unknown classes. According to this generalization bound, we propose a novel and theoretically guided unsupervised open set domain adaptation method: Distribution Alignment with Open Difference (DAOD), which is based on the structural risk minimization principle and open set difference regularization. The experiments on several benchmark datasets show the superior performance of the proposed UOSDA method compared with the state-of-the-art methods in the literature.
Feature matching aims to reduce the distribution discrepancy by learning a new feature representation. Transfer Component Analysis (TCA) @cite_35 learns a new feature space to match distributions by employing the maximum mean discrepancy (MMD) @cite_29 . Joint Distribution Adaptation (JDA) @cite_14 improves TCA by jointly matching marginal distributions and conditional distributions. Adaptation Regularization based Transfer Learning (ARTL) @cite_20 adds a manifold regularization term @cite_18 to learn the geometric relations between domains while matching distributions. Joint Geometrical and Statistical Alignment (JGSA) @cite_6 not only considers the distribution discrepancy but also matches the geometric shift. Recent advances show that deep networks can be successfully applied to closed set domain adaptation tasks. The Deep Adaptation Network (DAN) @cite_4 considers three adaptation layers for matching distributions and applies the multiple kernel variant of MMD (MK-MMD) @cite_21 to adapt deep representations. Wasserstein Distance Guided Representation Learning (WDGRL) @cite_17 minimizes the distribution discrepancy by employing the Wasserstein distance in neural networks.
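Since several of these methods minimize the MMD between domains, a tiny self-contained sketch of the empirical (biased) squared MMD with an RBF kernel may help; the bandwidth gamma and the toy data are arbitrary choices of ours, not values from the cited papers.

```python
# Minimal sketch: empirical (biased) squared MMD with an RBF kernel,
# the statistic that TCA/JDA/DAN-style feature matching methods minimize.
import numpy as np

def rbf(a, b, gamma=1.0):
    # pairwise RBF kernel matrix between rows of a and rows of b
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(xs, xt, gamma=1.0):
    # xs: (ns, d) source features, xt: (nt, d) target features
    return (rbf(xs, xs, gamma).mean()
            - 2 * rbf(xs, xt, gamma).mean()
            + rbf(xt, xt, gamma).mean())

# A small mean shift between two Gaussians yields a clearly positive estimate.
rng = np.random.default_rng(0)
print(mmd2(rng.normal(0, 1, (100, 5)), rng.normal(0.5, 1, (100, 5))))
```

Feature matching methods learn a projection of both domains under which this quantity is small while task-relevant structure is preserved.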
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_14", "@cite_4", "@cite_29", "@cite_21", "@cite_6", "@cite_20", "@cite_17" ], "mid": [ "2115403315", "2104290444", "2096943734", "2159291411", "2212660284", "2110097068", "2616287544", "2100664256", "2963777311" ], "abstract": [ "Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean miscrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.", "We propose a family of learning algorithms based on a new form of regularization that allows us to exploit the geometry of the marginal distribution. We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. Some transductive graph learning algorithms and standard methods including support vector machines and regularized least squares can be obtained as special cases. We use properties of reproducing kernel Hilbert spaces to prove new Representer theorems that provide theoretical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we obtain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semi-supervised algorithms are able to use unlabeled data effectively. Finally we have a brief discussion of unsupervised and fully supervised learning within our general framework.", "Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. 
In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.", "Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multikernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.", "We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD).We present two distribution free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.", "Given samples from distributions p and q, a two-sample test determines whether to reject the null hypothesis that p = q, based on the value of a test statistic measuring the distance between the samples. One choice of test statistic is the maximum mean discrepancy (MMD), which is a distance between embeddings of the probability distributions in a reproducing kernel Hilbert space. The kernel used in obtaining these embeddings is critical in ensuring the test has high power, and correctly distinguishes unlike distributions with high probability. A means of parameter selection for the two-sample test based on the MMD is proposed. 
For a given test level (an upper bound on the probability of making a Type I error), the kernel is chosen so as to maximize the test power, and minimize the probability of making a Type II error. The test statistic, test threshold, and optimization over the kernel parameters are obtained with cost linear in the sample size. These properties make the kernel selection and test procedures suited to data streams, where the observations cannot all be stored in memory. In experiments, the new kernel selection approach yields a more powerful test than earlier kernel selection heuristics.", "This paper presents a novel unsupervised domain adaptation method for cross-domain visual recognition. We propose a unified framework that reduces the shift between domains both statistically and geometrically, referred to as Joint Geometrical and Statistical Alignment (JGSA). Specifically, we learn two coupled projections that project the source domain and target domain data into low-dimensional subspaces where the geometrical shift and distribution shift are reduced simultaneously. The objective function can be solved efficiently in a closed form. Extensive experiments have verified that the proposed method significantly outperforms several state-of-the-art domain adaptation methods on a synthetic dataset and three different real world cross-domain visual recognition tasks.", "Domain transfer learning, which learns a target classifier using labeled data from a different distribution, has shown promising value in knowledge discovery yet still been a challenging problem. Most previous works designed adaptive classifiers by exploring two learning strategies independently: distribution adaptation and label propagation. In this paper, we propose a novel transfer learning framework, referred to as Adaptation Regularization based Transfer Learning (ARTL), to model them in a unified way based on the structural risk minimization principle and the regularization theory. Specifically, ARTL learns the adaptive classifier by simultaneously optimizing the structural risk functional, the joint distribution matching between domains, and the manifold consistency underlying marginal distribution. Based on the framework, we propose two novel methods using Regularized Least Squares (RLS) and Support Vector Machines (SVMs), respectively, and use the Representer theorem in reproducing kernel Hilbert space to derive corresponding solutions. Comprehensive experiments verify that ARTL can significantly outperform state-of-the-art learning methods on several public text and image datasets.", "" ] }
1907.08375
2962785319
Unsupervised domain adaptation for classification tasks has achieved great progress in leveraging the knowledge in a labeled (source) domain to improve the task performance in an unlabeled (target) domain by mitigating the effect of distribution discrepancy. However, most existing methods can only handle unsupervised closed set domain adaptation (UCSDA), where the source and target domains share the same label set. In this paper, we target a more challenging but realistic setting: unsupervised open set domain adaptation (UOSDA), where the target domain has unknown classes that the source domain does not have. This study is the first to give the generalization bound of open set domain adaptation through theoretically investigating the risk of the target classifier on the unknown classes. The proposed generalization bound for open set domain adaptation has a special term, namely open set difference, which reflects the risk of the target classifier on unknown classes. According to this generalization bound, we propose a novel and theoretically guided unsupervised open set domain adaptation method: Distribution Alignment with Open Difference (DAOD), which is based on the structural risk minimization principle and open set difference regularization. The experiments on several benchmark datasets show the superior performance of the proposed UOSDA method compared with the state-of-the-art methods in the literature.
The instance reweighting method reduces the distribution discrepancy by weighting the source samples. Kernel Mean Matching (KMM) @cite_25 defines the weights as the density ratio between the source domain and the target domain. @cite_22 provided a theoretical analysis of the KMM estimator under covariate shift. However, when the domain discrepancy is substantially large, many effective source samples are down-weighted, resulting in a loss of useful information.
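The KMM idea can be written compactly as the following optimization problem; this is our sketch of the standard formulation, where B and \epsilon denote the usual box and normalization constants rather than values from the cited work.

```latex
% KMM objective (sketch): choose source weights \beta so that the weighted
% source mean embedding matches the target mean embedding in an RKHS:
\min_{\beta}\;\Big\| \tfrac{1}{n_s} \sum_{i=1}^{n_s} \beta_i\, \phi(x_i^{s})
  - \tfrac{1}{n_t} \sum_{j=1}^{n_t} \phi(x_j^{t}) \Big\|_{\mathcal{H}}^{2}
\quad \text{s.t.}\;\; \beta_i \in [0, B],\;\;
  \Big| \tfrac{1}{n_s} \sum_{i} \beta_i - 1 \Big| \le \epsilon .
% \phi is the kernel feature map; the box constraint limits how much any
% single source sample can dominate, and the second constraint keeps the
% weights close to a proper reweighting of the source distribution.
```

The failure mode noted above is visible here: under a large discrepancy, many source samples receive weights near zero, so their label information is effectively discarded.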
{ "cite_N": [ "@cite_22", "@cite_25" ], "mid": [ "2111272908", "2112483442" ], "abstract": [ "In real supervised learning scenarios, it is not uncommon that the training and test sample follow different probability distributions, thus rendering the necessity to correct the sampling bias. Focusing on a particular covariate shift problem, we derive high probability confidence bounds for the kernel mean matching (KMM) estimator, whose convergence rate turns out to depend on some regularity measure of the regression function and also on some capacity measure of the kernel. By comparing KMM with the natural plug-in estimator, we establish the superiority of the former hence provide concrete evidence understanding to the effectiveness of KMM under covariate shift.", "We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice." ] }
1907.08375
2962785319
Unsupervised domain adaptation for classification tasks has achieved great progress in leveraging the knowledge in a labeled (source) domain to improve the task performance in an unlabeled (target) domain by mitigating the effect of distribution discrepancy. However, most existing methods can only handle unsupervised closed set domain adaptation (UCSDA), where the source and target domains share the same label set. In this paper, we target a more challenging but realistic setting: unsupervised open set domain adaptation (UOSDA), where the target domain has unknown classes that the source domain does not have. This study is the first to give the generalization bound of open set domain adaptation through theoretically investigating the risk of the target classifier on the unknown classes. The proposed generalization bound for open set domain adaptation has a special term, namely open set difference, which reflects the risk of the target classifier on unknown classes. According to this generalization bound, we propose a novel and theoretically guided unsupervised open set domain adaptation method: Distribution Alignment with Open Difference (DAOD), which is based on the structural risk minimization principle and open set difference regularization. The experiments on several benchmark datasets show the superior performance of the proposed UOSDA method compared with the state-of-the-art methods in the literature.
When the source domain and the target domain share the same distribution for the known classes, open set domain adaptation becomes open set recognition. A common method for handling open set recognition relies on threshold-based classification strategies @cite_30 : establishing a threshold on the similarity score means rejecting samples that are distant from the training samples. The Open-Set Nearest Neighbor (OSNN) classifier @cite_36 recognizes whether a sample belongs to an unknown class by comparing a threshold with the ratio of the similarity scores to the two most similar classes of the sample. Another trend relies on modifying support vector machines (SVMs) @cite_5 @cite_39 @cite_16 . The open set SVM (OSVM) @cite_16 uses a multi-class SVM as a basis to learn an unnormalized posterior probability, which is used to reject unknown samples.
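A toy sketch of the OSNN-style ratio test follows; for brevity it uses one prototype per class instead of actual nearest neighbors from the two closest classes, so it is a simplification for illustration rather than the published algorithm, and the threshold value is arbitrary.

```python
# Minimal sketch of an OSNN-style ratio test: compare the distances to the
# two most similar classes; if they are too close, reject the sample as unknown.
import numpy as np

def osnn_predict(x, class_centers, labels, threshold=0.8):
    # class_centers: (k, d), one prototype per known class (a simplification;
    # OSNN proper uses nearest neighbors from the two closest classes)
    d = np.linalg.norm(class_centers - x, axis=1)
    i, j = np.argsort(d)[:2]         # indices of the two most similar classes
    ratio = d[i] / d[j]              # in [0, 1]; near 1 means ambiguous
    return labels[i] if ratio <= threshold else "unknown"

centers = np.array([[0.0, 0.0], [5.0, 5.0]])
print(osnn_predict(np.array([0.2, -0.1]), centers, ["A", "B"]))  # -> "A"
print(osnn_predict(np.array([2.5, 2.5]), centers, ["A", "B"]))   # -> "unknown"
```

The ratio test rejects samples that lie roughly equidistant from the known classes, which is where unknown-class samples tend to fall.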
{ "cite_N": [ "@cite_30", "@cite_36", "@cite_39", "@cite_5", "@cite_16" ], "mid": [ "1520795124", "2248269543", "2785496677", "2018459374", "1032927584" ], "abstract": [ "The heart of designing and conducting evaluations and is the experimental protocol. The protocol states how an evaluation is to be conducted and how the results are to be computed. In this chapter we concentrate on describing the FERET and FRVT 2002 protocols. The FRVT 2002 evaluation protocol is based in the FERET evaluation protocols. The FRVT 2002 protocol is designed for biometric evaluations in general, not just for evaluating face recognition algorithms. These two evaluation protocol served as a basis for the FRVT 2006 and MBE 2010 evaluations.", "In this paper, we propose a novel multiclass classifier for the open-set recognition scenario. This scenario is the one in which there are no a priori training samples for some classes that might appear during testing. Usually, many applications are inherently open set. Consequently, successful closed-set solutions in the literature are not always suitable for real-world recognition problems. The proposed open-set classifier extends upon the Nearest-Neighbor (NN) classifier. Nearest neighbors are simple, parameter independent, multiclass, and widely used for closed-set problems. The proposed Open-Set NN (OSNN) method incorporates the ability of recognizing samples belonging to classes that are unknown at training time, being suitable for open-set recognition. In addition, we explore evaluation measures for open-set problems, properly measuring the resilience of methods to unknown classes during testing. For validation, we consider large freely-available benchmarks with different open-set recognition regimes and demonstrate that the proposed OSNN significantly outperforms their counterparts in the literature.", "Face Recognition is one of the most relevant problems in computer vision as we consider its importance to areas such as surveillance, forensics and psychology. Furthermore, open-set face recognition has a large room for improvement since only few researchers have focused on it. In fact, a real-world recognition system has to cope with several unseen individuals and determine whether a given face image is associated with a subject registered in a gallery of known individuals. In this work, we combine hashing functions and classification methods to estimate when probe samples are known (i.e., belong to the gallery set). We carry out experiments with partial least squares and neural networks and show how response value histograms tend to behave for known and unknown individuals whenever we test a probe sample. In addition, we conduct experiments on FRGCv1, PubFig83 and VGGFace to show that our method continues effective regardless of the dataset difficulty.", "Real-world tasks in computer vision often touch upon open set recognition: multi-class recognition with incomplete knowledge of the world and many unknown inputs. Recent work on this problem has proposed a model incorporating an open space risk term to account for the space beyond the reasonable support of known classes. This paper extends the general idea of open space risk limiting classification to accommodate non-linear classifiers in a multiclass setting. We introduce a new open set recognition model called compact abating probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. 
We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical extreme value theory for score calibration with one-class and binary support vector machines. Our experiments show that the W-SVM is significantly better for open set object detection and OCR problems when compared to the state-of-the-art for the same tasks.", "The perceived success of recent visual recognition approaches has largely been derived from their performance on classification tasks, where all possible classes are known at training time. But what about open set problems, where unknown classes appear at test time? Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under an assumption of incomplete class knowledge. In this paper, we formulate the problem as one of modeling positive training data at the decision boundary, where we can invoke the statistical extreme value theory. A new algorithm called the P_I-SVM is introduced for estimating the unnormalized posterior probability of class inclusion." ] }
1907.08375
2962785319
Unsupervised domain adaptation for classification tasks has achieved great progress in leveraging the knowledge in a labeled (source) domain to improve the task performance in an unlabeled (target) domain by mitigating the effect of distribution discrepancy. However, most existing methods can only handle unsupervised closed set domain adaptation (UCSDA), where the source and target domains share the same label set. In this paper, we target a more challenging but realistic setting: unsupervised open set domain adaptation (UOSDA), where the target domain has unknown classes that the source domain does not have. This study is the first to give the generalization bound of open set domain adaptation through theoretically investigating the risk of the target classifier on the unknown classes. The proposed generalization bound for open set domain adaptation has a special term, namely open set difference, which reflects the risk of the target classifier on unknown classes. According to this generalization bound, we propose a novel and theoretically guided unsupervised open set domain adaptation method: Distribution Alignment with Open Difference (DAOD), which is based on the structural risk minimization principle and open set difference regularization. The experiments on several benchmark datasets show the superior performance of the proposed UOSDA method compared with the state-of-the-art methods in the literature.
The open set domain adaptation problem was first proposed in @cite_41 along with the Assign-and-Transform-Iteratively (ATI- @math ) method. Using the @math distance between each target sample and the center of each source class, ATI- @math constructs a constrained integer program to recognize unknown target samples @math , and then learns a linear transformation to match the source domain and the target domain excluding @math . However, ATI- @math requires unknown source samples, which are unavailable in our setting. Recently, a deep learning method, Open Set Back Propagation (OSBP) @cite_19 , has been proposed. OSBP relies on an adversarial neural network and a binary cross-entropy loss to learn the probability of target samples being unknown, and then uses the estimated probability to separate unknown target samples. However, we have not found any paper that considers a generalization bound for open set domain adaptation. In this paper, we fill this gap in open set domain adaptation theory.
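To illustrate the first step of the ATI-style pipeline, here is a deliberately simplified sketch that assigns each target sample to the nearest source class center and flags distant samples as candidate unknowns; ATI- @math itself solves a constrained integer program jointly, so this greedy variant with a hand-set rejection distance is only an approximation for illustration.

```python
# Simplified sketch in the spirit of ATI's assignment step (not the authors' code):
# assign target samples to the nearest source class center; samples that are far
# from every source class are flagged as candidate unknown-class samples.
import numpy as np

def assign_targets(xt, source_centers, reject_dist):
    # xt: (n, d) target features; source_centers: (k, d) per-class source means
    d = np.linalg.norm(xt[:, None, :] - source_centers[None, :, :], axis=2)
    nearest = d.argmin(axis=1)               # tentative known-class assignment
    known = d.min(axis=1) <= reject_dist     # False => candidate unknown sample
    return nearest, known
```

Only the samples flagged as known would then be used to fit the transformation that aligns the two domains, mirroring the exclusion of @math described above.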
{ "cite_N": [ "@cite_41", "@cite_19" ], "mid": [ "2779610669", "2798593490" ], "abstract": [ "When the training and the test data belong to different domains, the accuracy of an object classifier is significantly reduced. Therefore, several algorithms have been proposed in the last years to diminish the so called domain shift between datasets. However, all available evaluation protocols for domain adaptation describe a closed set recognition task, where both domains, namely source and target, contain exactly the same object classes. In this work, we also explore the field of domain adaptation in open sets, which is a more realistic scenario where only a few categories of interest are shared between source and target data. Therefore, we propose a method that fits in both closed and open set scenarios. The approach learns a mapping from the source to the target domain by jointly solving an assignment problem that labels those target instances that potentially belong to the categories of interest present in the source dataset. A thorough evaluation shows that our approach outperforms the state-of-the-art.", "Numerous algorithms have been proposed for transferring knowledge from a label-rich domain (source) to a label-scarce domain (target). Most of them are proposed for closed-set scenario, where the source and the target domain completely share the class of their samples. However, in practice, a target domain can contain samples of classes that are not shared by the source domain. We call such classes the “unknown class” and algorithms that work well in the open set situation are very practical. However, most existing distribution matching methods for domain adaptation do not work well in this setting because unknown target samples should not be aligned with the source. In this paper, we propose a method for an open set domain adaptation scenario, which utilizes adversarial training. This approach allows to extract features that separate unknown target from known target samples. During training, we assign two options to the feature generator: aligning target samples with source known ones or rejecting them as unknown target ones. Our method was extensively evaluated and outperformed other methods with a large margin in most settings." ] }
1907.08307
2963457350
The term Neural Architecture Search (NAS) refers to the automatic optimization of network architectures for a new, previously unknown task. Since testing an architecture is computationally very expensive, many optimizers need days or even weeks to find suitable architectures. However, this search time can be significantly reduced if knowledge from previous searches on different tasks is reused. In this work, we propose a generally applicable framework that introduces only minor changes to existing optimizers to leverage this feature. As an example, we select an existing optimizer and demonstrate the complexity of the integration of the framework as well as its impact. In experiments on CIFAR-10 and CIFAR-100, we observe a reduction in the search time from 200 to only 6 GPU days, a speed-up by a factor of 33. In addition, we observe new records of 1.99% and 14.06% for NAS optimizers on the CIFAR benchmarks, respectively. In a separate study, we analyze the impact of the amount of source and target data. Empirically, we demonstrate that the proposed framework generally gives better results and, in the worst case, is just as good as the unmodified optimizer.
Neural Architecture Search (NAS), the structural optimization of neural networks, is solved with a variety of optimization techniques. These include reinforcement learning @cite_3 @cite_0 @cite_4 @cite_13 @cite_2 @cite_27 @cite_11 , evolutionary algorithms @cite_17 @cite_33 @cite_34 @cite_15 , and surrogate model-based optimization @cite_21 @cite_16 . These techniques have made great advancements with the idea of sharing weights across different architectures which are sampled during the search process @cite_18 @cite_22 @cite_32 @cite_23 @cite_30 instead of training them from scratch. For a detailed overview we refer to a recent survey @cite_12 .
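The weight-sharing idea mentioned above can be illustrated with a minimal sketch (not any specific optimizer from the cited list): candidate operations live in one shared module that is trained once, and each sampled architecture merely selects among them instead of being trained from scratch; the cell structure and operation set here are illustrative assumptions.

```python
# Minimal sketch of weight sharing in NAS: sampled architectures reuse one
# shared set of candidate operations rather than training from scratch.
import random
import torch
import torch.nn as nn

class SharedCell(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([             # shared candidate operations
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])

    def forward(self, x, op_idx: int):
        return self.ops[op_idx](x)             # an architecture = chosen indices

cell = SharedCell(16)
arch = random.randrange(len(cell.ops))         # sample an architecture
out = cell(torch.randn(1, 16, 8, 8), arch)     # evaluate it with shared weights
```

Because every sampled architecture reads and updates the same parameters, evaluating a candidate costs one forward pass instead of a full training run, which is the source of the large speed-ups reported by the weight-sharing methods cited above.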
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_3", "@cite_2", "@cite_15", "@cite_18", "@cite_4", "@cite_21", "@cite_23", "@cite_17", "@cite_32", "@cite_27", "@cite_16", "@cite_34", "@cite_12", "@cite_33", "@cite_0", "@cite_13", "@cite_11" ], "mid": [ "2964259004", "2885820039", "2963374479", "2796265726", "2914037680", "2962746461", "2963536136", "2963821229", "2962847160", "2594529350", "2810075754", "2963473542", "", "2904817185", "2888429796", "2963778169", "2556833785", "2964081807", "2914766226" ], "abstract": [ "", "", "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.", "Convolutional neural networks have gained a remarkable success in computer vision. However, most usable network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we provide a block-wise network generation pipeline called BlockQNN which automatically builds high-performance networks using the Q-Learning paradigm with epsilon-greedy exploration strategy. The optimal network block is constructed by the learning agent which is trained sequentially to choose component layers. We stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early stop strategy. The block-wise generation brings unique advantages: (1) it performs competitive results in comparison to the hand-crafted state-of-the-art networks on image classification, additionally, the best network generated by BlockQNN achieves 3.54 top-1 error rate on CIFAR-10 which beats all existing auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of the search space in designing networks which only spends 3 days with 32 GPUs, and (3) moreover, it has strong generalizability that the network built on CIFAR also performs well on a larger-scale ImageNet dataset.", "The design of convolutional neural network architectures for a new image data set is a laborious and computational expensive task which requires expert knowledge. We propose a novel neuro-evolutionary technique to solve this problem without human interference. Our method assumes that a convolutional neural network architecture is a sequence of neuro-cells and keeps mutating them using function-preserving operations. 
This novel combination of approaches has several advantages. We define the network architecture by a sequence of repeating neuro-cells which reduces the search space complexity. Furthermore, these cells are possibly transferable and can be used in order to arbitrarily extend the complexity of the network. Mutations based on function-preserving operations guarantee better parameter initialization than random initialization such that less training time is required per network architecture. Our proposed method finds within 12 GPU hours neural network architectures that can achieve a classification error of about 4 and 24 with only 5.5 and 6.5 million parameters on CIFAR-10 and CIFAR-100, respectively. In comparison to competitor approaches, our method provides similar competitive results but requires orders of magnitudes less search time and in many cases less network parameters.", "", "", "We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet.", "", "Neural networks have proven effective at solving difficult problems but designing their architectures can be challenging, even for image classification problems alone. Our goal is to minimize human participation, so we employ evolutionary algorithms to discover such networks automatically. Despite significant computational requirements, we show that it is now possible to evolve models with accuracies within the range of those published in the last year. Specifically, we employ simple evolutionary techniques at unprecedented scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting from trivial initial conditions and reaching accuracies of 94.6 (95.6 for ensemble) and 77.0 , respectively. To do this, we use novel and intuitive mutation operators that navigate large search spaces; we stress that no human participation is required once evolution starts and that the output is a fully-trained model. Throughout this work, we place special emphasis on the repeatability of results, the variability in the outcomes and the computational requirements.", "", "", "", "", "Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, no matter based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method to automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) An encoder embeds maps neural network architectures into a continuous space. (2) A predictor takes the continuous representation of a network as input and predicts its accuracy. 
(3) A decoder maps a continuous representation of a network back to its architecture. The performance predictor and the encoder enable us to perform optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy. Such a better embedding is then decoded to a network by the decoder. Experiments show that the architecture discovered by our method is very competitive for image classification task on CIFAR-10 and language modeling task on PTB, outperforming or on par with the best results of previous architecture search methods. Furthermore, the computational resource is 10 times fewer than typical methods based on RL and EA.", "We explore efficient neural architecture search methods and present a simple yet powerful evolutionary algorithm that can discover new architectures achieving state of the art results. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6 on CIFAR-10 and 20.3 when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches and represents the new state of the art for evolutionary strategies on this task. We also present results using random search, achieving 0.3 less top-1 accuracy on CIFAR-10 and 0.1 less on ImageNet whilst reducing the architecture search time from 36 hours down to 1 hour.", "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using @math -learning with an @math -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.", "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. 
In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset.", "The design of neural network architectures for a new data set is a laborious task which requires human deep learning expertise. In order to make deep learning available for a broader audience, automated methods for finding a neural network architecture are vital. Recently proposed methods can already achieve human expert level performances. However, these methods have run times of months or even years of GPU computing time, ignoring hardware constraints as faced by many researchers and companies. We propose the use of Monte Carlo planning in combination with two different UCT (upper confidence bound applied to trees) derivations to search for network architectures. We adapt the UCT algorithm to the needs of network architecture search by proposing two ways of sharing information between different branches of the search tree. In an empirical study we are able to demonstrate that this method is able to find competitive networks for MNIST, SVHN and CIFAR-10 in just a single GPU day. Extending the search time to five GPU days, we are able to outperform man-made architectures and our competitors which consider the same types of layers." ] }
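The cited abstracts above all revolve around searching a discrete space of layer sequences, for instance MetaQNN's Q-learning with an ε-greedy policy. The following is a minimal toy sketch of that idea; the layer vocabulary, the reward function (a stand-in for validation accuracy), and all constants are invented for illustration and do not come from any of the cited systems.

```python
import random

# Toy epsilon-greedy Q-learning over layer choices, in the spirit of the
# MetaQNN-style search described above. `fake_accuracy` is a stand-in for
# actually training and validating a network.

LAYERS = ["conv3x3", "conv5x5", "pool", "fc"]
DEPTH = 4        # fixed-length architectures for simplicity
EPSILON = 0.2    # exploration rate
ALPHA = 0.1      # learning rate

# Q[(position, layer)]: estimated value of placing `layer` at `position`.
Q = {(t, a): 0.0 for t in range(DEPTH) for a in LAYERS}

def fake_accuracy(arch):
    """Invented reward: prefers convolutions early and a final fc layer."""
    score = sum(1.0 for layer in arch[:2] if layer.startswith("conv"))
    score += 1.0 if arch[-1] == "fc" else 0.0
    return score / 3.0 + random.uniform(-0.05, 0.05)

def sample_architecture():
    arch = []
    for t in range(DEPTH):
        if random.random() < EPSILON:                        # explore
            arch.append(random.choice(LAYERS))
        else:                                                # exploit
            arch.append(max(LAYERS, key=lambda a: Q[(t, a)]))
    return arch

for _ in range(500):
    arch = sample_architecture()
    reward = fake_accuracy(arch)
    for t, layer in enumerate(arch):                         # simple MC update
        Q[(t, layer)] += ALPHA * (reward - Q[(t, layer)])

print([max(LAYERS, key=lambda a: Q[(t, a)]) for t in range(DEPTH)])
```

Real systems replace `fake_accuracy` with a full train-and-evaluate cycle, which is exactly why the cited works invest so much effort in reducing the number of architectures that must be evaluated.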
1907.08116
2956997324
Designing fast and reliable distributed consensus protocols is key to enabling mission-critical and real-time control of industrial Internet of Things (IIoT) nodes communicating over wireless links. However, achieving both low latency and high reliability in a consensus protocol at once is a challenging task. The problem is aggravated further under wireless connectivity, which is slower and less reliable than the wired connections presumed in traditional consensus protocols. To tackle this issue, we investigate fundamental relationships between consensus latency and reliability under wireless connectivity, and thereby co-design communication and consensus protocols for low-latency and reliable IIoT systems. Specifically, we propose a novel communication-efficient distributed consensus protocol, termed Random Representative Consensus (R2C), and show its effectiveness under gossip and broadcast communication protocols. To this end, we derive a closed-form end-to-end (E2E) latency expression for R2C that guarantees a target reliability, and compare this with a baseline consensus protocol, referred to as Referendum Consensus (RC).
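The latency–reliability trade-off described in this abstract can be made concrete with a small Monte-Carlo sketch: polling a random subset of representatives (the R2C idea) needs fewer message exchanges than a full referendum, but the achievable reliability depends on how many votes actually arrive. The i.i.d. per-vote delivery model and all numbers below are our simplifying assumptions, not the paper's channel model.

```python
import random

# Estimate the probability that more than half of the polled representatives'
# votes are delivered, assuming each vote independently arrives with
# probability p_deliver. Every delivered vote is assumed correct, so
# "majority delivered" stands in for "consensus reached".

def consensus_reliability(n_reps, p_deliver, trials=20_000):
    ok = 0
    for _ in range(trials):
        delivered = sum(random.random() < p_deliver for _ in range(n_reps))
        ok += delivered > n_reps // 2
    return ok / trials

for n in (5, 15, 45):
    print(f"representatives={n:3d}  est. reliability={consensus_reliability(n, 0.8):.3f}")
```

More representatives push the estimated reliability toward one but increase the number of over-the-air exchanges, which is the tension the paper's closed-form latency analysis quantifies.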
Nonetheless, most of the aforementioned algorithms assume that nodes communicate over fast and reliable wired links. To support large-scale systems, wireless connectivity is mandatory in consensus operations, and its impact on consensus reliability and latency should be carefully examined. On this account, wireless distributed consensus protocols have recently been studied in several works @cite_31 @cite_29 @cite_15 @cite_23 @cite_0 @cite_18 @cite_25 @cite_20 . For instance, a Hashgraph-motivated wireless distributed consensus protocol has been introduced in @cite_31 , in the context of distributed wireless spectrum access applications. For power grid applications, an Ethereum-based smart contract structure and its operation protocol have been studied in @cite_9 .
{ "cite_N": [ "@cite_18", "@cite_15", "@cite_29", "@cite_9", "@cite_0", "@cite_23", "@cite_31", "@cite_25", "@cite_20" ], "mid": [ "", "2922250870", "2883173846", "2908039420", "", "2921088707", "2888550303", "2954103930", "2953611374" ], "abstract": [ "", "While the intersection of blockchain and Industrial Internet of Things (IIoT) has received considerable research interest lately, the conflict between the high resource requirements of blockchain and the generally inadequate performance of IIoT devices has not been well tackled. On one hand, due to the introductions of mathematical concepts, including Public Key Infrastructure, Merkle Hash Tree, and Proof of Work (PoW), deploying blockchain demands huge computing power. On the other hand, full nodes should synchronize massive block data and deal with numerous transactions in peer-to-peer network, whose occupation of storage capacity and bandwidth makes IIoT devices difficult to afford. In this paper, we propose a lightweight blockchain system called LightChain , which is resource-efficient and suitable for power-constrained IIoT scenarios. Specifically, we present a green consensus mechanism named Synergistic Multiple Proof for stimulating the cooperation of IIoT devices, and a lightweight data structure called LightBlock to streamline broadcast content. Furthermore, we design a novel Unrelated Block Offloading Filter to avoid the unlimited growth of ledger without affecting blockchain's traceability. The extensive experiments demonstrate that LightChain can reduce the individual computational cost to 39.32 and speed up the block generation by up to 74.06 . In terms of storage and network usage, the reductions are 43.35 and 90.55 , respectively.", "The emerging blockchain protocols provide a decentralized architecture that is suitable of supporting Internet of Things (IoT) interactions. However, keeping a local copy of the blockchain ledger is infeasible for low-power and memory-constrained devices. For this reason, they are equipped with lightweight software implementations that only download the useful data structures, e.g., state of accounts, from the blockchain network, when they are updated. In this paper, we consider and analyze a novel scheme, implemented by the nodes of the blockchain network, which aggregates the blockchain data in periodic updates and further reduces the communication cost of the connected IoT devices. We show that the aggregation period should be selected based on the channel quality, the offered rate, and the statistics of updates of the useful data structures. The results, obtained for the Ethereum protocol, illustrate the benefits of the aggregation scheme in terms of a reduced duty cycle of the device, particularly for low signal-to-noise ratios, and the overall reduction of the amount of information transmitted in downlink from the wireless base station to the IoT device. A potential application of the proposed scheme is to let the IoT device request more information than actually needed, hence increasing its privacy, while keeping the communication cost constant. In the conclusion, this paper is the first to provide rigorous guidelines for the design of lightweight blockchain protocols with wireless connectivity.", "In the power grid, the Balance Responsible Parties (BRPs) purchase energy based on a forecast of the user consumption. The forecasts are imperfect, and the corrections of their real-time deviations are managed by a System Operator (SO), which charges the BRPs for the procured imbalances. 
Flexible consumers, associated with a BRP, can be involved in a demand response (DR) program to reduce the imbalance costs. However, running the DR program requires the BRP to invest resources in the infrastructure and increases its operating costs. To limit the intervention of BRP, we implement the DR via a blockchain smart contract. Moreover, to reduce the delay of publication of the imbalance price, caused by the inefficient accounting process of the current balancing markets, a second blockchain is adopted at the SO layer, procuring a fast and auditable credit settlements. The feasibility of the proposed architecture is evaluated over an Ethereum blockchain platform. The results show that block chains can enable a high automation of the balancing market, by providing (i) the implementation of aggregators with low operating cost and (ii) the timely and transparent access to the balancing information, thus fostering new business models for the BRPs.", "", "Blockchain technologies have recently come to the forefront of the research and industrial communities as they bring potential benefits for many industries. This is due to their practical capabilities in solving many issues currently inhibiting further advances in various industrial domains. Securely recording and sharing transactional data, establishing automated and efficient supply chain processes, and enhancing transparency across the whole value chain are some examples of these issues. Blockchain offers an effective way to tackle these issues using distributed, shared, secure, and permissioned transactional ledgers. The employment of blockchain technologies and the possibility of applying them in different situations enables many industrial applications through increased efficiency and security; enhanced traceability and transparency; and reduced costs. In this paper, different industrial application domains where the use of blockchain technologies has been proposed are reviewed. This paper explores the opportunities, benefits, and challenges of incorporating blockchain in different industrial applications. Furthermore, the paper attempts to identify the requirements that support the implementation of blockchain for different industrial applications. The review reveals that several opportunities are available for utilizing blockchain in various industrial sectors; however, there are still some challenges to be addressed to achieve better utilization of this technology.", "This paper proposes Consensus-Before-Talk (CBT), a spectrum etiquette architecture leveraged by distributed ledger technology (DLT). In CBT, secondary users’ spectrum access requests reach a consensus in a distributed way, thereby enabling collision-free distributed dynamic spectrum access. To achieve this consensus, the secondary users need to pay for the extra request exchanging delays. Incorporating the consensus delay, the end-to-end latency under CBT is investigated. Both the latency analysis and numerical evaluation validate that the proposed CBT achieves the lower end-to-end latency particularly under severe secondary user traffic, compared to the Listen-Before-Talk (LBT) benchmark scheme.", "The rapid development of blockchain technology and their numerous emerging applications has received huge attention in recent years. The distributed consensus mechanism is the backbone of a blockchain network. It plays a key role in ensuring the network's security, integrity, and performance. 
Most current blockchain networks have been deploying the proof-of-work consensus mechanisms, in which the consensus is reached through intensive mining processes. However, this mechanism has several limitations, e.g., energy inefficiency, delay, and vulnerable to security threats. To overcome these problems, a new consensus mechanism has been developed recently, namely proof of stake, which enables to achieve the consensus via proving the stake ownership. This mechanism is expected to become a cutting-edge technology for future blockchain networks. This paper is dedicated to investigating proof-of-stake mechanisms, from fundamental knowledge to advanced proof-of-stake-based protocols along with performance analysis, e.g., energy consumption, delay, and security, as well as their promising applications, particularly in the field of Internet of Vehicles. The formation of stake pools and their effects on the network stake distribution are also analyzed and simulated. The results show that the ratio between the block reward and the total network stake has a significant impact on the decentralization of the network. Technical challenges and potential solutions are also discussed.", "Internet-of-Things (IoT) companies strive to get feedback from users to improve their products and services. However, traditional surveys cannot reflect the actual conditions of customers' due to the limited questions. Besides, survey results are affected by various subjective factors. In contrast, the recorded usages of IoT devices reflect customers' behaviours more comprehensively and accurately. We design an intelligent system to help IoT device manufacturers to take advantage of customers' data and build a machine learning model to predict customers' requirements and possible consumption behaviours with federated learning (FL) technology. The FL consists of two stages: in the first stage, customers train the initial model using the phone and the edge computing server collaboratively. The mobile edge computing server's high computation power can assist customers' training locally. Customers first collect data from various IoT devices using phones, and then download and train the initial model with their data. During the training, customers first extract features using their mobiles, and then add the Laplacian noise to the extracted features based on differential privacy, a formal and popular notion to quantify privacy. After achieving the local model, customers sign on their models respectively and send them to the blockchain. We use the blockchain to replace the centralized aggregator which belongs to the third party in FL. In the second stage, miners calculate the averaged model using the collected models sent from customers. By the end of the crowdsourcing job, one of the miners, who is selected as the temporary leader, uploads the model to the blockchain. Besides, to attract more customers to participate in the crowdsourcing FL, we design an incentive mechanism, which awards participants with coins that can be used to purchase other services provided by the company." ] }
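One of the cited abstracts surveys proof-of-stake consensus, whose core primitive is selecting a block proposer with probability proportional to stake. The sketch below shows just that primitive; validator names and stakes are invented, and production protocols layer verifiable randomness, slashing, and committee rotation on top.

```python
import random

# Stake-weighted leader selection: each validator is chosen with probability
# proportional to its stake. All names and numbers are illustrative.

stakes = {"alice": 50, "bob": 30, "carol": 15, "dave": 5}

def pick_leader(stakes):
    total = sum(stakes.values())
    r = random.uniform(0, total)
    acc = 0.0
    for node, stake in stakes.items():
        acc += stake
        if r <= acc:
            return node
    return node  # guard against floating-point edge cases

counts = {name: 0 for name in stakes}
for _ in range(10_000):
    counts[pick_leader(stakes)] += 1
print(counts)  # frequencies roughly track the 50/30/15/5 stake split
```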
1907.08362
2963011296
In this paper we consider the following sparse recovery problem. We have query access to a vector @math such that @math is @math -sparse (or nearly @math -sparse) for some orthogonal transform @math . The goal is to output an approximation (in an @math sense) to @math in sublinear time. This problem has been well-studied in the special case that @math is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time @math . However, for transforms @math other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled. In this paper we give sublinear-time algorithms---running in time @math ---for solving the sparse recovery problem for orthogonal transforms @math that arise from orthogonal polynomials. More precisely, our algorithm works for any @math that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (and include Chebyshev and Legendre polynomials as special cases), and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector, although we note that vectors with random support have this property with high probability. Our approach is to give a very general reduction from the @math -sparse sparse recovery problem to the @math -sparse sparse recovery problem that holds for any flat orthogonal polynomial transform; then we solve this one-sparse recovery problem for transforms derived from Jacobi polynomials.
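To see what the reduction to the 1-sparse problem means in the simplest possible setting, consider the DFT warm-up below: if the transformed vector has a single nonzero entry, two time-domain queries already pin down its location and value. This is only a toy for intuition; the paper's contribution is making a 1-sparse primitive work for Jacobi transforms, which lack this clean exponential structure. The 1/N normalization of the inverse DFT is our convention.

```python
import numpy as np

# One-sparse DFT recovery from two queries. We model query access to
# x = F^{-1} x_hat, where x_hat[f_true] = c_true is the only nonzero entry
# and the inverse DFT carries a 1/N factor.

N = 1 << 10
f_true, c_true = 137, 2.5 - 1.0j
x = lambda n: (c_true / N) * np.exp(2j * np.pi * f_true * n / N)

ratio = x(1) / x(0)                                   # equals exp(2*pi*i*f/N)
f_rec = int(round(np.angle(ratio) * N / (2 * np.pi))) % N
c_rec = x(0) * N                                      # undo the 1/N factor

print(f_rec == f_true, np.isclose(c_rec, c_true))     # True True
```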
The sample complexity of OP transforms @math has been largely pinned down by the compressed sensing literature. For example, suppose that @math is any orthogonal and sufficiently flat matrix, in the sense that none of the entries of @math are too large. Then a result of Rudelson and Vershynin (and a sharpening of their result by Bourgain) shows that @math samples suffice to establish that the matrix @math (which is made up of @math sampled rows from @math ) has the Restricted Isometry Property (RIP) @cite_0 @cite_4 . Finding @math from samples of @math corresponds to the problem of finding an (approximately) @math -sparse vector @math from the linear measurements @math , which is precisely the compressed sensing problem. It is known that if @math satisfies the RIP, then this problem can be solved (for example with @math minimization) in time @math . We note that a very recent result due to B shows that this is essentially tight, in that @math queries (for a certain range of @math ) to @math are not enough to compute a @math -sparse approximation of @math @cite_16 . Bounds specific to the DFT over finite fields can be found in @cite_30 .
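The RIP-plus-ℓ1-minimization pipeline sketched in this paragraph can be demonstrated numerically at a small scale. Below, we subsample rows of an orthonormal DCT matrix (standing in for a generic flat orthogonal transform) and recover a sparse vector by basis pursuit posed as a linear program; the problem sizes are tiny so a generic LP solver suffices, and all specific constants are arbitrary choices of ours.

```python
import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, m, k = 64, 24, 3

M = dct(np.eye(N), axis=0, norm="ortho")      # orthonormal DCT-II matrix
rows = rng.choice(N, size=m, replace=False)   # random row subsample
A = M[rows, :]

x_true = np.zeros(N)
x_true[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true                                # m linear measurements of x

# Basis pursuit: minimize sum(t) subject to -t <= x <= t and A x = b,
# over the stacked variable z = [x, t].
c = np.concatenate([np.zeros(N), np.ones(N)])
A_eq = np.hstack([A, np.zeros((m, N))])
A_ub = np.vstack([np.hstack([np.eye(N), -np.eye(N)]),
                  np.hstack([-np.eye(N), -np.eye(N)])])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * (2 * N))

print("max recovery error:", np.max(np.abs(res.x[:N] - x_true)))
```

This route is sample-efficient but inherently spends at least linear time in the dimension, which is the gap that the sublinear-time algorithms discussed later aim to close.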
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_4", "@cite_30" ], "mid": [ "641843003", "2923536738", "2055064119", "2924908021" ], "abstract": [ "It is shown that for the n × n-Hadamard matrix (or, more generally, a bounded orthogonal matrix) the RIP-property for r-space vectors holds, with row restriction to a set S of size @math This bound represents a slight improvement over (Rudelson and Vershynin, Commun Pure Appl Math 61:1025–1045, 2008) in that the power of the logarithm is decreased by one unit.", "We give a short argument that yields a new lower bound on the number of subsampled rows from a bounded, orthonormal matrix necessary to form a matrix with the restricted isometry property. We show that for a @math Hadamard matrix, one cannot recover all @math -sparse vectors unless the number of subsampled rows is @math .", "This paper improves upon best-known guarantees for exact reconstruction of a sparse signal f from a small universal sample of Fourier measurements. The method for reconstruction that has recently gained momentum in the sparse approximation theory is to relax this highly nonconvex problem to a convex problem and then solve it as a linear program. We show that there exists a set of frequencies Ω such that one can exactly reconstruct every r-sparse signal f of length n from its frequencies in Ω, using the convex relaxation, and Ω has size k(r, n) = O(r log(n)·log 2 (r) log(r logn)) = O(r log 4 n ). A random set Ω satisfies this with high probability. This estimate is optimal within the log log n and log 3 r factors. We also give a relatively short argument for a similar problem with k(r, n) ≈ r[12 + 8 log(n r)] Gaussian measurements. We use methods of geometric functional analysis and probability theory in Banach spaces, which makes our arguments quite short.", "Let @math be an @math Fourier matrix over @math for some prime @math . We improve upon known lower bounds for the number of rows of @math that must be sampled so that the resulting matrix @math satisfies the restricted isometry property for @math -sparse vectors. This property states that @math is approximately @math for all @math -sparse vectors @math . In particular, if @math , we show that @math rows must be sampled to satisfy the restricted isometry property with constant probability." ] }
1907.08362
2963011296
In this paper we consider the following sparse recovery problem. We have query access to a vector @math such that @math is @math -sparse (or nearly @math -sparse) for some orthogonal transform @math . The goal is to output an approximation (in an @math sense) to @math in sublinear time. This problem has been well-studied in the special case that @math is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time @math . However, for transforms @math other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled. In this paper we give sublinear-time algorithms---running in time @math ---for solving the sparse recovery problem for orthogonal transforms @math that arise from orthogonal polynomials. More precisely, our algorithm works for any @math that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (and include Chebyshev and Legendre polynomials as special cases), and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector, although we note that vectors with random support have this property with high probability. Our approach is to give a very general reduction from the @math -sparse sparse recovery problem to the @math -sparse sparse recovery problem that holds for any flat orthogonal polynomial transform; then we solve this one-sparse recovery problem for transforms derived from Jacobi polynomials.
Rauhut and Ward @cite_13 show that, for Jacobi polynomial transforms, if the evaluation points are picked according to the Chebyshev measure, then with @math random measurements the corresponding matrix has the RIP (note that Foucart and Rauhut sample the evaluation points according to the measure of orthogonality for the Jacobi polynomials, which in general is not the Chebyshev measure). This result again does not give a sublinear-time algorithm, but it was used in the result of @cite_26 , which we describe below.
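The preconditioning trick underlying the Rauhut–Ward result can be checked empirically: orthonormal Legendre polynomials blow up near ±1 as the degree grows, but after sampling from the Chebyshev measure and multiplying by the weight (π/2)^(1/2) (1−x^2)^(1/4) the values stay uniformly bounded, which is what makes the bounded-orthonormal-system RIP machinery applicable. The exact normalization below follows our reading of the result and may differ from the paper's in inessential constants.

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(1)
xs = np.cos(np.pi * rng.random(5000))      # points drawn from the Chebyshev measure

for n in (1, 4, 16, 64, 256):
    coeffs = np.zeros(n + 1)
    coeffs[n] = np.sqrt((2 * n + 1) / 2)   # L2([-1,1])-orthonormal Legendre P_n
    vals = legval(xs, coeffs)
    weighted = np.sqrt(np.pi / 2) * (1 - xs**2) ** 0.25 * vals
    print(f"n={n:4d}  max|plain|={np.max(np.abs(vals)):7.2f}"
          f"  max|weighted|={np.max(np.abs(weighted)):5.2f}")
```

The unweighted maxima grow with the degree n, while the weighted values remain of constant order, matching the flatness requirement discussed above.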
{ "cite_N": [ "@cite_26", "@cite_13" ], "mid": [ "2963520188", "2141454789" ], "abstract": [ "In this paper, we propose a general strategy for rapidly computing sparse Legendre expansions. The resulting methods yield a new class of fast algorithms capable of approximating a given function f : [ź1, 1] ź ź with a near-optimal linear combination of s Legendre polynomials of degree ≤ N in just (slogN)O(1) @math -time. When s ź N, these algorithms exhibit sublinear runtime complexities in N, as opposed to traditional Ω(NlogN)-time methods for computing all of the first N Legendre coefficients of f. Theoretical as well as numerical results demonstrate the effectiveness of the proposed methods.", "We consider the problem of recovering polynomials that are sparse with respect to the basis of Legendre polynomials from a small number of random samples. In particular, we show that a Legendre s-sparse polynomial of maximal degree N can be recovered from [email protected]?slog^4(N) random samples that are chosen independently according to the Chebyshev probability measure [email protected](x)[email protected]^-^1(1-x^2)^-^1^ ^2dx. As an efficient recovery method, @?\"1-minimization can be used. We establish these results by verifying the restricted isometry property of a preconditioned random Legendre matrix. We then extend these results to a large class of orthogonal polynomial systems, including the Jacobi polynomials, of which the Legendre polynomials are a special case. Finally, we transpose these results into the setting of approximate recovery for functions in certain infinite-dimensional function spaces." ] }
1907.08362
2963011296
In this paper we consider the following sparse recovery problem. We have query access to a vector @math such that @math is @math -sparse (or nearly @math -sparse) for some orthogonal transform @math . The goal is to output an approximation (in an @math sense) to @math in sublinear time. This problem has been well-studied in the special case that @math is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time @math . However, for transforms @math other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled. In this paper we give sublinear-time algorithms---running in time @math ---for solving the sparse recovery problem for orthogonal transforms @math that arise from orthogonal polynomials. More precisely, our algorithm works for any @math that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (and include Chebyshev and Legendre polynomials as special cases), and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector, although we note that vectors with random support have this property with high probability. Our approach is to give a very general reduction from the @math -sparse sparse recovery problem to the @math -sparse sparse recovery problem that holds for any flat orthogonal polynomial transform; then we solve this one-sparse recovery problem for transforms derived from Jacobi polynomials.
While these approaches can give near-optimal sample complexity, they do not give sublinear-time algorithms. In fact, it is faster to compute @math exactly by computing @math , if we care only about the running time and not about sample complexity @cite_8 . Thus, we turn our attention to sublinear-time algorithms.
{ "cite_N": [ "@cite_8" ], "mid": [ "2149599446" ], "abstract": [ "Let @math denote a set of polynomials with complex coefficients. Let @math denote any set of sample points . For any @math , the discrete polynomial transform of @math (with respect to @math and @math ) is defined as the collection of sums, @math , where @math for some associated weight function @math . These sorts of transforms find important applications in areas such as medical imaging and signal processing. In this paper, we present fast algorithms for computing discrete orthogonal polynomial transforms. For a system of @math orthogonal polynomials of degree at most @math , we give an @math algorithm for computing a discrete polynomial transform at an arbitrary set of points instead of the @math operations required by direct evaluation. Our algorithm depends only on the fact that orthogonal polynomial sets satisfy a three-term recurrence and thus it may be applied to any such set of discretely sampled functions. In particular, sampled orthogonal polynomials generate the vector space of functions on a distance transitive graph. As a direct application of our work, we are able to give a fast algorithm for computing subspace decompositions of this vector space which respect the action of the symmetry group of such a graph. This has direct applications to treating computational bottlenecks in the spectral analysis of data on distance transitive graphs, and we discuss this in some detail." ] }
1907.08362
2963011296
In this paper we consider the following sparse recovery problem. We have query access to a vector @math such that @math is @math -sparse (or nearly @math -sparse) for some orthogonal transform @math . The goal is to output an approximation (in an @math sense) to @math in sublinear time. This problem has been well-studied in the special case that @math is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time @math . However, for transforms @math other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled. In this paper we give sublinear-time algorithms---running in time @math ---for solving the sparse recovery problem for orthogonal transforms @math that arise from orthogonal polynomials. More precisely, our algorithm works for any @math that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (and include Chebyshev and Legendre polynomials as special cases), and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector, although we note that vectors with random support have this property with high probability. Our approach is to give a very general reduction from the @math -sparse sparse recovery problem to the @math -sparse sparse recovery problem that holds for any flat orthogonal polynomial transform; then we solve this one-sparse recovery problem for transforms derived from Jacobi polynomials.
There have been several works generalizing and building on the sFFT results mentioned above. One direction is the extension to the multi-dimensional DFT (for example in @cite_27 @cite_9 ). Another direction is to apply the sFFT framework to orthogonal polynomials with similar structure. One example is Chebyshev polynomials and the Discrete Cosine Transform (DCT). It was observed in @cite_26 (also see Appendix ) that this case can be reduced to the sFFT in a black-box manner, solving the sparse recovery problem for Chebyshev polynomials and the DCT. A second example of OP transforms that can essentially be reduced to the sFFT is Legendre polynomials. @cite_26 seek to recover an unknown @math -term Legendre polynomial (with the highest degree limited to @math ) from samples. They give a sublinear two-phase algorithm: in the first phase, they reduce the @math -sparse-Legendre problem to the sFFT to identify a set of candidate Legendre polynomials. The second phase uses the RIP result for bounded orthonormal systems (BOS) to produce a matrix that is used to estimate the coefficients of the candidate Legendre polynomials. We note that the setting in that work is naturally continuous, while ours is discrete.
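The Chebyshev/DCT-to-sFFT reduction mentioned here rests on a standard identity: a DCT-II is the first half of the DFT of the input mirrored to twice the length, up to a twiddle factor. The check below verifies the identity numerically (the twiddle convention matches scipy's unnormalized DCT-II); it is this identity that lets DCT queries be simulated by FFT queries in a black-box way.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(2)
N = 128
x = rng.standard_normal(N)

y = np.concatenate([x, x[::-1]])               # even (mirror) extension to length 2N
Y = np.fft.fft(y)[:N]
twiddle = np.exp(-1j * np.pi * np.arange(N) / (2 * N))
dct_via_fft = np.real(twiddle * Y)

print(np.allclose(dct_via_fft, dct(x, type=2)))   # True
```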
{ "cite_N": [ "@cite_9", "@cite_27", "@cite_26" ], "mid": [ "2903067870", "2082051116", "2963520188" ], "abstract": [ "", "We give an algorithm for l2 l2 sparse recovery from Fourier measurements using O(klog N) samples, matching the lower bound of Do Ba-Indyk-Price-Woodruff'10 for non-adaptive algorithms up to constant factors for any k≤ N1-δ. The algorithm runs in tilde O(N) time. Our algorithm extends to higher dimensions, leading to sample complexity of Od(klog N), which is optimal up to constant factors for any d = O(1). These are the first sample optimal algorithms for these problems. A preliminary experimental evaluation indicates that our algorithm has empirical sampling complexity comparable to that of other recovery methods known in the literature, while providing strong provable guarantees on the recovery quality.", "In this paper, we propose a general strategy for rapidly computing sparse Legendre expansions. The resulting methods yield a new class of fast algorithms capable of approximating a given function f : [ź1, 1] ź ź with a near-optimal linear combination of s Legendre polynomials of degree ≤ N in just (slogN)O(1) @math -time. When s ź N, these algorithms exhibit sublinear runtime complexities in N, as opposed to traditional Ω(NlogN)-time methods for computing all of the first N Legendre coefficients of f. Theoretical as well as numerical results demonstrate the effectiveness of the proposed methods." ] }
1907.08362
2963011296
In this paper we consider the following sparse recovery problem. We have query access to a vector @math such that @math is @math -sparse (or nearly @math -sparse) for some orthogonal transform @math . The goal is to output an approximation (in an @math sense) to @math in sublinear time. This problem has been well-studied in the special case that @math is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time @math . However, for transforms @math other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled. In this paper we give sublinear-time algorithms---running in time @math ---for solving the sparse recovery problem for orthogonal transforms @math that arise from orthogonal polynomials. More precisely, our algorithm works for any @math that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (and include Chebyshev and Legendre polynomials as special cases), and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector, although we note that vectors with random support have this property with high probability. Our approach is to give a very general reduction from the @math -sparse sparse recovery problem to the @math -sparse sparse recovery problem that holds for any flat orthogonal polynomial transform; then we solve this one-sparse recovery problem for transforms derived from Jacobi polynomials.
@cite_18 study higher dimensions and obtain sublinear-time algorithms for more general harmonic expansions in multiple dimensions. The results of @cite_18 complement our work. More precisely, that work shows how to use any algorithm for a univariate polynomial transform to design an algorithm for a multi-variate polynomial transform where the multi-variate polynomials are products of univariate polynomials in the individual variables. Thus our improvements for univariate polynomial transforms can be used with @cite_18 .
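The reason univariate solvers lift to the multivariate product bases handled by @cite_18 is the tensor-product structure: the transform matrix of a product basis is a Kronecker product of the one-dimensional transform matrices, so the full transform factors into 1-D transforms applied along each axis. The small check below illustrates this with the DCT as the (arbitrary) 1-D transform; any univariate OP transform could be substituted.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(3)
n = 16
X = rng.standard_normal((n, n))              # a 2-D grid of values

M = dct(np.eye(n), axis=0, norm="ortho")     # 1-D orthonormal transform matrix
via_kron = (np.kron(M, M) @ X.reshape(-1)).reshape(n, n)
via_axes = dct(dct(X, axis=0, norm="ortho"), axis=1, norm="ortho")

print(np.allclose(via_kron, via_axes))       # True: (M ⊗ M) vec(X) = M X Mᵀ
```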
{ "cite_N": [ "@cite_18" ], "mid": [ "2885396681" ], "abstract": [ "We develop fast and memory efficient numerical methods for learning functions of many variables that admit sparse representations in terms of general bounded orthonormal tensor product bases. Such functions appear in many applications including, e.g., various Uncertainty Quantification(UQ) problems involving the solution of parametric PDE that are approximately sparse in Chebyshev or Legendre product bases. We expect that our results provide a starting point for a new line of research on sublinear-time solution techniques for UQ applications of the type above which will eventually be able to scale to significantly higher-dimensional problems than what are currently computationally feasible. More concretely, let @math be a finite Bounded Orthonormal Product Basis (BOPB) of cardinality @math . We will develop methods that approximate any function @math that is sparse in the BOPB, that is, @math of the form @math with @math of cardinality @math . Our method has a runtime of just @math , uses only @math function evaluations on a fixed and nonadaptive grid, and not more than @math bits of memory. For @math , the runtime @math will be less than what is required to simply enumerate the elements of the basis @math ; thus our method is the first approach applicable in a general BOPB framework that falls into the class referred to as \"sublinear-time\". This and the similarly reduced sample and memory requirements set our algorithm apart from previous works based on standard compressive sensing algorithms such as basis pursuit which typically store and utilize full intermediate basis representations of size @math ." ] }
1907.08362
2963011296
In this paper we consider the following sparse recovery problem. We have query access to a vector @math such that @math is @math -sparse (or nearly @math -sparse) for some orthogonal transform @math . The goal is to output an approximation (in an @math sense) to @math in sublinear time. This problem has been well-studied in the special case that @math is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time @math . However, for transforms @math other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled. In this paper we give sublinear-time algorithms---running in time @math ---for solving the sparse recovery problem for orthogonal transforms @math that arise from orthogonal polynomials. More precisely, our algorithm works for any @math that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (and include Chebyshev and Legendre polynomials as special cases), and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector, although we note that vectors with random support have this property with high probability. Our approach is to give a very general reduction from the @math -sparse sparse recovery problem to the @math -sparse sparse recovery problem that holds for any flat orthogonal polynomial transform; then we solve this one-sparse recovery problem for transforms derived from Jacobi polynomials.
Finally, there are sparse OP transforms based on Prony's method. The work @cite_20 extends Prony's method to a very general setting, including Jacobi polynomials, and gives an algorithm that requires only @math queries to recover exactly @math -sparse polynomials. However, these general results work only for exact sparsity and are in general not robust to noise. There has been work extending and modifying these techniques to settings with noise (for example, @cite_11 @cite_31 ), but to the best of our knowledge the only provable results for noise are for either the sFFT or closely related polynomial families. We note that @cite_32 presents a Prony-like algorithm for Legendre and Gegenbauer polynomials and demonstrates empirically that this algorithm is robust to noise, although they do not address the question theoretically.
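For reference, here is the classical Prony method in its vanilla form, which the cited generalizations build on: from 2s equispaced samples of f(j) = Σᵢ cᵢ zᵢʲ, the nodes zᵢ are the roots of a degree-s polynomial obtained from a small Hankel system, after which the coefficients follow from a Vandermonde solve. The sketch is the textbook construction, not any specific cited algorithm, and, as the text notes, this exact-arithmetic version is brittle under noise.

```python
import numpy as np

rng = np.random.default_rng(4)
s = 3
z_true = np.exp(2j * np.pi * rng.choice(100, size=s, replace=False) / 100)
c_true = rng.standard_normal(s) + 1j * rng.standard_normal(s)

# 2s equispaced samples f(j) = sum_i c_i * z_i**j suffice for exact recovery.
samples = np.array([np.sum(c_true * z_true**j) for j in range(2 * s)])

# Prony polynomial p(z) = prod_i (z - z_i): its coefficients annihilate the
# sample sequence, giving the Hankel system H a = -h.
H = np.array([[samples[i + j] for j in range(s)] for i in range(s)])
h = samples[s:2 * s]
a = np.linalg.solve(H, -h)

roots = np.roots(np.concatenate(([1.0], a[::-1])))   # recovered nodes z_i
V = np.vander(roots, N=2 * s, increasing=True).T     # rows: j -> z_i**j
c_rec = np.linalg.lstsq(V, samples, rcond=None)[0]   # recovered coefficients

print("max node error:", max(min(abs(r - z) for z in z_true) for r in roots))
print("fit residual:  ", np.max(np.abs(V @ c_rec - samples)))
```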
{ "cite_N": [ "@cite_31", "@cite_32", "@cite_20", "@cite_11" ], "mid": [ "1978356474", "2211482116", "2022821885", "2065656021" ], "abstract": [ "The recovery of signal parameters from noisy sampled data is a fundamental problem in digital signal processing. In this paper, we consider the following spectral analysis problem: Let f be a real-valued sum of complex exponentials. Determine all parameters of f, i.e., all different frequencies, all coefficients, and the number of exponentials from finitely many equispaced sampled data of f. This is a nonlinear inverse problem. In this paper, we present new results on an approximate Prony method (APM) which is based on [1]. In contrast to [1], we apply matrix perturbation theory such that we can describe the properties and the numerical behavior of the APM in detail. The number of sampled data acts as regularization parameter. The first part of APM estimates the frequencies and the second part solves an overdetermined linear Vandermonde-type system in a stable way. We compare the first part of APM also with the known ESPRIT method. The second part is related to the nonequispaced fast Fourier transform (NFFT). Numerical experiments show the performance of our method.", "We present a new deterministic approximate algorithm for the reconstruction of sparse Legendre expansions from a small number of given samples. Using asymptotic properties of Legendre polynomials, this reconstruction is based on Prony-like methods. The method proposed is robust with respect to noisy sampled data. Furthermore we show that the suggested method can be extended to the reconstruction of sparse Gegenbauer expansions of low positive order.", "We derive a new generalization of Prony?s method to reconstruct M-sparse expansions of (generalized) eigenfunctions of linear operators from only suitable values in a deterministic way. The proposed method covers the well-known reconstruction methods for M-sparse sums of exponentials as well as for the interpolation of M-sparse polynomials by using special linear operators in . Further, we can derive new reconstruction formulas for M-sparse expansions of orthogonal polynomials using the Sturm?Liouville operator. The method is also applied to the recovery of M-sparse vectors in finite-dimensional vector spaces.", "A study of a matrix pencil method for estimating frequencies and damping factors of exponentially damped and or undamped sinusoids in noise is presented. Comparison of this method to a polynomial method (SVD-Prony method) shows that the matrix pencil method and the polynomial method are two special cases of a matrix prediction approach and that the pencil method is more efficient in computation and less restrictive about signal probes. It is found through perturbation analysis and simulation that, for signals with unknown damping factors, the pencil method is less sensitive to noise than the polynomial method. An expression of the Cramer-Rao bound for the exponential signals is presented. >" ] }
1907.08302
2964151964
With the demand to process ever-growing data volumes, a variety of new data stream processing frameworks have been developed. Moving an implementation from one such system to another, e.g., for performance reasons, requires adapting existing applications to new interfaces. Apache Beam addresses these high substitution costs by providing an abstraction layer that enables executing programs on any of the supported streaming frameworks. In this paper, we present a novel benchmark architecture for comparing the performance impact of using Apache Beam on three streaming frameworks: Apache Spark Streaming, Apache Flink, and Apache Apex. We find significant performance penalties when using Apache Beam for application development in the surveyed systems. Overall, usage of Apache Beam for the examined streaming applications caused a high variance of query execution times with a slowdown of up to a factor of 58 compared to queries developed without the abstraction layer. All developed benchmark artifacts are publicly available to ensure reproducible results.
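For readers unfamiliar with Apache Beam's programming model, the following minimal pipeline (Python SDK) illustrates the portability idea under evaluation: the same pipeline code targets different engines purely through the runner option. It requires the apache-beam package, and the benchmark's actual workloads are stateful streaming queries, considerably more involved than this bounded word count.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

lines = ["to be or not to be", "that is the question"]

# Swapping the runner flag (e.g., --runner=FlinkRunner or --runner=SparkRunner)
# retargets the same pipeline to another engine; which runners are available
# depends on the SDK and version.
opts = PipelineOptions(["--runner=DirectRunner"])

with beam.Pipeline(options=opts) as p:
    (p
     | "Read" >> beam.Create(lines)
     | "Split" >> beam.FlatMap(str.split)
     | "Pair" >> beam.Map(lambda word: (word, 1))
     | "Count" >> beam.CombinePerKey(sum)
     | "Print" >> beam.Map(print))
```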
With respect to benchmarking DSPSs in general, the Linear Road benchmark by @cite_42 is a very well-known work. It is an application benchmark that provides a benchmarking toolkit consisting of a data generator, a data sender, and a result validator. The underlying idea of the benchmark is a variable tolling system for a metropolitan area covering multiple expressways with moving vehicles. The amount of accumulated tolls depends on dynamic aspects of the traffic situation, such as congestion and accident proximity.
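To make variable tolling concrete, here is a toy version of a Linear Road-style charging rule: tolls apply only to congested segments and are waived near accidents. The quadratic form and the thresholds follow common descriptions of the benchmark, but treat the exact constants as our assumption rather than the official specification.

```python
# Toy Linear Road-style segment toll (in cents). Constants are illustrative
# assumptions, not the benchmark's official specification.
def segment_toll(avg_speed_mph, num_vehicles, accident_nearby):
    congested = avg_speed_mph < 40 and num_vehicles > 150
    if accident_nearby or not congested:
        return 0
    return 2 * (num_vehicles - 150) ** 2

print(segment_toll(35, 180, accident_nearby=False))  # 1800
print(segment_toll(55, 180, accident_nearby=False))  # 0 (not congested)
print(segment_toll(35, 180, accident_nearby=True))   # 0 (accident waiver)
```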
{ "cite_N": [ "@cite_42" ], "mid": [ "2112215401" ], "abstract": [ "This paper specifies the Linear Road Benchmark for Stream Data Management Systems (SDMS). Stream Data Management Systems process streaming data by executing continuous and historical queries while producing query results in real-time. This benchmark makes it possible to compare the performance characteristics of SDMS' relative to each other and to alternative (e.g., Relational Database) systems. Linear Road has been endorsed as an SDMS benchmark by the developers of both the Aurora [1] (out of Brandeis University, Brown University and MIT) and STREAM [8] (out of Stanford University) stream systems. Linear Road simulates a toll system for the motor vehicle expressways of a large metropolitan area. The tolling system uses \"variable tolling\" [6, 11, 9]: an increasingly prevalent tolling technique that uses such dynamic factors as traffic congestion and accident proximity to calculate toll charges. Linear Road specifies a variable tolling system for a fictional urban area including such features as accident detection and alerts, traffic congestion measurements, toll calculations and historical queries. After specifying the benchmark, we describe experimental results involving two implementations: one using a commercially available Relational Database and the other using Aurora. Our results show that a dedicated Stream Data Management System can outperform a Relational Database by at least a factor of 5 on streaming data applications." ] }